[jira] [Commented] (HDFS-10633) DiskBalancer : Add the description for the new setting dfs.disk.balancer.plan.threshold.percent in HDFSDiskbalancer.md

2016-07-15 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-10633?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15380524#comment-15380524
 ] 

Hadoop QA commented on HDFS-10633:
--

| (/) *{color:green}+1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
19s{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  8m 
22s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m  
0s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
58s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
18s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 11m 12s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:9560f25 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12818309/HDFS-10633.002.patch |
| JIRA Issue | HDFS-10633 |
| Optional Tests |  asflicense  mvnsite  |
| uname | Linux 156efb419c6f 3.13.0-36-lowlatency #63-Ubuntu SMP PREEMPT Wed 
Sep 3 21:56:12 UTC 2014 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh 
|
| git revision | trunk / 5b4a708 |
| modules | C: hadoop-hdfs-project/hadoop-hdfs U: 
hadoop-hdfs-project/hadoop-hdfs |
| Console output | 
https://builds.apache.org/job/PreCommit-HDFS-Build/16077/console |
| Powered by | Apache Yetus 0.4.0-SNAPSHOT   http://yetus.apache.org |


This message was automatically generated.



> DiskBalancer : Add the description for the new setting 
> dfs.disk.balancer.plan.threshold.percent in HDFSDiskbalancer.md
> --
>
> Key: HDFS-10633
> URL: https://issues.apache.org/jira/browse/HDFS-10633
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: documentation
>Affects Versions: 2.9.0
>Reporter: Yiqun Lin
>Assignee: Yiqun Lin
>Priority: Minor
> Attachments: HDFS-10633.001.patch, HDFS-10633.002.patch
>
>
> HDFS-10600 introduced a new setting, 
> {{dfs.disk.balancer.plan.threshold.percent}}, in the diskbalancer. This 
> setting controls whether we need to do any balancing on the volume set, but 
> it has not yet been documented in {{HDFSDiskbalancer.md}}.
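For illustration only, a minimal sketch of how such a threshold setting can 
gate plan generation. The key name comes from this issue; the 10.0 default, 
the helper shape, and the comparison logic are assumptions, not the actual 
DiskBalancer code:

{code}
import org.apache.hadoop.conf.Configuration;

public class ThresholdSketch {
  // Sketch only: default value and gating logic are assumed.
  static boolean shouldBalance(Configuration conf, double densitySpreadPercent) {
    double threshold = conf.getDouble(
        "dfs.disk.balancer.plan.threshold.percent", 10.0);
    // Plan moves for a volume set only if the spread between its
    // volumes' data densities exceeds the configured threshold.
    return densitySpreadPercent > threshold;
  }
}
{code}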



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-10633) DiskBalancer : Add the description for the new setting dfs.disk.balancer.plan.threshold.percent in HDFSDiskbalancer.md

2016-07-15 Thread Yiqun Lin (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-10633?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Yiqun Lin updated HDFS-10633:
-
Attachment: HDFS-10633.002.patch

Thanks [~ajisakaa] for the review. Because the file {{HDFSDiskbalancer.md}} was 
updated in HDFS-10639, I would like to post a new patch based on the latest 
code. In addition, I made a minor change because I found that the '|' was 
missing at the end of a line.

> DiskBalancer : Add the description for the new setting 
> dfs.disk.balancer.plan.threshold.percent in HDFSDiskbalancer.md
> --
>
> Key: HDFS-10633
> URL: https://issues.apache.org/jira/browse/HDFS-10633
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: documentation
>Affects Versions: 2.9.0
>Reporter: Yiqun Lin
>Assignee: Yiqun Lin
>Priority: Minor
> Attachments: HDFS-10633.001.patch, HDFS-10633.002.patch
>
>
> HDFS-10600 introduced a new setting, 
> {{dfs.disk.balancer.plan.threshold.percent}}, in the diskbalancer. This 
> setting controls whether we need to do any balancing on the volume set, but 
> it has not yet been documented in {{HDFSDiskbalancer.md}}.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-10639) Fix typos in HDFSDiskbalancer.md

2016-07-15 Thread Yiqun Lin (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-10639?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15380514#comment-15380514
 ] 

Yiqun Lin commented on HDFS-10639:
--

Thanks [~ajisakaa] for the quick review and commit!

> Fix typos in HDFSDiskbalancer.md
> 
>
> Key: HDFS-10639
> URL: https://issues.apache.org/jira/browse/HDFS-10639
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: documentation
>Reporter: Akira Ajisaka
>Assignee: Yiqun Lin
>Priority: Trivial
>  Labels: newbie
> Fix For: 3.0.0-alpha2
>
> Attachments: HDFS-10639.001.patch
>
>
> * "sepcifies" -> "specifies"
> * "move.A" -> "move. A"
> * "deafult" -> "default"
> * "themove" -> "the move"
> * "accomodate" -> "accommodate"
> * "replacement.This" -> "replacement. This"



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-10629) Federation Router

2016-07-15 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-10629?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15380497#comment-15380497
 ] 

Hadoop QA commented on HDFS-10629:
--

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
21s{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:red}-1{color} | {color:red} test4tests {color} | {color:red}  0m  
0s{color} | {color:red} The patch doesn't appear to include any new or modified 
tests. Please justify why no new tests are needed for this patch. Also please 
list what manual steps were performed to verify this patch. {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  7m 
28s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
46s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
32s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
52s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  0m 
12s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
45s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
54s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
47s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
43s{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} javac {color} | {color:red}  0m 43s{color} 
| {color:red} hadoop-hdfs-project_hadoop-hdfs generated 1 new + 36 unchanged - 
0 fixed = 37 total (was 36) {color} |
| {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange}  
0m 31s{color} | {color:orange} hadoop-hdfs-project/hadoop-hdfs: The patch 
generated 213 new + 441 unchanged - 0 fixed = 654 total (was 441) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
49s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  0m 
10s{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} whitespace {color} | {color:red}  0m  
0s{color} | {color:red} The patch has 1 line(s) that end in whitespace. Use git 
apply --whitespace=fix. {color} |
| {color:red}-1{color} | {color:red} findbugs {color} | {color:red}  1m 
57s{color} | {color:red} hadoop-hdfs-project/hadoop-hdfs generated 18 new + 0 
unchanged - 0 fixed = 18 total (was 0) {color} |
| {color:red}-1{color} | {color:red} javadoc {color} | {color:red}  0m 
56s{color} | {color:red} hadoop-hdfs-project_hadoop-hdfs generated 9 new + 7 
unchanged - 0 fixed = 16 total (was 7) {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 60m 48s{color} 
| {color:red} hadoop-hdfs in the patch failed. {color} |
| {color:red}-1{color} | {color:red} asflicense {color} | {color:red}  0m 
20s{color} | {color:red} The patch generated 1 ASF License warnings. {color} |
| {color:black}{color} | {color:black} {color} | {color:black} 81m 18s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| FindBugs | module:hadoop-hdfs-project/hadoop-hdfs |
|  |  Unread public/protected field:At NamenodeStatusReport.java:[line 30] |
|  |  Unread public/protected field:At NamenodeStatusReport.java:[line 43] |
|  |  Unread public/protected field:At NamenodeStatusReport.java:[line 34] |
|  |  Unread public/protected field:At NamenodeStatusReport.java:[line 33] |
|  |  Unread public/protected field:At NamenodeStatusReport.java:[line 39] |
|  |  Unread public/protected field:At NamenodeStatusReport.java:[line 40] |
|  |  Unread public/protected field:At NamenodeStatusReport.java:[line 38] |
|  |  Unread public/protected field:At NamenodeStatusReport.java:[line 45] |
|  |  Unread public/protected field:At NamenodeStatusReport.java:[line 46] |
|  |  Unread public/protected field:At NamenodeStatusReport.java:[line 44] |
|  |  Unread public/protected field:At NamenodeStatusReport.java:[line 35] |
|  |  Unread public/protected field:At NamenodeStatusReport.java:[line 50] |
|  |  Unread public/protected field:At NamenodeStatusReport.java:[line 47] |
|  |  Unread public/protected field:At NamenodeStatusReport.java:[line 29] |
|  |  Sequence of calls to java.util.concurren

[jira] [Commented] (HDFS-10639) Fix typos in HDFSDiskbalancer.md

2016-07-15 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-10639?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15380493#comment-15380493
 ] 

Hudson commented on HDFS-10639:
---

SUCCESS: Integrated in Hadoop-trunk-Commit #10113 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/10113/])
HDFS-10639. Fix typos in HDFSDiskbalancer.md. Contributed by Yiqun Lin. 
(aajisaka: rev 5b4a708704b7f6172f087d6cfe43114dfab57f53)
* hadoop-hdfs-project/hadoop-hdfs/src/site/markdown/HDFSDiskbalancer.md


> Fix typos in HDFSDiskbalancer.md
> 
>
> Key: HDFS-10639
> URL: https://issues.apache.org/jira/browse/HDFS-10639
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: documentation
>Reporter: Akira Ajisaka
>Assignee: Yiqun Lin
>Priority: Trivial
>  Labels: newbie
> Fix For: 3.0.0-alpha2
>
> Attachments: HDFS-10639.001.patch
>
>
> * "sepcifies" -> "specifies"
> * "move.A" -> "move. A"
> * "deafult" -> "default"
> * "themove" -> "the move"
> * "accomodate" -> "accommodate"
> * "replacement.This" -> "replacement. This"



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-10639) Fix typos in HDFSDiskbalancer.md

2016-07-15 Thread Akira Ajisaka (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-10639?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Akira Ajisaka updated HDFS-10639:
-
   Resolution: Fixed
 Hadoop Flags: Reviewed
Fix Version/s: 3.0.0-alpha2
   Status: Resolved  (was: Patch Available)

Committed this to trunk. Thanks [~linyiqun] for the contribution!

> Fix typos in HDFSDiskbalancer.md
> 
>
> Key: HDFS-10639
> URL: https://issues.apache.org/jira/browse/HDFS-10639
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: documentation
>Reporter: Akira Ajisaka
>Assignee: Yiqun Lin
>Priority: Trivial
>  Labels: newbie
> Fix For: 3.0.0-alpha2
>
> Attachments: HDFS-10639.001.patch
>
>
> * "sepcifies" -> "specifies"
> * "move.A" -> "move. A"
> * "deafult" -> "default"
> * "themove" -> "the move"
> * "accomodate" -> "accommodate"
> * "replacement.This" -> "replacement. This"



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-10639) Fix typos in HDFSDiskbalancer.md

2016-07-15 Thread Akira Ajisaka (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-10639?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15380483#comment-15380483
 ] 

Akira Ajisaka commented on HDFS-10639:
--

+1, committing this.

> Fix typos in HDFSDiskbalancer.md
> 
>
> Key: HDFS-10639
> URL: https://issues.apache.org/jira/browse/HDFS-10639
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: documentation
>Reporter: Akira Ajisaka
>Assignee: Yiqun Lin
>Priority: Trivial
>  Labels: newbie
> Attachments: HDFS-10639.001.patch
>
>
> * "sepcifies" -> "specifies"
> * "move.A" -> "move. A"
> * "deafult" -> "default"
> * "themove" -> "the move"
> * "accomodate" -> "accommodate"
> * "replacement.This" -> "replacement. This"



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-10629) Federation Router

2016-07-15 Thread Jason Kace (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-10629?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jason Kace updated HDFS-10629:
--
Attachment: HDFS-10629.001.patch

Updating the patch to fix some Jenkins errors and simplifying a few parts for 
easier review.

> Federation Router
> -
>
> Key: HDFS-10629
> URL: https://issues.apache.org/jira/browse/HDFS-10629
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: hdfs
>Reporter: Inigo Goiri
>Assignee: Jason Kace
> Attachments: HDFS-10629.000.patch, HDFS-10629.001.patch
>
>
> Component that routes calls from the clients to the right Namespace. It 
> implements {{ClientProtocol}}.
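For a rough picture, a hypothetical sketch of routing a single 
{{ClientProtocol}} call; the resolver interface and every name below are 
invented for illustration and are not the API in the attached patches:

{code}
import java.io.IOException;
import org.apache.hadoop.fs.permission.FsPermission;
import org.apache.hadoop.hdfs.protocol.ClientProtocol;

// Sketch only: a router forwards each call to the namespace owning the path.
class RouterSketch {
  // Hypothetical resolver from a path to the owning namespace's proxy.
  interface NamespaceResolver {
    ClientProtocol resolve(String src) throws IOException;
  }

  private final NamespaceResolver resolver;

  RouterSketch(NamespaceResolver resolver) {
    this.resolver = resolver;
  }

  boolean mkdirs(String src, FsPermission masked, boolean createParent)
      throws IOException {
    // Delegate to the ClientProtocol proxy of the right namespace.
    return resolver.resolve(src).mkdirs(src, masked, createParent);
  }
}
{code}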



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-10626) VolumeScanner prints incorrect IOException in reportBadBlocks operation

2016-07-15 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-10626?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15380455#comment-15380455
 ] 

Hadoop QA commented on HDFS-10626:
--

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
28s{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:red}-1{color} | {color:red} test4tests {color} | {color:red}  0m  
0s{color} | {color:red} The patch doesn't appear to include any new or modified 
tests. Please justify why no new tests are needed for this patch. Also please 
list what manual steps were performed to verify this patch. {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  7m 
29s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
50s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
27s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
54s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  0m 
13s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
45s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
57s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  1m 
 5s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
57s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
57s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
28s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m  
6s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  0m 
11s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  2m  
5s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
54s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 74m 
57s{color} | {color:green} hadoop-hdfs in the patch passed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
18s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 96m 20s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:9560f25 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12818285/HDFS-10626.003.patch |
| JIRA Issue | HDFS-10626 |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  findbugs  checkstyle  |
| uname | Linux 88f15f110a2d 3.13.0-36-lowlatency #63-Ubuntu SMP PREEMPT Wed 
Sep 3 21:56:12 UTC 2014 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh 
|
| git revision | trunk / ea9f437 |
| Default Java | 1.8.0_91 |
| findbugs | v3.0.0 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-HDFS-Build/16074/testReport/ |
| modules | C: hadoop-hdfs-project/hadoop-hdfs U: 
hadoop-hdfs-project/hadoop-hdfs |
| Console output | 
https://builds.apache.org/job/PreCommit-HDFS-Build/16074/console |
| Powered by | Apache Yetus 0.4.0-SNAPSHOT   http://yetus.apache.org |


This message was automatically generated.



> VolumeScanner prints incorrect IOException in reportBadBlocks operation
> ---
>
> Key: HDFS-10626
> URL: https://issues.apache.org/jira/browse/HDFS-10626
> Project: Hadoop HDFS
>  Issue Type: Bug
>Reporter: Yiqun Lin
>Assignee: Yiqun Lin
>Priority: Minor
>  Labels: supportability
> Attachments: HDFS-10626.001.patch, HD

[jira] [Commented] (HDFS-10639) Fix typos in HDFSDiskbalancer.md

2016-07-15 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-10639?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15380448#comment-15380448
 ] 

Hadoop QA commented on HDFS-10639:
--

| (/) *{color:green}+1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
20s{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  8m 
16s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m  
0s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
59s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
17s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 11m  8s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:9560f25 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12818290/HDFS-10639.001.patch |
| JIRA Issue | HDFS-10639 |
| Optional Tests |  asflicense  mvnsite  |
| uname | Linux e2f0b60ffbbf 3.13.0-36-lowlatency #63-Ubuntu SMP PREEMPT Wed 
Sep 3 21:56:12 UTC 2014 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh 
|
| git revision | trunk / da456ff |
| modules | C: hadoop-hdfs-project/hadoop-hdfs U: 
hadoop-hdfs-project/hadoop-hdfs |
| Console output | 
https://builds.apache.org/job/PreCommit-HDFS-Build/16075/console |
| Powered by | Apache Yetus 0.4.0-SNAPSHOT   http://yetus.apache.org |


This message was automatically generated.



> Fix typos in HDFSDiskbalancer.md
> 
>
> Key: HDFS-10639
> URL: https://issues.apache.org/jira/browse/HDFS-10639
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: documentation
>Reporter: Akira Ajisaka
>Assignee: Yiqun Lin
>Priority: Trivial
>  Labels: newbie
> Attachments: HDFS-10639.001.patch
>
>
> * "sepcifies" -> "specifies"
> * "move.A" -> "move. A"
> * "deafult" -> "default"
> * "themove" -> "the move"
> * "accomodate" -> "accommodate"
> * "replacement.This" -> "replacement. This"



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-10584) Allow long-running Mover tool to login with keytab

2016-07-15 Thread Andrew Wang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-10584?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Andrew Wang updated HDFS-10584:
---
Target Version/s: 3.0.0-alpha2  (was: 3.0.0-alpha1)

> Allow long-running Mover tool to login with keytab
> --
>
> Key: HDFS-10584
> URL: https://issues.apache.org/jira/browse/HDFS-10584
> Project: Hadoop HDFS
>  Issue Type: New Feature
>  Components: balancer & mover
>Reporter: Rakesh R
>Assignee: Rakesh R
> Attachments: HDFS-10584-00.patch, HDFS-10584-01.patch
>
>
> The idea of this jira is to give the {{mover}} tool the ability to log in 
> from a keytab. That way, the RPC client would re-login from the keytab after 
> expiration, which means the process could remain authenticated indefinitely. 
> With some people wanting to run the mover non-stop in "daemon mode", this 
> seems a reasonable feature to add. The balancer was recently enhanced with 
> this feature.
> Thanks [~zhz] for the offline discussions.
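For reference, a minimal sketch of keytab login as it is commonly done in 
Hadoop daemons; the two config key names below are hypothetical placeholders, 
not the keys proposed in the patch:

{code}
import java.io.IOException;
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.security.SecurityUtil;
import org.apache.hadoop.security.UserGroupInformation;

public class MoverLoginSketch {
  // Sketch only: the key names are placeholders for illustration.
  static void loginIfNeeded(Configuration conf) throws IOException {
    UserGroupInformation.setConfiguration(conf);
    if (UserGroupInformation.isSecurityEnabled()) {
      // After this, the RPC layer can re-login from the keytab when the
      // ticket expires, keeping a long-running process authenticated.
      SecurityUtil.login(conf, "hypothetical.mover.keytab.file",
          "hypothetical.mover.kerberos.principal");
    }
  }
}
{code}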



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-10599) DiskBalancer: Execute CLI via Shell

2016-07-15 Thread Andrew Wang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-10599?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Andrew Wang updated HDFS-10599:
---
Target Version/s: 3.0.0-alpha2  (was: 3.0.0-alpha1)

> DiskBalancer: Execute CLI via Shell 
> 
>
> Key: HDFS-10599
> URL: https://issues.apache.org/jira/browse/HDFS-10599
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: balancer & mover
>Affects Versions: 3.0.0-alpha1
>Reporter: Anu Engineer
>Assignee: Anu Engineer
>
> The DiskBalancer tests invoke CLI functions directly instead of going through 
> the shell. This is not representative of how end users use it. To provide 
> good unit test coverage, we need tests where the DiskBalancer CLI is invoked 
> via the shell.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-10620) StringBuilder created and appended even if logging is disabled

2016-07-15 Thread Andrew Wang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-10620?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Andrew Wang updated HDFS-10620:
---
Target Version/s: 3.0.0-alpha2  (was: 3.0.0-alpha1)

> StringBuilder created and appended even if logging is disabled
> --
>
> Key: HDFS-10620
> URL: https://issues.apache.org/jira/browse/HDFS-10620
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: namenode
>Affects Versions: 2.6.4
>Reporter: Staffan Friberg
> Attachments: HDFS-10620.001.patch
>
>
> In BlockManager.addToInvalidates, the StringBuilder is appended to during the 
> delete even if logging isn't active.
> We could avoid allocating the StringBuilder as well, but it is not clear that 
> would be worth adding null handling to the code.
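The usual fix pattern for this class of problem is to guard the message 
construction on the log level, sketched below; the logger and collection names 
are assumptions, not the exact {{BlockManager}} code:

{code}
import java.util.List;
import org.apache.hadoop.hdfs.protocol.Block;
import org.slf4j.Logger;

public class LogGuardSketch {
  // Sketch only: skip the StringBuilder work entirely when the
  // target log level is disabled.
  static void logInvalidated(Logger log, List<Block> toInvalidate) {
    if (log.isInfoEnabled()) {
      StringBuilder buf = new StringBuilder("Invalidated blocks:");
      for (Block b : toInvalidate) {
        buf.append(' ').append(b);
      }
      log.info(buf.toString());
    }
  }
}
{code}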



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-10534) NameNode WebUI should display DataNode usage histogram

2016-07-15 Thread Andrew Wang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-10534?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Andrew Wang updated HDFS-10534:
---
Target Version/s: 3.0.0-alpha2  (was: 2.8.0, 2.7.3, 2.9.0, 2.6.5, 
3.0.0-alpha1)

> NameNode WebUI should display DataNode usage histogram
> --
>
> Key: HDFS-10534
> URL: https://issues.apache.org/jira/browse/HDFS-10534
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: namenode, ui
>Reporter: Zhe Zhang
>Assignee: Kai Sasaki
> Attachments: HDFS-10534.01.patch, HDFS-10534.02.patch, 
> HDFS-10534.03.patch, HDFS-10534.04.patch, HDFS-10534.05.patch, 
> HDFS-10534.06.patch, HDFS-10534.07.patch, HDFS-10534.08.patch, Screen Shot 
> 2016-06-23 at 6.25.50 AM.png, Screen Shot 2016-07-07 at 23.29.14.png, 
> table_histogram.html
>
>
> In addition to *Min/Median/Max*, another meaningful metric for cluster 
> balance is DN usage in histogram form.
> Since the NN already provides the information necessary to calculate a 
> histogram of DN usage, this can be done on the JS side.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-10529) Df reports incorrect usage when appending less than block size

2016-07-15 Thread Andrew Wang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-10529?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Andrew Wang updated HDFS-10529:
---
Target Version/s: 3.0.0-alpha2  (was: 2.8.0, 3.0.0-alpha1)

> Df reports incorrect usage when appending less than block size
> --
>
> Key: HDFS-10529
> URL: https://issues.apache.org/jira/browse/HDFS-10529
> Project: Hadoop HDFS
>  Issue Type: Bug
>Affects Versions: 2.7.2, 3.0.0-alpha1
>Reporter: Pranav Prakash
>Assignee: Pranav Prakash
>Priority: Minor
>  Labels: datanode, fs, hdfs
> Attachments: HDFS-10529.000.patch
>
>
> Steps to recreate issue:
> 1. Create a 100MB file on HDFS cluster with 128MB blocksize and replication 
> factor 3
> 2. Append 100MB to the file
> 3. Df reports around 900MB even though it should only be around 600MB.
> Looking at the blocks confirms that df is incorrect, as there exist only two 
> blocks on each DN -- a 128MB block and a 72MB block.
> This issue seems to arise because BlockPoolSlice does not account for the 
> delta increase in dfsUsage when an append happens to a partially-filled 
> block, and instead naively adds the total block size. For instance, in the 
> example scenario when the block is "filled" from 100 to 128MB, 
> addFinalizedBlock() in BlockPoolSlice adds the size of the newly created 
> block into the total instead of accounting for the difference/delta in block 
> size between old and new. This has the effect of double-counting the old 
> partially-filled block: it is counted once when it is first created (in the 
> example scenario when the 100MB file is created) and again when it becomes 
> part of the filled block (in the example scenario when the 128MB block is 
> formed from the initial 100MB block). Thus the perceived size becomes 100MB + 
> 128MB + 72MB = 300MB for each DN, or 900MB across the cluster.
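In arithmetic terms, the fix the description points at is delta accounting; a 
self-contained sketch with illustrative names:

{code}
public class DfDeltaSketch {
  // Sketch only: names and the accounting hook are illustrative.
  public static void main(String[] args) {
    long preAppendBytes = 100L << 20;  // 100MB block before the append
    long finalizedBytes = 128L << 20;  // 128MB block after finalization
    // Add only the delta (28MB), not the full finalized size (128MB),
    // so the 100MB counted at creation is not counted again.
    long dfsUsedDelta = finalizedBytes - preAppendBytes;
    System.out.println("delta MB = " + (dfsUsedDelta >> 20));  // 28
    // Per DN: 128MB + 72MB = 200MB; across 3 replicas: 600MB, not 900MB.
  }
}
{code}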



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-10533) Make DistCpOptions class immutable

2016-07-15 Thread Andrew Wang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-10533?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Andrew Wang updated HDFS-10533:
---
Target Version/s: 3.0.0-alpha2  (was: 3.0.0-alpha1)

> Make DistCpOptions class immutable
> --
>
> Key: HDFS-10533
> URL: https://issues.apache.org/jira/browse/HDFS-10533
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: distcp
>Reporter: Mingliang Liu
>Assignee: Mingliang Liu
> Attachments: HDFS-10533.000.patch, HDFS-10533.000.patch, 
> HDFS-10533.001.patch, HDFS-10533.002.patch
>
>
> Currently the {{DistCpOptions}} class encapsulates all DistCp options, which 
> may be set from command-line (via the {{OptionsParser}}) or may be set 
> manually (e.g. construct an instance and call setters). As there are multiple 
> option fields and more (e.g. [HDFS-9868], [HDFS-10314]) to add, validating 
> them can be cumbersome. Ideally, the {{DistCpOptions}} object should be 
> immutable. The benefits are:
> # {{DistCpOptions}} is simple and easier to use and share, plus it scales well
> # validation is automatic, e.g. manually constructed {{DistCpOptions}} gets 
> validated before usage
> # validation error message is well-defined which does not depend on the order 
> of setters
> This jira is to track the effort of making the {{DistCpOptions}} immutable by 
> using a Builder pattern for creation.
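As a sketch of the proposed direction, a minimal immutable options class 
behind a builder; all names here are illustrative, not the API from the 
attached patches:

{code}
// Sketch only: illustrative names, not the patch's API.
final class ImmutableOptionsSketch {
  private final boolean overwrite;
  private final int maxMaps;

  private ImmutableOptionsSketch(Builder b) {
    this.overwrite = b.overwrite;
    this.maxMaps = b.maxMaps;
  }

  boolean shouldOverwrite() { return overwrite; }
  int getMaxMaps() { return maxMaps; }

  static final class Builder {
    private boolean overwrite = false;
    private int maxMaps = 20;  // assumed default, for illustration

    Builder setOverwrite(boolean v) { overwrite = v; return this; }
    Builder setMaxMaps(int v) { maxMaps = v; return this; }

    // Validation runs once here, so the error message no longer
    // depends on the order in which setters were called.
    ImmutableOptionsSketch build() {
      if (maxMaps <= 0) {
        throw new IllegalArgumentException("maxMaps must be positive");
      }
      return new ImmutableOptionsSketch(this);
    }
  }
}
{code}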



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-8065) Erasure coding: Support truncate at striped group boundary

2016-07-15 Thread Andrew Wang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-8065?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Andrew Wang updated HDFS-8065:
--
Target Version/s: 3.0.0-alpha2  (was: 3.0.0-alpha1)

> Erasure coding: Support truncate at striped group boundary
> --
>
> Key: HDFS-8065
> URL: https://issues.apache.org/jira/browse/HDFS-8065
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: namenode
>Reporter: Yi Liu
>Assignee: Rakesh R
> Attachments: HDFS-8065-00.patch, HDFS-8065-01.patch
>
>
> We can support truncate at striped group boundary firstly.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-10436) dfs.block.access.token.enable should default on when security is !simple

2016-07-15 Thread Andrew Wang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-10436?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Andrew Wang updated HDFS-10436:
---
Target Version/s: 3.0.0-alpha2  (was: 3.0.0-alpha1)

> dfs.block.access.token.enable should default on when security is !simple
> 
>
> Key: HDFS-10436
> URL: https://issues.apache.org/jira/browse/HDFS-10436
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: datanode, namenode
>Affects Versions: 3.0.0-alpha1
>Reporter: Allen Wittenauer
>Assignee: Yiqun Lin
> Attachments: HDFS-10436.001.patch
>
>
> Unless there is a valid configuration where dfs.block.access.token.enable is 
> off and security is on, rather than shutting down we should just enable the 
> block access tokens.
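Sketched below is one way such defaulting could look; this is illustrative 
logic, not the patch itself:

{code}
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.security.UserGroupInformation;

public class TokenDefaultSketch {
  // Sketch only: default block access tokens to "on" whenever
  // security is enabled, instead of refusing to start up.
  static boolean blockTokensEnabled(Configuration conf) {
    boolean securityEnabled = UserGroupInformation.isSecurityEnabled();
    return conf.getBoolean("dfs.block.access.token.enable", securityEnabled);
  }
}
{code}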



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-10489) Deprecate dfs.encryption.key.provider.uri for HDFS encryption zones

2016-07-15 Thread Andrew Wang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-10489?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Andrew Wang updated HDFS-10489:
---
Target Version/s: 3.0.0-alpha2  (was: 3.0.0-alpha1)

> Deprecate dfs.encryption.key.provider.uri for HDFS encryption zones
> ---
>
> Key: HDFS-10489
> URL: https://issues.apache.org/jira/browse/HDFS-10489
> Project: Hadoop HDFS
>  Issue Type: Improvement
>Affects Versions: 2.6.4
>Reporter: Xiao Chen
>Assignee: Xiao Chen
>Priority: Minor
>
> When working on HADOOP-13155, we 
> [discussed|https://issues.apache.org/jira/browse/HADOOP-13155?focusedCommentId=15315117&page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#comment-15315117]
>  and concluded that we should use the common config key for key provider uri.
> We can deprecate the dfs. key for 3.0.0.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-10639) Fix typos in HDFSDiskbalancer.md

2016-07-15 Thread Andrew Wang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-10639?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Andrew Wang updated HDFS-10639:
---
Target Version/s: 3.0.0-alpha2  (was: 3.0.0-alpha1)

> Fix typos in HDFSDiskbalancer.md
> 
>
> Key: HDFS-10639
> URL: https://issues.apache.org/jira/browse/HDFS-10639
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: documentation
>Reporter: Akira Ajisaka
>Assignee: Yiqun Lin
>Priority: Trivial
>  Labels: newbie
> Attachments: HDFS-10639.001.patch
>
>
> * "sepcifies" -> "specifies"
> * "move.A" -> "move. A"
> * "deafult" -> "default"
> * "themove" -> "the move"
> * "accomodate" -> "accommodate"
> * "replacement.This" -> "replacement. This"



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Created] (HDFS-10640) Edit LOG statements to use slf4j templating ({}

2016-07-15 Thread Virajith Jalaparti (JIRA)
Virajith Jalaparti created HDFS-10640:
-

 Summary: Edit LOG statements to use slf4j templating ({}
 Key: HDFS-10640
 URL: https://issues.apache.org/jira/browse/HDFS-10640
 Project: Hadoop HDFS
  Issue Type: Sub-task
Reporter: Virajith Jalaparti






--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-10640) Modify LOG statements to use slf4j templates.

2016-07-15 Thread Virajith Jalaparti (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-10640?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Virajith Jalaparti updated HDFS-10640:
--
Summary: Modify LOG statements to use slf4j templates.  (was: Edit LOG 
statements to use slf4j templating ({})
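For reference, the slf4j template style the new summary refers to; the 
message, class, and arguments are illustrative:

{code}
import org.slf4j.Logger;
import org.slf4j.LoggerFactory;

public class Slf4jTemplateSketch {
  private static final Logger LOG =
      LoggerFactory.getLogger(Slf4jTemplateSketch.class);

  static void report(long blockId, String volume) {
    // Arguments are formatted only when the level is enabled, so the
    // string-building cost disappears for disabled levels:
    LOG.debug("Moved block {} to volume {}", blockId, volume);
    // instead of the eager form:
    // LOG.debug("Moved block " + blockId + " to volume " + volume);
  }
}
{code}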

> Modify LOG statements to use slf4j templates.
> -
>
> Key: HDFS-10640
> URL: https://issues.apache.org/jira/browse/HDFS-10640
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: datanode, fs
>Reporter: Virajith Jalaparti
>




--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-10638) Modifications to remove the assumption that StorageLocation is associated with java.io.File.

2016-07-15 Thread Virajith Jalaparti (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-10638?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Virajith Jalaparti updated HDFS-10638:
--
Description: Changes to make sure that {{StorageLocation}} need not be 
associated with a {{java.io.File}}. 

> Modifications to remove the assumption that StorageLocation is associated 
> with java.io.File.
> 
>
> Key: HDFS-10638
> URL: https://issues.apache.org/jira/browse/HDFS-10638
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: datanode, fs
>Reporter: Virajith Jalaparti
> Attachments: HDFS-10638.001.patch
>
>
> Changes to make sure that {{StorageLocation}} need not be associated with a 
> {{java.io.File}}. 



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-10638) Modifications to remove the assumption that StorageLocation is associated with java.io.File.

2016-07-15 Thread Virajith Jalaparti (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-10638?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Virajith Jalaparti updated HDFS-10638:
--
Description: Changes to ensure that {{StorageLocation}} need not be 
associated with a {{java.io.File}}.   (was: Changes to make sure that 
{{StorageLocation}} need not be associated with a {{java.io.File}}. )

> Modifications to remove the assumption that StorageLocation is associated 
> with java.io.File.
> 
>
> Key: HDFS-10638
> URL: https://issues.apache.org/jira/browse/HDFS-10638
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: datanode, fs
>Reporter: Virajith Jalaparti
> Attachments: HDFS-10638.001.patch
>
>
> Changes to ensure that {{StorageLocation}} need not be associated with a 
> {{java.io.File}}. 



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-10638) Modifications to remove the assumption that StorageLocation is associated with java.io.File.

2016-07-15 Thread Virajith Jalaparti (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-10638?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Virajith Jalaparti updated HDFS-10638:
--
Attachment: HDFS-10638.001.patch

> Modifications to remove the assumption that StorageLocation is associated 
> with java.io.File.
> 
>
> Key: HDFS-10638
> URL: https://issues.apache.org/jira/browse/HDFS-10638
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: datanode, fs
>Reporter: Virajith Jalaparti
> Attachments: HDFS-10638.001.patch
>
>




--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-10638) Modifications to remove the assumption that StorageLocation is associated with java.io.File.

2016-07-15 Thread Virajith Jalaparti (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-10638?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15380417#comment-15380417
 ] 

Virajith Jalaparti commented on HDFS-10638:
---

The patch aims to 
# associate {{StorageDirectory}} with {{StorageLocation}}, instead of 
{{java.io.File}} (replacing calls to {{StorageDirectory#getRoot}} by 
{{StorageDirectory#getStorageLocation}}) 
# remove references to {{StorageLocation#getFile}} 

so that {{StorageLocation}} need not be associated with a {{java.io.File}}. 
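As a rough sketch of the call-site change this describes 
({{StorageDirectory#getStorageLocation}} is the method the patch introduces; 
everything else is illustrative and compiles only against the patched tree):

{code}
import org.apache.hadoop.hdfs.server.common.Storage.StorageDirectory;
import org.apache.hadoop.hdfs.server.datanode.StorageLocation;

public class LocationSketch {
  // Sketch only. Before the patch: java.io.File root = sd.getRoot();
  static StorageLocation locationOf(StorageDirectory sd) {
    // After the patch, references and comparisons go through the
    // StorageLocation, which need not be backed by a local file.
    return sd.getStorageLocation();
  }
}
{code}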

> Modifications to remove the assumption that StorageLocation is associated 
> with java.io.File.
> 
>
> Key: HDFS-10638
> URL: https://issues.apache.org/jira/browse/HDFS-10638
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: datanode, fs
>Reporter: Virajith Jalaparti
> Attachments: HDFS-10638.001.patch
>
>




--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-10639) Fix typos in HDFSDiskbalancer.md

2016-07-15 Thread Yiqun Lin (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-10639?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Yiqun Lin updated HDFS-10639:
-
Status: Patch Available  (was: Open)

Thanks [~ajisakaa] for reporting this. Attaching a simple patch.

> Fix typos in HDFSDiskbalancer.md
> 
>
> Key: HDFS-10639
> URL: https://issues.apache.org/jira/browse/HDFS-10639
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: documentation
>Reporter: Akira Ajisaka
>Assignee: Yiqun Lin
>Priority: Trivial
>  Labels: newbie
> Attachments: HDFS-10639.001.patch
>
>
> * "sepcifies" -> "specifies"
> * "move.A" -> "move. A"
> * "deafult" -> "default"
> * "themove" -> "the move"
> * "accomodate" -> "accommodate"
> * "replacement.This" -> "replacement. This"



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-10639) Fix typos in HDFSDiskbalancer.md

2016-07-15 Thread Yiqun Lin (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-10639?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Yiqun Lin updated HDFS-10639:
-
Attachment: HDFS-10639.001.patch

> Fix typos in HDFSDiskbalancer.md
> 
>
> Key: HDFS-10639
> URL: https://issues.apache.org/jira/browse/HDFS-10639
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: documentation
>Reporter: Akira Ajisaka
>Assignee: Yiqun Lin
>Priority: Trivial
>  Labels: newbie
> Attachments: HDFS-10639.001.patch
>
>
> * "sepcifies" -> "specifies"
> * "move.A" -> "move. A"
> * "deafult" -> "default"
> * "themove" -> "the move"
> * "accomodate" -> "accommodate"
> * "replacement.This" -> "replacement. This"



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Assigned] (HDFS-10639) Fix typos in HDFSDiskbalancer.md

2016-07-15 Thread Yiqun Lin (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-10639?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Yiqun Lin reassigned HDFS-10639:


Assignee: Yiqun Lin

> Fix typos in HDFSDiskbalancer.md
> 
>
> Key: HDFS-10639
> URL: https://issues.apache.org/jira/browse/HDFS-10639
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: documentation
>Reporter: Akira Ajisaka
>Assignee: Yiqun Lin
>Priority: Trivial
>  Labels: newbie
>
> * "sepcifies" -> "specifies"
> * "move.A" -> "move. A"
> * "deafult" -> "default"
> * "themove" -> "the move"
> * "accomodate" -> "accommodate"
> * "replacement.This" -> "replacement. This"



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-9213) Minicluster with Kerberos generates some stacks when checking the ports

2016-07-15 Thread Andrew Wang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-9213?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Andrew Wang updated HDFS-9213:
--
Fix Version/s: (was: 3.0.0-alpha1)

> Minicluster with Kerberos generates some stacks when checking the ports
> ---
>
> Key: HDFS-9213
> URL: https://issues.apache.org/jira/browse/HDFS-9213
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: security
>Affects Versions: 3.0.0-alpha1
>Reporter: Nicolas Liochon
>Assignee: Nicolas Liochon
>Priority: Minor
> Attachments: hdfs-9213.v1.patch, hdfs-9213.v1.patch
>
>
> When using the minicluster with kerberos the various checks in 
> SecureDataNodeStarter fail because the ports are not fixed.
> Stacks like this one:
> {quote}
> java.lang.RuntimeException: Unable to bind on specified streaming port in 
> secure context. Needed 0, got 49670
>   at 
> org.apache.hadoop.hdfs.server.datanode.SecureDataNodeStarter.getSecureResources(SecureDataNodeStarter.java:108)
> {quote}
> There is already a setting to deactivate this type of check for testing; it 
> could be used here as well.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-10577) DiskBalancer: Support building imbalanced MiniDFSCluster from JSON

2016-07-15 Thread Andrew Wang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-10577?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Andrew Wang updated HDFS-10577:
---
Fix Version/s: (was: 3.0.0-alpha1)

> DiskBalancer: Support building imbalanced MiniDFSCluster from JSON
> --
>
> Key: HDFS-10577
> URL: https://issues.apache.org/jira/browse/HDFS-10577
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: datanode
>Reporter: Xiaobing Zhou
>Assignee: Xiaobing Zhou
>
> To build an imbalanced MiniDFSCluster, there is much work to do (e.g. 
> TestDiskBalancer#testDiskBalancerEndToEnd), and even more when tens of 
> datanodes are built. On the other hand, the Diskbalancer data model can 
> easily dump and build any kind of imbalanced cluster (e.g. 
> data-cluster-64node-3disk.json used by TestDiskBalancerCommand#setUp). This 
> proposes to support building an imbalanced MiniDFSCluster from a dumped JSON 
> file to make writing tests easy.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-10599) DiskBalancer: Execute CLI via Shell

2016-07-15 Thread Andrew Wang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-10599?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Andrew Wang updated HDFS-10599:
---
Fix Version/s: (was: 3.0.0-alpha1)

> DiskBalancer: Execute CLI via Shell 
> 
>
> Key: HDFS-10599
> URL: https://issues.apache.org/jira/browse/HDFS-10599
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: balancer & mover
>Affects Versions: 3.0.0-alpha1
>Reporter: Anu Engineer
>Assignee: Anu Engineer
>
> The DiskBalancer tests invoke CLI functions directly instead of going through 
> the shell. This is not representative of how end users use it. To provide 
> good unit test coverage, we need tests where the DiskBalancer CLI is invoked 
> via the shell.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-8555) Random read support on HDFS files using Indexed Namenode feature

2016-07-15 Thread Andrew Wang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-8555?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Andrew Wang updated HDFS-8555:
--
Fix Version/s: (was: 3.0.0-alpha1)

> Random read support on HDFS files using Indexed Namenode feature
> 
>
> Key: HDFS-8555
> URL: https://issues.apache.org/jira/browse/HDFS-8555
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: hdfs-client, namenode
>Affects Versions: 2.5.2
> Environment: Linux
>Reporter: amit sehgal
>Assignee: Afzal Saan
>   Original Estimate: 720h
>  Remaining Estimate: 720h
>
> Currently the Namenode does not provide support for random reads. With so 
> many tools built on top of HDFS solving the use case of exploratory BI and 
> providing SQL over HDFS, the need of the hour is to reduce the number of 
> blocks read for a random read. 
> E.g. extracting, say, 10 lines worth of information out of a 100GB file 
> should read only those blocks which can potentially contain those 10 lines.
> This can be achieved by adding a per-block tagging feature in the Namenode: 
> each block written to HDFS will have tags associated with it, stored in an 
> index.
> The Namenode, when accessed via the indexing feature, will use this index 
> natively to reduce the number of blocks returned to the client.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-10626) VolumeScanner prints incorrect IOException in reportBadBlocks operation

2016-07-15 Thread Yiqun Lin (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-10626?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Yiqun Lin updated HDFS-10626:
-
Attachment: HDFS-10626.003.patch

I made an improvement based on the v02 patch to print the bad block's 
IOException as well. Posting the new patch.
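A sketch of what the corrected handler could look like, logging the reporting 
failure ({{ie}}) while keeping the original scan error ({{e}}) visible; the 
exact message format is an assumption:

{code}
// Sketch only: inside handle(ExtendedBlock block, IOException e).
try {
  scanner.datanode.reportBadBlocks(block, volume);
} catch (IOException ie) {
  // Log the reporting failure itself (ie) as the throwable, and
  // mention the scan error (e) so neither exception is lost.
  LOG.warn("Cannot report bad " + block.getBlockId()
      + "; original scan error: " + e, ie);
}
{code}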

> VolumeScanner prints incorrect IOException in reportBadBlocks operation
> ---
>
> Key: HDFS-10626
> URL: https://issues.apache.org/jira/browse/HDFS-10626
> Project: Hadoop HDFS
>  Issue Type: Bug
>Reporter: Yiqun Lin
>Assignee: Yiqun Lin
>Priority: Minor
>  Labels: supportability
> Attachments: HDFS-10626.001.patch, HDFS-10626.002.patch, 
> HDFS-10626.003.patch
>
>
> VolumeScanner throws incorrect IOException in {{datanode.reportBadBlocks}}. 
> The related code:
> {code}
> public void handle(ExtendedBlock block, IOException e) {
>   FsVolumeSpi volume = scanner.volume;
>   ...
>   try {
> scanner.datanode.reportBadBlocks(block, volume);
>   } catch (IOException ie) {
> // This is bad, but not bad enough to shut down the scanner.
> LOG.warn("Cannot report bad " + block.getBlockId(), e);
>   }
> }
> {code}
> The IOException that is printed in the log should be {{ie}} rather than 
> {{e}}, which was passed into the method {{handle(ExtendedBlock block, 
> IOException e)}}.
> It will be important info that can help us know why the datanode 
> reportBadBlocks call failed.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-10639) Fix typos in HDFSDiskbalancer.md

2016-07-15 Thread Akira Ajisaka (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-10639?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Akira Ajisaka updated HDFS-10639:
-
Description: 
* "sepcifies" -> "specifies"
* "move.A" -> "move. A"
* "deafult" -> "default"
* "themove" -> "the move"
* "accomodate" -> "accommodate"
* "replacement.This" -> "replacement. This"

  was:
* "sepcifies" -> "specifies"
* "move.A" -> "move. A"
* "deafult" -> "default"
* "themove" -> "the move"
* "accomodate" -> "accommodate"


> Fix typos in HDFSDiskbalancer.md
> 
>
> Key: HDFS-10639
> URL: https://issues.apache.org/jira/browse/HDFS-10639
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: documentation
>Reporter: Akira Ajisaka
>Priority: Trivial
>  Labels: newbie
>
> * "sepcifies" -> "specifies"
> * "move.A" -> "move. A"
> * "deafult" -> "default"
> * "themove" -> "the move"
> * "accomodate" -> "accommodate"
> * "replacement.This" -> "replacement. This"



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Created] (HDFS-10639) Fix typos in HDFSDiskbalancer.md

2016-07-15 Thread Akira Ajisaka (JIRA)
Akira Ajisaka created HDFS-10639:


 Summary: Fix typos in HDFSDiskbalancer.md
 Key: HDFS-10639
 URL: https://issues.apache.org/jira/browse/HDFS-10639
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: documentation
Reporter: Akira Ajisaka
Priority: Trivial


* "sepcifies" -> "specifies"
* "move.A" -> "move. A"
* "deafult" -> "default"
* "themove" -> "the move"
* "accomodate" -> "accommodate"



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-10637) Modifications to remove the assumption that FsVolumes are backed by java.io.File.

2016-07-15 Thread Virajith Jalaparti (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-10637?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Virajith Jalaparti updated HDFS-10637:
--
Description: Modifications to {{FsVolumeSpi}} and {{FsVolumeImpl}} to 
remove references to {{java.io.File}}.

> Modifications to remove the assumption that FsVolumes are backed by 
> java.io.File.
> -
>
> Key: HDFS-10637
> URL: https://issues.apache.org/jira/browse/HDFS-10637
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: datanode, fs
>Reporter: Virajith Jalaparti
> Attachments: HDFS-10637.001.patch
>
>
> Modifications to {{FsVolumeSpi}} and {{FsVolumeImpl}} to remove references to 
> {{java.io.File}}.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-10633) DiskBalancer : Add the description for the new setting dfs.disk.balancer.plan.threshold.percent in HDFSDiskbalancer.md

2016-07-15 Thread Akira Ajisaka (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-10633?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15380375#comment-15380375
 ] 

Akira Ajisaka commented on HDFS-10633:
--

LGTM, +1.

> DiskBalancer : Add the description for the new setting 
> dfs.disk.balancer.plan.threshold.percent in HDFSDiskbalancer.md
> --
>
> Key: HDFS-10633
> URL: https://issues.apache.org/jira/browse/HDFS-10633
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: documentation
>Affects Versions: 2.9.0
>Reporter: Yiqun Lin
>Assignee: Yiqun Lin
>Priority: Minor
> Attachments: HDFS-10633.001.patch
>
>
> HDFS-10600 introduced a new setting, 
> {{dfs.disk.balancer.plan.threshold.percent}}, in the diskbalancer. This 
> setting controls whether we need to do any balancing on the volume set, but 
> it has not yet been documented in {{HDFSDiskbalancer.md}}.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Comment Edited] (HDFS-10637) Modifications to remove the assumption that FsVolumes are backed by java.io.File.

2016-07-15 Thread Virajith Jalaparti (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-10637?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15380362#comment-15380362
 ] 

Virajith Jalaparti edited comment on HDFS-10637 at 7/16/16 12:30 AM:
-

The patch contains the following changes (text is incorporated from comments in 
HDFS-9809): 
# Instead of associating an FsVolume with a base path (which is a 
{{java.io.File}}), we associate it with a {{StorageLocation}}. This allows 
removing the dependence on {{java.io.File}} and replacing it with a more 
general one, which can point to a {{java.io.File}} or an abstract {{URI}} 
representing an external storage. Using {{StorageLocation}} instead of defining 
a new type for location allows us to reuse its functionality and plug into the 
rest of the code easily. Following this intuition, we replaced 
{{FsVolumeSpi#getBasePath}} with {{FsVolumeSpi#getStorageLocation}}. As a 
result, comparisons and references to FsVolumes which were done using the 
{{java.io.File}} returned by {{FsVolumeSpi#getBasePath}} are now replaced by 
comparisons and references to the {{StorageLocation}} returned by 
{{FsVolumeSpi#getStorageLocation}}. 
# Similarly, the patch also replaces calls to get*Dir on FsVolumeImpl. In 
particular, the {{DirectoryScanner.ReportCompiler}} calls on 
{{FsVolumeSpi#getFinalizedDir}} and compiles the report assuming that this 
returns a {{java.io.File}}. However, in HDFS-9806, data may not be stored in 
files. Further, the {{DirectoryScanner.ReportCompiler#compileReport}} function 
assumes the way blocks are stored in FsVolumes which can be different for 
different {{FsVolumeSpi}} implementations. To address these assumptions and to 
hide the details on how volumes implement block storage, 
{{ReportCompiler#compileReport}} is moved to {{FsVolumeSpi}}.
# A {{FsVolumeImplBuilder}} is added and calls to the constructor of 
{{FsVolumeImpl}} are replaced with those to the builder. The idea behind this 
is to construct the appropriate volume based on the {{StorageLocation}} 
(specified using {{dfs.datanode.data.dir}}). For example, as part of HDFS-9806, 
if a {{StorageLocation}} is of a PROVIDED type, we would construct a 
{{ProvidedVolumeImpl}}. Otherwise, a {{FsVolumeImpl}} would be built. 
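To make point 3 concrete, a minimal sketch of such a builder follows; the class
and member names are assumptions for illustration, not code from the attached
patch:

{code}
// Hedged sketch of the builder described in point 3; names are assumed.
class FsVolumeImplBuilder {
  private FsDatasetImpl dataset;
  private StorageLocation location;
  private Configuration conf;

  FsVolumeImplBuilder setDataset(FsDatasetImpl d) { this.dataset = d; return this; }
  FsVolumeImplBuilder setStorageLocation(StorageLocation l) { this.location = l; return this; }
  FsVolumeImplBuilder setConf(Configuration c) { this.conf = c; return this; }

  FsVolumeImpl build() throws IOException {
    // Pick the implementation from the StorageLocation type: a PROVIDED
    // location (HDFS-9806) yields a ProvidedVolumeImpl, anything else a
    // regular FsVolumeImpl.
    if (location.getStorageType() == StorageType.PROVIDED) {
      return new ProvidedVolumeImpl(dataset, location, conf);
    }
    return new FsVolumeImpl(dataset, location, conf);
  }
}
{code}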



was (Author: virajith):
The patch contains the following changes (some text is incorporated from 
comments in HDFS-9809): 
# Instead of associating an FsVolume with a base path (which is a 
{{java.io.File}}), we associate it with a {{StorageLocation}}. This allows 
removing the dependence on {{java.io.File}} and replacing it with a more 
general one, which can point to a {{java.io.File}} or an abstract {{URI}} 
representing an external storage. Using {{StorageLocation}} instead of defining 
a new type for location allows us to reuse its functionality and plug into the 
rest of the code easily. Following this intuition, we replaced 
{{FsVolumeSpi#getBasePath}} with {{FsVolumeSpi#getStorageLocation}}. As a 
result, comparisons and references to FsVolumes which were done using the 
{{java.io.File}} returned by {{FsVolumeSpi#getBasePath}} are now replaced by 
comparisons and references to the {{StorageLocation}} returned by 
{{FsVolumeSpi#getStorageLocation}}. 
# Similarly, the patch also replaces calls to get*Dir on FsVolumeImpl. In 
particular, the {{DirectoryScanner.ReportCompiler}} calls on 
{{FsVolumeSpi#getFinalizedDir}} and compiles the report assuming that this 
returns a {{java.io.File}}. However, in HDFS-9806, data may not be stored in 
files. Further, the {{DirectoryScanner.ReportCompiler#compileReport}} function 
assumes the way blocks are stored in FsVolumes which can be different for 
different {{FsVolumeSpi}} implementations. To address these assumptions and to 
hide the details on how volumes implement block storage, 
{{ReportCompiler#compileReport}} is moved to {{FsVolumeSpi}}.
# A {{FsVolumeImplBuilder}} is added and calls to the constructor of 
{{FsVolumeImpl}} are replaced with those to the builder. The idea behind this 
is to construct the appropriate volume based on the {{StorageLocation}} 
(specified using {{dfs.datanode.data.dir}}). For example, as part of HDFS-9806, 
if a {{StorageLocation}} is of a PROVIDED type, we would construct a 
{{ProvidedVolumeImpl}}. Otherwise, a {{FsVolumeImpl}} would be built. 


> Modifications to remove the assumption that FsVolumes are backed by 
> java.io.File.
> -
>
> Key: HDFS-10637
> URL: https://issues.apache.org/jira/browse/HDFS-10637
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: datanode, fs
>Reporter: Virajith Jalaparti
> Attachments: HDFS-10637.001.patch
>
>




--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org

[jira] [Commented] (HDFS-10637) Modifications to remove the assumption that FsVolumes are backed by java.io.File.

2016-07-15 Thread Virajith Jalaparti (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-10637?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15380362#comment-15380362
 ] 

Virajith Jalaparti commented on HDFS-10637:
---

The patch contains the following changes (some text is incorporated from 
comments in HDFS-9809): 
# Instead of associating an FsVolume with a base path (which is a 
{{java.io.File}}), we associate it with a {{StorageLocation}}. This allows 
removing the dependence on {{java.io.File}} and replacing it with a more 
general one, which can point to a {{java.io.File}} or an abstract {{URI}} 
representing an external storage. Using {{StorageLocation}} instead of defining 
a new type for location allows us to reuse its functionality and plug into the 
rest of the code easily. Following this intuition, we replaced 
{{FsVolumeSpi#getBasePath}} with {{FsVolumeSpi#getStorageLocation}}. As a 
result, comparisons and references to FsVolumes that were done using the 
{{java.io.File}} returned by {{FsVolumeSpi#getBasePath}} are now replaced by 
comparisons and references to the {{StorageLocation}} returned by 
{{FsVolumeSpi#getStorageLocation}} (see the sketch after this list). 
# Similarly, the patch also replaces calls to get*Dir on FsVolumeImpl. In 
particular, the {{DirectoryScanner.ReportCompiler}} calls on 
{{FsVolumeSpi#getFinalizedDir}} and compiles the report assuming that this 
returns a {{java.io.File}}. However, in HDFS-9806, data may not be stored in 
files. Further, the {{DirectoryScanner.ReportCompiler#compileReport}} function 
assumes the way blocks are stored in FsVolumes which can be different for 
different {{FsVolumeSpi}} implementations. To address these assumptions and to 
hide the details on how volumes implement block storage, 
{{ReportCompiler#compileReport}} is moved to {{FsVolumeSpi}}.
# A {{FsVolumeImplBuilder}} is added and calls to the constructor of 
{{FsVolumeImpl}} are replaced with those to the builder. The idea behind this 
is to construct the appropriate volume based on the {{StorageLocation}} 
(specified using {{dfs.datanode.data.dir}}). For example, as part of HDFS-9806, 
if a {{StorageLocation}} is of a PROVIDED type, we would construct a 
{{ProvidedVolumeImpl}}. Otherwise, a {{FsVolumeImpl}} would be built. 
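As an illustration of the change in point 1, a volume comparison that used to go
through the base path could instead go through the storage location; this is a
hedged before/after sketch, not code from the patch:

{code}
// Before: volumes compared via the java.io.File base path.
boolean sameOld = volume1.getBasePath().equals(volume2.getBasePath());

// After: volumes compared via their StorageLocation, which may wrap a
// local file URI or a URI pointing at an external store.
boolean sameNew = volume1.getStorageLocation().equals(volume2.getStorageLocation());
{code}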


> Modifications to remove the assumption that FsVolumes are backed by 
> java.io.File.
> -
>
> Key: HDFS-10637
> URL: https://issues.apache.org/jira/browse/HDFS-10637
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: datanode, fs
>Reporter: Virajith Jalaparti
> Attachments: HDFS-10637.001.patch
>
>




--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-10637) Modifications to remove the assumption that FsVolumes are backed by java.io.File.

2016-07-15 Thread Virajith Jalaparti (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-10637?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Virajith Jalaparti updated HDFS-10637:
--
Attachment: HDFS-10637.001.patch

> Modifications to remove the assumption that FsVolumes are backed by 
> java.io.File.
> -
>
> Key: HDFS-10637
> URL: https://issues.apache.org/jira/browse/HDFS-10637
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: datanode, fs
>Reporter: Virajith Jalaparti
> Attachments: HDFS-10637.001.patch
>
>




--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Comment Edited] (HDFS-10636) Modify ReplicaInfo to remove the assumption that replica metadata and data are stored in java.io.File.

2016-07-15 Thread Virajith Jalaparti (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-10636?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15380342#comment-15380342
 ] 

Virajith Jalaparti edited comment on HDFS-10636 at 7/16/16 12:18 AM:
-

The patch contains the following changes (most text below is aggregated from 
comments in HDFS-9809). 

# Moving the {{java.io.File}} related APIs in {{ReplicaInfo}} 
({{getBlockFile}}, {{getMetaFile}}) to a subclass of {{ReplicaInfo}} called 
{{LocalReplica}}. The classes {{FinalizedReplica}}, {{ReplicaInPipeline}}, 
{{ReplicaUnderRecovery}}, and {{ReplicaWaitingToBeRecovered}} are changed to be 
subclasses of {{LocalReplica}} instead of {{ReplicaInfo}}. The motivation 
behind this change is that we can have {{ReplicaInfo}} s that point to blocks 
located in remote stores and as a result don’t have associated {{java.io.File}} 
s. 
We added various functions to {{ReplicaInfo}} in order to replace the calls to 
{{ReplicaInfo#getBlockFile}}, and {{ReplicaInfo#getMetaFile}} in the rest of 
the code. 
# Using {{ReplicaInfo.getState()}} to get the state of a {{ReplicaInfo}} 
instead of using {{instanceof}}. A related change is to use the class 
{{ReplicaInfo}} to refer to the replica objects instead of the particular 
subclass (this required adding additional abstract functions to the 
{{ReplicaInfo}} class). 
# Addition of a {{ReplicaBuilder}}, replacing calls to the constructors of 
different {{ReplicaInfo}} subclasses ({{ReplicaInPipeline}}, 
{{ReplicaBeingWritten}}, etc.) with calls to the {{ReplicaBuilder}} with the 
appropriate parameters ({{ReplicaState}}, {{blockId}}, etc.) set; a sketch 
follows at the end of this comment. 
# Changes related to {{ReplicaInPipeline}}
* Change the {{ReplicaInPipeline}} to {{LocalReplicaInPipeline}}, and change 
{{ReplicaInPipelineInterface}} to {{ReplicaInPipeline}}. 
* Add a {{getReplicaInfo}} function to the (new) {{ReplicaInPipeline}} 
interface.
* Move the functions related to writer threads ({{stopWriter}}, 
{{attemptToSetWriter}}, {{interruptThread}} and {{setWriter}}) to the new 
{{ReplicaInPipeline}} interface (i.e., the old {{ReplicaInPipelineInterface}}), 
as only {{ReplicaInPipeline}} objects will be associated with writer threads. 

The idea behind the changes above is to add a new {{ProvidedReplica}} class (an 
implementation of {{ReplicaInfo}}) which can be: 
(a) used to represent replicas stored in a provided storage (described in more 
detail in the design documentation of HDFS-9806).
(b) treated as any other {{ReplicaInfo}} in the rest of the code. This would 
avoid changes to the rest of the Datanode as part of HDFS-9806. 
(c) written to using the existing replication pipeline, without implementing a 
separate write pipeline for HDFS-9806. 
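A rough illustration of the builder in point 3; the setter names below are
assumptions for illustration, not necessarily those in the attached patch:

{code}
// Before: a concrete subclass was constructed directly.
ReplicaInPipeline replica =
    new LocalReplicaInPipeline(blockId, genStamp, volume, dir, bytesToReserve);

// After: the builder picks the subclass from the requested ReplicaState.
ReplicaInfo built = new ReplicaBuilder(ReplicaState.TEMPORARY)
    .setBlockId(blockId)
    .setGenerationStamp(genStamp)
    .setFsVolume(volume)
    .setDirectoryToUse(dir)
    .setBytesToReserve(bytesToReserve)
    .build();
{code}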



was (Author: virajith):
The patch contains the following changes (most text below is aggregated here 
from comments in HDFS-9809). 

# Moving the {{java.io.File}} related APIs in {{ReplicaInfo}} 
({{getBlockFile}}, {{getMetaFile}}) to a subclass of {{ReplicaInfo}} called 
{{LocalReplica}}. The classes {{FinalizedReplica}}, {{ReplicaInPipeline}}, 
{{ReplicaUnderRecovery}}, and {{ReplicaWaitingToBeRecovered}} are changed to be 
subclasses of {{LocalReplica}} instead of {{ReplicaInfo}}. The motivation 
behind this change is that we can have {{ReplicaInfo}} s that point to blocks 
located in remote stores and as a result don’t have associated {{java.io.File}} 
s. 
We added various functions to {{ReplicaInfo}} in order to replace the calls to 
{{ReplicaInfo#getBlockFile}}, and {{ReplicaInfo#getMetaFile}} in the rest of 
the code. 
# Using {{ReplicaInfo.getState()}} to get the state of a {{ReplicaInfo}} 
instead of using {{instanceof}}. A related change is to use the class 
{{ReplicaInfo}} to refer to the replica objects instead of the particular 
subclass (this required adding additional abstract functions to the 
{{ReplicaInfo}} class). 
# Addition of a {{ReplicaBuilder}} and replacing calls to the constructors of 
different {{ReplicaInfo}} subclasses ({{ReplicaInPipeline}}, 
{{ReplicaBeingWritten}}, etc.) with calls to the {{ReplicaBuilder}} with the 
appropriate parameters ({{ReplicaState}}, {{blockId}} etc.) set. 
# Changes related to {{ReplicaInPipeline}}
* Change the {{ReplicaInPipeline}} to {{LocalReplicaInPipeline}}, and change 
{{ReplicaInPipelineInterface}} to {{ReplicaInPipeline}}. 
* Add a {{getReplicaInfo}} function to the (new) {{ReplicaInPipeline}} 
interface.
* Move the functions related to writer threads ({{stopWriter}}, 
{{attemptToSetWriter}}, {{interruptThread}} and {{setWriter}}) to the new 
{{ReplicaInPipeline}} interface (i.e., the old {{ReplicaInPipelineInterface}}), 
as only {{ReplicaInPipeline}} objects will be associated with writer threads. 

The idea behind the changes above is to add a new {{ProvidedReplica}} class (an 
implementation of {{ReplicaInfo}}) which can be: 
(a) used to represent replicas stored in a provided storage (described in more 
detail in the design documentation of HDFS-9806).
(b) treated as any other {{ReplicaInfo}} in the rest of the code. This would 
avoid changes to the rest of the Datanode as part of HDFS-9806. 
(c) written to using the existing replication pipeline, without implementing a 
separate write pipeline for HDFS-9806. 

[jira] [Updated] (HDFS-10636) Modify ReplicaInfo to remove the assumption that replica metadata and data are stored in java.io.File.

2016-07-15 Thread Virajith Jalaparti (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-10636?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Virajith Jalaparti updated HDFS-10636:
--
Attachment: HDFS-10636.001.patch

> Modify ReplicaInfo to remove the assumption that replica metadata and data 
> are stored in java.io.File.
> --
>
> Key: HDFS-10636
> URL: https://issues.apache.org/jira/browse/HDFS-10636
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: datanode, fs
>Reporter: Virajith Jalaparti
> Attachments: HDFS-10636.001.patch
>
>
> Replace java.io.File related APIs from {{ReplicaInfo}}, and enable the 
> definition of new {{ReplicaInfo}} sub-classes whose metadata and data can be 
> present on external storages (HDFS-9806). 



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Comment Edited] (HDFS-10636) Modify ReplicaInfo to remove the assumption that replica metadata and data are stored in java.io.File.

2016-07-15 Thread Virajith Jalaparti (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-10636?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15380342#comment-15380342
 ] 

Virajith Jalaparti edited comment on HDFS-10636 at 7/16/16 12:15 AM:
-

The patch contains the following changes (most text below is aggregated here 
from comments in HDFS-9809). 

# Moving the {{java.io.File}} related APIs in {{ReplicaInfo}} 
({{getBlockFile}}, {{getMetaFile}}) to a subclass of {{ReplicaInfo}} called 
{{LocalReplica}}. The classes {{FinalizedReplica}}, {{ReplicaInPipeline}}, 
{{ReplicaUnderRecovery}}, and {{ReplicaWaitingToBeRecovered}} are changed to be 
subclasses of {{LocalReplica}} instead of {{ReplicaInfo}}. The motivation 
behind this change is that we can have {{ReplicaInfo}} s that point to blocks 
located in remote stores and as a result don’t have associated {{java.io.File}} 
s. 
We added various functions to {{ReplicaInfo}} in order to replace the calls to 
{{ReplicaInfo#getBlockFile}}, and {{ReplicaInfo#getMetaFile}} in the rest of 
the code. 
# Using {{ReplicaInfo.getState()}} to get the state of a {{ReplicaInfo}} 
instead of using {{instanceof}}. A related change is to use the class 
{{ReplicaInfo}} to refer to the replica objects instead of the particular 
subclass (this required adding additional abstract functions to the 
{{ReplicaInfo}} class). 
# Addition of a {{ReplicaBuilder}} and replacing calls to the constructors of 
different {{ReplicaInfo}} subclasses ({{ReplicaInPipeline}}, 
{{ReplicaBeingWritten}}, etc.) with calls to the {{ReplicaBuilder}} with the 
appropriate parameters ({{ReplicaState}}, {{blockId}} etc.) set. 
# Changes related to {{ReplicaInPipeline}}
* Change the {{ReplicaInPipeline}} to {{LocalReplicaInPipeline}}, and change 
{{ReplicaInPipelineInterface}} to {{ReplicaInPipeline}}. 
* Add a {{getReplicaInfo}} function to the (new) {{ReplicaInPipeline}} 
interface.
* Move the functions related to writer threads ({{stopWriter}}, 
{{attemptToSetWriter}}, {{interruptThread}} and {{setWriter}}) to the new 
{{ReplicaInPipeline}} interface (i.e., the old {{ReplicaInPipelineInterface}}), 
as only {{ReplicaInPipeline}} objects will be associated with writer threads. 

The idea behind the changes above is to add a new {{ProvidedReplica}} class (an 
implementation of {{ReplicaInfo}}) which can be: 
(a) used to represent replicas stored in a provided storage (described in more 
detail in the design documentation of HDFS-9806).
(b) treated as any other {{ReplicaInfo}} in the rest of the code. This would 
avoid changes to the rest of the Datanode as part of HDFS-9806. 
(c) written to using the existing replication pipeline, without implementing a 
separate write pipeline for HDFS-9806. 



was (Author: virajith):
The patch contains the following changes (copied here from comments in 
HDFS-9809). 

# Moving the {{java.io.File}} related APIs in {{ReplicaInfo}} 
({{getBlockFile}}, {{getMetaFile}}) to a subclass of {{ReplicaInfo}} called 
{{LocalReplica}}. The classes {{FinalizedReplica}}, {{ReplicaInPipeline}}, 
{{ReplicaUnderRecovery}}, and {{ReplicaWaitingToBeRecovered}} are changed to be 
subclasses of {{LocalReplica}} instead of {{ReplicaInfo}}. The motivation 
behind this change is that we can have {{ReplicaInfo}} s that point to blocks 
located in remote stores and as a result don’t have associated {{java.io.File}} 
s. 
We added various functions to {{ReplicaInfo}} in order to replace the calls to 
{{ReplicaInfo#getBlockFile}}, and {{ReplicaInfo#getMetaFile}} in the rest of 
the code. 
# Using {{ReplicaInfo.getState()}} to get the state of a {{ReplicaInfo}} 
instead of using {{instanceof}}. A related change is to use the class 
{{ReplicaInfo}} to refer to the replica objects instead of the particular 
subclass (this required adding additional abstract functions to the 
{{ReplicaInfo}} class). 
# Addition of a {{ReplicaBuilder}} and replacing calls to the constructors of 
different {{ReplicaInfo}} subclasses ({{ReplicaInPipeline}}, 
{{ReplicaBeingWritten}}, etc.) with calls to the {{ReplicaBuilder}} with the 
appropriate parameters ({{ReplicaState}}, {{blockId}} etc.) set. 
# Changes related to {{ReplicaInPipeline}}
* Change the {{ReplicaInPipeline}} to {{LocalReplicaInPipeline}}, and change 
{{ReplicaInPipelineInterface}} to {{ReplicaInPipeline}}. 
* Add a {{getReplicaInfo}} function to the (new) {{ReplicaInPipeline}} 
interface.
* Move the functions related to writer threads ({{stopWriter}}, 
{{attemptToSetWriter}}, {{interruptThread}} and {{setWriter}}) to the new 
{{ReplicaInPipeline}} interface (i.e., the old {{ReplicaInPipelineInterface}}), 
as only {{ReplicaInPipeline}} objects will be associated with writer threads. 

The idea behind the changes above is to add a new {{ProvidedReplica}} class (an 
implementation of {{ReplicaInfo}}) which can be: 
(a) used to represent replicas stored in a provided storage (described in more 
detail in the design documentation of HDFS-9806).
(b) treated as any other {{ReplicaInfo}} in the rest of the code. This would 
avoid changes to the rest of the Datanode as part of HDFS-9806. 
(c) written to using the existing replication pipeline, without implementing a 
separate write pipeline for HDFS-9806. 

[jira] [Commented] (HDFS-10636) Modify ReplicaInfo to remove the assumption that replica metadata and data are stored in java.io.File.

2016-07-15 Thread Virajith Jalaparti (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-10636?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15380342#comment-15380342
 ] 

Virajith Jalaparti commented on HDFS-10636:
---

The patch contains the following changes (copied here from comments in 
HDFS-9809). 

# Moving the {{java.io.File}} related APIs in {{ReplicaInfo}} 
({{getBlockFile}}, {{getMetaFile}}) to a subclass of {{ReplicaInfo}} called 
{{LocalReplica}}. The classes {{FinalizedReplica}}, {{ReplicaInPipeline}}, 
{{ReplicaUnderRecovery}}, and {{ReplicaWaitingToBeRecovered}} are changed to be 
subclasses of {{LocalReplica}} instead of {{ReplicaInfo}}. The motivation 
behind this change is that we can have {{ReplicaInfo}} s that point to blocks 
located in remote stores and as a result don’t have associated {{java.io.File}} 
s. 
We added various functions to {{ReplicaInfo}} in order to replace the calls to 
{{ReplicaInfo#getBlockFile}}, and {{ReplicaInfo#getMetaFile}} in the rest of 
the code. 
# Using {{ReplicaInfo.getState()}} to get the state of a {{ReplicaInfo}} 
instead of using {{instanceof}}. A related change is to use the class 
{{ReplicaInfo}} to refer to replica objects instead of the particular 
subclass (this required adding additional abstract functions to the 
{{ReplicaInfo}} class); see the sketch after this comment. 
# Addition of a {{ReplicaBuilder}} and replacing calls to the constructors of 
different {{ReplicaInfo}} subclasses ({{ReplicaInPipeline}}, 
{{ReplicaBeingWritten}}, etc.) with calls to the {{ReplicaBuilder}} with the 
appropriate parameters ({{ReplicaState}}, {{blockId}} etc.) set. 
# Changes related to {{ReplicaInPipeline}}
* Change the {{ReplicaInPipeline}} to {{LocalReplicaInPipeline}}, and change 
{{ReplicaInPipelineInterface}} to {{ReplicaInPipeline}}. 
* Add a {{getReplicaInfo}} function to the (new) {{ReplicaInPipeline}} 
interface.
* Move the functions related to writer threads ({{stopWriter}}, 
{{attemptToSetWriter}}, {{interruptThread}} and {{setWriter}}) to the new 
{{ReplicaInPipeline}} interface (i.e., the old {{ReplicaInPipelineInterface}}), 
as only {{ReplicaInPipeline}} objects will be associated with writer threads. 

The idea behind the changes above is to add a new {{ProvidedReplica}} class (an 
implementation of {{ReplicaInfo}}) which can be: 
(a) used to represent replicas stored in a provided storage (described in more 
detail in the design documentation of HDFS-9806).
(b) treated as any other {{ReplicaInfo}} in the rest of the code. This would 
avoid changes to the rest of the Datanode as part of HDFS-9806. 
(c) written to using the existing replication pipeline, without implementing a 
separate write pipeline for HDFS-9806. 
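To make point 2 concrete, dispatching on the replica state rather than the
concrete subclass could look like the following hedged sketch (the surrounding
method is assumed):

{code}
// Before: the type check ties the caller to a particular subclass.
if (replica instanceof ReplicaBeingWritten) {
  // handle a replica being written
}

// After: the state check works for any ReplicaInfo implementation,
// including a future ProvidedReplica.
if (replica.getState() == ReplicaState.RBW) {
  // handle a replica being written
}
{code}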


> Modify ReplicaInfo to remove the assumption that replica metadata and data 
> are stored in java.io.File.
> --
>
> Key: HDFS-10636
> URL: https://issues.apache.org/jira/browse/HDFS-10636
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: datanode, fs
>Reporter: Virajith Jalaparti
>
> Replace java.io.File related APIs from {{ReplicaInfo}}, and enable the 
> definition of new {{ReplicaInfo}} sub-classes whose metadata and data can be 
> present on external storages (HDFS-9806). 



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-10636) Modify ReplicaInfo to remove the assumption that replica metadata and data are stored in java.io.File.

2016-07-15 Thread Virajith Jalaparti (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-10636?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Virajith Jalaparti updated HDFS-10636:
--
Description: Replace java.io.File related APIs from {{ReplicaInfo}}, and 
enable the definition of new {{ReplicaInfo}} sub-classes whose metadata and 
data can be present on external storages (HDFS-9806).   (was: Replace 
java.io.File related APIs from ReplicaInfo, )

> Modify ReplicaInfo to remove the assumption that replica metadata and data 
> are stored in java.io.File.
> --
>
> Key: HDFS-10636
> URL: https://issues.apache.org/jira/browse/HDFS-10636
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: datanode, fs
>Reporter: Virajith Jalaparti
>
> Replace java.io.File related APIs from {{ReplicaInfo}}, and enable the 
> definition of new {{ReplicaInfo}} sub-classes whose metadata and data can be 
> present on external storages (HDFS-9806). 



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-10636) Modify DN implementation to remove the assumption that replica metadata and data are stored in java.io.File.

2016-07-15 Thread Virajith Jalaparti (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-10636?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Virajith Jalaparti updated HDFS-10636:
--
Description: Replace java.io.File related APIs from ReplicaInfo,   (was: 
Remove )

> Modify DN implementation to remove the assumption that replica metadata and 
> data are stored in java.io.File.
> 
>
> Key: HDFS-10636
> URL: https://issues.apache.org/jira/browse/HDFS-10636
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: datanode, fs
>Reporter: Virajith Jalaparti
>
> Replace java.io.File related APIs from ReplicaInfo, 



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-10636) Modify ReplicaInfo to remove the assumption that replica metadata and data are stored in java.io.File.

2016-07-15 Thread Virajith Jalaparti (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-10636?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Virajith Jalaparti updated HDFS-10636:
--
Summary: Modify ReplicaInfo to remove the assumption that replica metadata 
and data are stored in java.io.File.  (was: Modify DN implementation to remove 
the assumption that replica metadata and data are stored in java.io.File.)

> Modify ReplicaInfo to remove the assumption that replica metadata and data 
> are stored in java.io.File.
> --
>
> Key: HDFS-10636
> URL: https://issues.apache.org/jira/browse/HDFS-10636
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: datanode, fs
>Reporter: Virajith Jalaparti
>
> Replace java.io.File related APIs from ReplicaInfo, 



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-10636) Modify DN implementation to remove the assumption that replica metadata and data are stored in java.io.File.

2016-07-15 Thread Virajith Jalaparti (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-10636?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Virajith Jalaparti updated HDFS-10636:
--
Description: Remove 

> Modify DN implementation to remove the assumption that replica metadata and 
> data are stored in java.io.File.
> 
>
> Key: HDFS-10636
> URL: https://issues.apache.org/jira/browse/HDFS-10636
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: datanode, fs
>Reporter: Virajith Jalaparti
>
> Remove 



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-9809) Abstract implementation-specific details from the datanode

2016-07-15 Thread Lei (Eddy) Xu (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-9809?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15380317#comment-15380317
 ] 

Lei (Eddy) Xu commented on HDFS-9809:
-

Thanks, [~virajith].

> Abstract implementation-specific details from the datanode
> --
>
> Key: HDFS-9809
> URL: https://issues.apache.org/jira/browse/HDFS-9809
> Project: Hadoop HDFS
>  Issue Type: Task
>  Components: datanode, fs
>Reporter: Virajith Jalaparti
>Assignee: Virajith Jalaparti
> Attachments: HDFS-9809.001.patch, HDFS-9809.002.patch, 
> HDFS-9809.003.patch, HDFS-9809.004.patch
>
>
> Multiple parts of the Datanode (FsVolumeSpi, ReplicaInfo, FSVolumeImpl etc.) 
> implicitly assume that blocks are stored in java.io.File(s) and that volumes 
> are divided into directories. We propose to abstract these details, which 
> would help in supporting other storages. 



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-9809) Abstract implementation-specific details from the datanode

2016-07-15 Thread Virajith Jalaparti (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-9809?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15380316#comment-15380316
 ] 

Virajith Jalaparti commented on HDFS-9809:
--

As the patch size has become large, I am creating sub-tasks to keep the patches 
more manageable. Each sub-task deals with changes to one particular 
class/interface and with the changes that this imposes on the rest of the 
code. 

> Abstract implementation-specific details from the datanode
> --
>
> Key: HDFS-9809
> URL: https://issues.apache.org/jira/browse/HDFS-9809
> Project: Hadoop HDFS
>  Issue Type: Task
>  Components: datanode, fs
>Reporter: Virajith Jalaparti
>Assignee: Virajith Jalaparti
> Attachments: HDFS-9809.001.patch, HDFS-9809.002.patch, 
> HDFS-9809.003.patch, HDFS-9809.004.patch
>
>
> Multiple parts of the Datanode (FsVolumeSpi, ReplicaInfo, FSVolumeImpl etc.) 
> implicitly assume that blocks are stored in java.io.File(s) and that volumes 
> are divided into directories. We propose to abstract these details, which 
> would help in supporting other storages. 



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-10301) BlockReport retransmissions may lead to storages falsely being declared zombie if storage report processing happens out of order

2016-07-15 Thread Konstantin Shvachko (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-10301?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15380310#comment-15380310
 ] 

Konstantin Shvachko commented on HDFS-10301:


Reviewed the latest patch. Got a few nits:
# In {{BlockManager.removeZombieStorages()}} you should add an {{if(node == 
null)}} check; the node could have been deleted while we were not holding the 
{{writeLock}} (see the sketch below).
# {{DatanodeDescriptor.removeZombieStorages()}} does not need to be public; it 
should be package-private.
# Remove the empty-line change in {{BPServiceActor.blockReport()}}. Also, the 
comment there is confusing; you might want to clarify it.
# The checkstyle warning says that either {{STORAGE_REPORT}} should be declared 
{{final}} or it should not be all-capital. I think {{final}} makes sense.
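
A minimal sketch of the null check suggested in nit 1; the surrounding code and 
the lookup call are assumptions for illustration, not the actual patch:

{code}
// Hedged sketch: re-resolve the node under the write lock and bail out
// if it was removed while the lock was not held.
namesystem.writeLock();
try {
  DatanodeDescriptor node = datanodeManager.getDatanode(datanodeUuid); // assumed lookup
  if (node == null) {
    return; // node was deleted while we were not holding the writeLock
  }
  node.removeZombieStorages();
} finally {
  namesystem.writeUnlock();
}
{code}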

Also I think that [~cmccabe]'s veto, formulated as
??I am -1 on a patch which adds extra RPCs.??
is fully addressed now. The storage report was added to the last RPC 
representing a single block report. The last patch does not add extra RPCs.
So I plan to commit this three days from today, provided of course that the 
nits above are fixed.

> BlockReport retransmissions may lead to storages falsely being declared 
> zombie if storage report processing happens out of order
> 
>
> Key: HDFS-10301
> URL: https://issues.apache.org/jira/browse/HDFS-10301
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: namenode
>Affects Versions: 2.6.1
>Reporter: Konstantin Shvachko
>Assignee: Vinitha Reddy Gankidi
>Priority: Critical
> Attachments: HDFS-10301.002.patch, HDFS-10301.003.patch, 
> HDFS-10301.004.patch, HDFS-10301.005.patch, HDFS-10301.006.patch, 
> HDFS-10301.007.patch, HDFS-10301.008.patch, HDFS-10301.01.patch, 
> HDFS-10301.sample.patch, zombieStorageLogs.rtf
>
>
> When the NameNode is busy, a DataNode can time out sending a block report and 
> then sends the block report again. The NameNode, while processing these two 
> reports at the same time, can interleave processing of storages from the 
> different reports. This corrupts the blockReportId field, which makes the 
> NameNode think that some storages are zombie. Replicas from zombie storages 
> are immediately removed, causing missing blocks.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-10326) Disable setting tcp socket send/receive buffers for write pipelines

2016-07-15 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-10326?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15380305#comment-15380305
 ] 

Hadoop QA commented on HDFS-10326:
--

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
24s{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 1 new or modified test 
files. {color} |
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
35s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  8m 
40s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  1m 
55s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
34s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m 
52s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  0m 
28s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  3m 
26s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
19s{color} | {color:green} trunk passed {color} |
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m  
7s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  1m 
22s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  1m 
27s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  1m 
27s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
28s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m 
27s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  0m 
22s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} xml {color} | {color:green}  0m  
1s{color} | {color:green} The patch has no ill-formed XML file. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  3m 
25s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
11s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  0m 
56s{color} | {color:green} hadoop-hdfs-client in the patch passed. {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 70m 29s{color} 
| {color:red} hadoop-hdfs in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
20s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black}102m 13s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | 
hadoop.hdfs.server.blockmanagement.TestPendingInvalidateBlock |
|   | hadoop.hdfs.server.namenode.TestEditLog |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:9560f25 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12818243/HDFS-10326.001.patch |
| JIRA Issue | HDFS-10326 |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  findbugs  checkstyle  xml  |
| uname | Linux 59971f725a14 3.13.0-36-lowlatency #63-Ubuntu SMP PREEMPT Wed 
Sep 3 21:56:12 UTC 2014 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh 
|
| git revision | trunk / f5f1c81 |
| Default Java | 1.8.0_91 |
| findbugs | v3.0.0 |
| unit | 
https://builds.apache.org/job/PreCommit-HDFS-Build/16073/artifact/patchprocess/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt
 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-HDFS-Build/16073/testReport/ |
| modules | C: hadoop-hd

[jira] [Commented] (HDFS-9809) Abstract implementation-specific details from the datanode

2016-07-15 Thread Virajith Jalaparti (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-9809?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15380302#comment-15380302
 ] 

Virajith Jalaparti commented on HDFS-9809:
--

Hi [~ehiggs], Thank you for the comments! I will address these in the 
subsequent patches. 

> Abstract implementation-specific details from the datanode
> --
>
> Key: HDFS-9809
> URL: https://issues.apache.org/jira/browse/HDFS-9809
> Project: Hadoop HDFS
>  Issue Type: Task
>  Components: datanode, fs
>Reporter: Virajith Jalaparti
>Assignee: Virajith Jalaparti
> Attachments: HDFS-9809.001.patch, HDFS-9809.002.patch, 
> HDFS-9809.003.patch, HDFS-9809.004.patch
>
>
> Multiple parts of the Datanode (FsVolumeSpi, ReplicaInfo, FSVolumeImpl etc.) 
> implicitly assume that blocks are stored in java.io.File(s) and that volumes 
> are divided into directories. We propose to abstract these details, which 
> would help in supporting other storages. 



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Created] (HDFS-10638) Modifications to remove the assumption that StorageLocation is associated with java.io.File.

2016-07-15 Thread Virajith Jalaparti (JIRA)
Virajith Jalaparti created HDFS-10638:
-

 Summary: Modifications to remove the assumption that 
StorageLocation is associated with java.io.File.
 Key: HDFS-10638
 URL: https://issues.apache.org/jira/browse/HDFS-10638
 Project: Hadoop HDFS
  Issue Type: Sub-task
Reporter: Virajith Jalaparti






--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-10636) Modify DN implementation to remove the assumption that replica metadata and data are stored in java.io.File.

2016-07-15 Thread Virajith Jalaparti (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-10636?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Virajith Jalaparti updated HDFS-10636:
--
Summary: Modify DN implementation to remove the assumption that replica 
metadata and data are stored in java.io.File.  (was: Modify DN implementation 
to remove the assumption that replicas are stored in files.)

> Modify DN implementation to remove the assumption that replica metadata and 
> data are stored in java.io.File.
> 
>
> Key: HDFS-10636
> URL: https://issues.apache.org/jira/browse/HDFS-10636
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: datanode, fs
>Reporter: Virajith Jalaparti
>




--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Created] (HDFS-10637) Modifications to remove the assumption that FsVolumes are backed by java.io.File.

2016-07-15 Thread Virajith Jalaparti (JIRA)
Virajith Jalaparti created HDFS-10637:
-

 Summary: Modifications to remove the assumption that FsVolumes are 
backed by java.io.File.
 Key: HDFS-10637
 URL: https://issues.apache.org/jira/browse/HDFS-10637
 Project: Hadoop HDFS
  Issue Type: Sub-task
Reporter: Virajith Jalaparti






--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Created] (HDFS-10636) Modify DN implementation to remove the assumption that replicas are stored in files.

2016-07-15 Thread Virajith Jalaparti (JIRA)
Virajith Jalaparti created HDFS-10636:
-

 Summary: Modify DN implementation to remove the assumption that 
replicas are stored in files.
 Key: HDFS-10636
 URL: https://issues.apache.org/jira/browse/HDFS-10636
 Project: Hadoop HDFS
  Issue Type: Sub-task
Reporter: Virajith Jalaparti






--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-10441) libhdfs++: HA namenode support

2016-07-15 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-10441?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15380267#comment-15380267
 ] 

Hadoop QA commented on HDFS-10441:
--

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 11m 
57s{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:red}-1{color} | {color:red} test4tests {color} | {color:red}  0m  
0s{color} | {color:red} The patch doesn't appear to include any new or modified 
tests. Please justify why no new tests are needed for this patch. Also please 
list what manual steps were performed to verify this patch. {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  7m 
38s{color} | {color:green} HDFS-8707 passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  5m 
13s{color} | {color:green} HDFS-8707 passed with JDK v1.8.0_91 {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  5m 
13s{color} | {color:green} HDFS-8707 passed with JDK v1.7.0_101 {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
15s{color} | {color:green} HDFS-8707 passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  0m 
13s{color} | {color:green} HDFS-8707 passed {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
 9s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  5m 
24s{color} | {color:green} the patch passed with JDK v1.8.0_91 {color} |
| {color:green}+1{color} | {color:green} cc {color} | {color:green}  5m 
24s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  5m 
24s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  5m 
26s{color} | {color:green} the patch passed with JDK v1.7.0_101 {color} |
| {color:green}+1{color} | {color:green} cc {color} | {color:green}  5m 
26s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  5m 
26s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
12s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  0m 
10s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  7m 
34s{color} | {color:green} hadoop-hdfs-native-client in the patch passed with 
JDK v1.7.0_101. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
19s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 58m 50s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:0cf5e66 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12818244/HDFS-10441.HDFS-8707.014.patch
 |
| JIRA Issue | HDFS-10441 |
| Optional Tests |  asflicense  compile  cc  mvnsite  javac  unit  |
| uname | Linux 7bb87c634a8c 3.13.0-36-lowlatency #63-Ubuntu SMP PREEMPT Wed 
Sep 3 21:56:12 UTC 2014 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh 
|
| git revision | HDFS-8707 / d18e396 |
| Default Java | 1.7.0_101 |
| Multi-JDK versions |  /usr/lib/jvm/java-8-oracle:1.8.0_91 
/usr/lib/jvm/java-7-openjdk-amd64:1.7.0_101 |
| JDK v1.7.0_101  Test Results | 
https://builds.apache.org/job/PreCommit-HDFS-Build/16072/testReport/ |
| modules | C: hadoop-hdfs-project/hadoop-hdfs-native-client U: 
hadoop-hdfs-project/hadoop-hdfs-native-client |
| Console output | 
https://builds.apache.org/job/PreCommit-HDFS-Build/16072/console |
| Powered by | Apache Yetus 0.4.0-SNAPSHOT   http://yetus.apache.org |


This message was automatically generated.



> libhdfs++: HA namenode support
> --
>
> Key: HDFS-10441
> URL: https://issues.apache.org/jira/browse/HDFS-10441
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: hdfs-client
>Reporter: James Clampffer
>Assignee: James Clampffer
> Attachments: HDFS-10441.HDFS-8707.000.pat

[jira] [Commented] (HDFS-9271) Implement basic NN operations

2016-07-15 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-9271?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15380200#comment-15380200
 ] 

Hadoop QA commented on HDFS-9271:
-

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
26s{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:red}-1{color} | {color:red} test4tests {color} | {color:red}  0m  
0s{color} | {color:red} The patch doesn't appear to include any new or modified 
tests. Please justify why no new tests are needed for this patch. Also please 
list what manual steps were performed to verify this patch. {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  7m 
56s{color} | {color:green} HDFS-8707 passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  5m 
45s{color} | {color:green} HDFS-8707 passed with JDK v1.8.0_91 {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  5m 
46s{color} | {color:green} HDFS-8707 passed with JDK v1.7.0_101 {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
16s{color} | {color:green} HDFS-8707 passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  0m 
13s{color} | {color:green} HDFS-8707 passed {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
10s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  5m 
43s{color} | {color:green} the patch passed with JDK v1.8.0_91 {color} |
| {color:green}+1{color} | {color:green} cc {color} | {color:green}  5m 
43s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  5m 
43s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  5m 
45s{color} | {color:green} the patch passed with JDK v1.7.0_101 {color} |
| {color:green}+1{color} | {color:green} cc {color} | {color:green}  5m 
45s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  5m 
45s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
14s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  0m 
 9s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red}  7m 34s{color} 
| {color:red} hadoop-hdfs-native-client in the patch failed with JDK 
v1.7.0_101. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
20s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 49m 58s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| JDK v1.8.0_91 Failed CTEST tests | 
test_libhdfs_threaded_hdfspp_test_shim_static |
| JDK v1.7.0_101 Failed CTEST tests | 
test_libhdfs_threaded_hdfspp_test_shim_static |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:0cf5e66 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12818235/HDFS-9271.HDFS-8707.004.patch
 |
| JIRA Issue | HDFS-9271 |
| Optional Tests |  asflicense  compile  cc  mvnsite  javac  unit  |
| uname | Linux a36bd457e7b6 3.13.0-36-lowlatency #63-Ubuntu SMP PREEMPT Wed 
Sep 3 21:56:12 UTC 2014 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh 
|
| git revision | HDFS-8707 / d18e396 |
| Default Java | 1.7.0_101 |
| Multi-JDK versions |  /usr/lib/jvm/java-8-oracle:1.8.0_91 
/usr/lib/jvm/java-7-openjdk-amd64:1.7.0_101 |
| CTEST | 
https://builds.apache.org/job/PreCommit-HDFS-Build/16071/artifact/patchprocess/patch-hadoop-hdfs-project_hadoop-hdfs-native-client-jdk1.8.0_91-ctest.txt
 |
| CTEST | 
https://builds.apache.org/job/PreCommit-HDFS-Build/16071/artifact/patchprocess/patch-hadoop-hdfs-project_hadoop-hdfs-native-client-jdk1.7.0_101-ctest.txt
 |
| unit | 
https://builds.apache.org/job/PreCommit-HDFS-Build/16071/artifact/patchprocess/patch-unit-hadoop-hdfs-project_hadoop-hdfs-native-client-jdk1.7.0_101.txt
 |
| JDK v1.7.0_101  Test Results | 
https://builds.apache.org/job/PreCommit-HDFS-Build/16071/testReport/ |
| modules | C: hadoop-hdfs-project

[jira] [Commented] (HDFS-10627) Volume Scanner mark a block as "suspect" even if the block sender encounters 'Broken pipe' or 'Connection reset by peer' exception

2016-07-15 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-10627?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15380159#comment-15380159
 ] 

Hadoop QA commented on HDFS-10627:
--

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
21s{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:red}-1{color} | {color:red} test4tests {color} | {color:red}  0m  
0s{color} | {color:red} The patch doesn't appear to include any new or modified 
tests. Please justify why no new tests are needed for this patch. Also please 
list what manual steps were performed to verify this patch. {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  6m 
54s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
44s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
26s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
52s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  0m 
12s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
41s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
55s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
47s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
42s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
42s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
23s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
49s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  0m 
 9s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
48s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
53s{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 85m 12s{color} 
| {color:red} hadoop-hdfs in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
19s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black}104m 21s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | hadoop.hdfs.server.datanode.TestDataNodeVolumeFailure |
|   | hadoop.hdfs.server.namenode.TestUpgradeDomainBlockPlacementPolicy |
| Timed out junit tests | org.apache.hadoop.hdfs.TestLeaseRecovery2 |
|   | org.apache.hadoop.hdfs.server.namenode.snapshot.TestSnapshot |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:9560f25 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12818222/HDFS-10627.patch |
| JIRA Issue | HDFS-10627 |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  findbugs  checkstyle  |
| uname | Linux dda4ed9a372e 3.13.0-36-lowlatency #63-Ubuntu SMP PREEMPT Wed 
Sep 3 21:56:12 UTC 2014 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh 
|
| git revision | trunk / c48e9d6 |
| Default Java | 1.8.0_91 |
| findbugs | v3.0.0 |
| unit | 
https://builds.apache.org/job/PreCommit-HDFS-Build/16070/artifact/patchprocess/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt
 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-HDFS-Build/16070/testReport/ |
| modules | C: hadoop-hdfs-project/hadoop-hdfs U: 
hadoop-hdfs-project/hadoop-hdfs |
| Console output | 
https://builds.apache.org/job/PreCommit-HDFS-Build/16070/console |
| Powered by | Apache Yetus 0.4.0-SNAPSHOT   http://yetus.apache.org |


This message was automatically generated.




[jira] [Updated] (HDFS-10441) libhdfs++: HA namenode support

2016-07-15 Thread James Clampffer (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-10441?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

James Clampffer updated HDFS-10441:
---
Attachment: HDFS-10441.HDFS-8707.014.patch

New patch to make sure the failover count is actually incremented for all 
types of failovers; IncrementFailoverCount implicitly resets the retry count, 
as [~bobhansen] suggested.  I cited the wrong line of code with regard to the 
retry count last time; what I really meant was
{code}
for (unsigned int i = 0; i < pendingRequests.size(); i++)
  pendingRequests[i]->IncrementFailoverCount();
{code}

That's been moved to the lower block that calls ConnectAndFlush.

Also changed the RPC attempts from a hardcoded 3 to the default rpc_retry 
count while I was in there.

Please let me know what you think.  Thanks for all the reviews.

> libhdfs++: HA namenode support
> --
>
> Key: HDFS-10441
> URL: https://issues.apache.org/jira/browse/HDFS-10441
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: hdfs-client
>Reporter: James Clampffer
>Assignee: James Clampffer
> Attachments: HDFS-10441.HDFS-8707.000.patch, 
> HDFS-10441.HDFS-8707.002.patch, HDFS-10441.HDFS-8707.003.patch, 
> HDFS-10441.HDFS-8707.004.patch, HDFS-10441.HDFS-8707.005.patch, 
> HDFS-10441.HDFS-8707.006.patch, HDFS-10441.HDFS-8707.007.patch, 
> HDFS-10441.HDFS-8707.008.patch, HDFS-10441.HDFS-8707.009.patch, 
> HDFS-10441.HDFS-8707.010.patch, HDFS-10441.HDFS-8707.011.patch, 
> HDFS-10441.HDFS-8707.012.patch, HDFS-10441.HDFS-8707.013.patch, 
> HDFS-10441.HDFS-8707.014.patch, HDFS-8707.HDFS-10441.001.patch
>
>
> If a cluster is HA enabled then do proper failover.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-10326) Disable setting tcp socket send/receive buffers for write pipelines

2016-07-15 Thread Mingliang Liu (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-10326?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Mingliang Liu updated HDFS-10326:
-
Attachment: HDFS-10326.001.patch

Upload the same v1 patch to trigger Jenkins.

> Disable setting tcp socket send/receive buffers for write pipelines
> ---
>
> Key: HDFS-10326
> URL: https://issues.apache.org/jira/browse/HDFS-10326
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: datanode, hdfs
>Affects Versions: 2.6.0
>Reporter: Daryn Sharp
>Assignee: Daryn Sharp
> Attachments: HDFS-10326.000.patch, HDFS-10326.001.patch, 
> HDFS-10326.001.patch
>
>
> The DataStreamer and the Datanode use a hardcoded 
> DEFAULT_DATA_SOCKET_SIZE=128K for the send and receive buffers of a write 
> pipeline.  Explicitly setting tcp buffer sizes disables tcp stack 
> auto-tuning.  
> The hardcoded value will saturate a 1Gb link with a 1ms RTT.  105Mbps at 
> 10ms.  A paltry 11Mbps over a 100ms long haul.  10Gb networks are 
> underutilized.
> There should either be a configuration to completely disable setting the 
> buffers, or the setReceiveBuffer and setSendBuffer calls should be removed 
> entirely.
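
For context, those figures follow from throughput ~= window / RTT with a fixed 
128K window: 128KB over 1ms is about 1Gbps, over 10ms about 105Mbps, and over 
100ms roughly 10-11Mbps. A minimal sketch of the "configurable, default off" 
option, assuming a hypothetical config key (this is not the attached patch):

{code}
// A minimal sketch, assuming a hypothetical key: only pin the TCP buffers
// when an explicit size is configured; 0 skips the calls entirely so the
// kernel's auto-tuning stays in effect.
import java.net.Socket;
import org.apache.hadoop.conf.Configuration;

public class SocketBufferExample {
  public static void tune(Socket socket, Configuration conf) throws Exception {
    // hypothetical key; 0 means "do not set, let the TCP stack auto-tune"
    int size = conf.getInt("dfs.data.transfer.socket.buffer.size", 0);
    if (size > 0) {
      socket.setSendBufferSize(size);    // explicit size disables auto-tuning
      socket.setReceiveBufferSize(size);
    }
  }
}
{code}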



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-10326) Disable setting tcp socket send/receive buffers for write pipelines

2016-07-15 Thread Mingliang Liu (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-10326?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Mingliang Liu updated HDFS-10326:
-
Status: Open  (was: Patch Available)

> Disable setting tcp socket send/receive buffers for write pipelines
> ---
>
> Key: HDFS-10326
> URL: https://issues.apache.org/jira/browse/HDFS-10326
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: datanode, hdfs
>Affects Versions: 2.6.0
>Reporter: Daryn Sharp
>Assignee: Daryn Sharp
> Attachments: HDFS-10326.000.patch, HDFS-10326.001.patch
>
>
> The DataStreamer and the Datanode use a hardcoded 
> DEFAULT_DATA_SOCKET_SIZE=128K for the send and receive buffers of a write 
> pipeline.  Explicitly setting tcp buffer sizes disables tcp stack 
> auto-tuning.  
> The hardcoded value will saturate a 1Gb link with a 1ms RTT.  105Mbps at 
> 10ms.  A paltry 11Mbps over a 100ms long haul.  10Gb networks are 
> underutilized.
> There should either be a configuration to completely disable setting the 
> buffers, or the setReceiveBuffer and setSendBuffer calls should be removed 
> entirely.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-10326) Disable setting tcp socket send/receive buffers for write pipelines

2016-07-15 Thread Mingliang Liu (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-10326?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Mingliang Liu updated HDFS-10326:
-
Status: Patch Available  (was: Open)

> Disable setting tcp socket send/receive buffers for write pipelines
> ---
>
> Key: HDFS-10326
> URL: https://issues.apache.org/jira/browse/HDFS-10326
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: datanode, hdfs
>Affects Versions: 2.6.0
>Reporter: Daryn Sharp
>Assignee: Daryn Sharp
> Attachments: HDFS-10326.000.patch, HDFS-10326.001.patch
>
>
> The DataStreamer and the Datanode use a hardcoded 
> DEFAULT_DATA_SOCKET_SIZE=128K for the send and receive buffers of a write 
> pipeline.  Explicitly setting tcp buffer sizes disables tcp stack 
> auto-tuning.  
> The hardcoded value will saturate a 1Gb link with a 1ms RTT.  105Mbps at 
> 10ms.  A paltry 11Mbps over a 100ms long haul.  10Gb networks are 
> underutilized.
> There should either be a configuration to completely disable setting the 
> buffers, or the setReceiveBuffer and setSendBuffer calls should be removed 
> entirely.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-9271) Implement basic NN operations

2016-07-15 Thread Anatoli Shein (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-9271?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Anatoli Shein updated HDFS-9271:

Attachment: HDFS-9271.HDFS-8707.004.patch

Apparently there was a bug with hdfsGetDefaultBlockSize. It is fixed now and a 
new patch is attached. Please review.

> Implement basic NN operations
> -
>
> Key: HDFS-9271
> URL: https://issues.apache.org/jira/browse/HDFS-9271
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: hdfs-client
>Reporter: Bob Hansen
>Assignee: Anatoli Shein
> Attachments: HDFS-9271.HDFS-8707.000.patch, 
> HDFS-9271.HDFS-8707.001.patch, HDFS-9271.HDFS-8707.002.patch, 
> HDFS-9271.HDFS-8707.003.patch, HDFS-9271.HDFS-8707.004.patch
>
>
> Expose via C and C++ API:
> * mkdirs
> * rename
> * delete
> * stat
> * chmod
> * chown
> * getListing
> * setOwner



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-10627) Volume Scanner mark a block as "suspect" even if the block sender encounters 'Broken pipe' or 'Connection reset by peer' exception

2016-07-15 Thread Yongjun Zhang (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-10627?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15380107#comment-15380107
 ] 

Yongjun Zhang commented on HDFS-10627:
--

Thanks guys for the work here!

When detecting block corruption (a checksum error) in a pipeline write or in 
the block transfer of a pipeline recovery, I hope a checksum exception can be 
thrown and delivered back to the sender, instead of just disconnecting. Is 
this totally not feasible here?


> Volume Scanner mark a block as "suspect" even if the block sender encounters 
> 'Broken pipe' or 'Connection reset by peer' exception
> --
>
> Key: HDFS-10627
> URL: https://issues.apache.org/jira/browse/HDFS-10627
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: hdfs
>Affects Versions: 2.7.0
>Reporter: Rushabh S Shah
>Assignee: Rushabh S Shah
> Attachments: HDFS-10627.patch
>
>
> In the BlockSender code,
> {code:title=BlockSender.java|borderStyle=solid}
> if (!ioem.startsWith("Broken pipe") && !ioem.startsWith("Connection reset")) {
>   LOG.error("BlockSender.sendChunks() exception: ", e);
> }
> datanode.getBlockScanner().markSuspectBlock(
>   volumeRef.getVolume().getStorageID(),
>   block);
> {code}
> Before HDFS-7686, the block was marked as suspect only if the exception 
> message didn't start with Broken pipe or Connection reset.
> But after HDFS-7686, the block is marked as suspect irrespective of the 
> exception message.
> In one of our datanodes, it took approximately a whole day (22 hours) to go 
> through all the suspect blocks before scanning one corrupt block.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-9770) Namenode doesn't pass config to UGI in format

2016-07-15 Thread Daniel Templeton (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-9770?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Daniel Templeton updated HDFS-9770:
---
Resolution: Won't Fix
Status: Resolved  (was: Patch Available)

Looking at this more carefully now, I think the right answer is that the caller 
should be initializing the UGI on its own before creating the MiniDFS.
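
A minimal sketch of that approach, assuming a test-style caller (the setup 
shown is illustrative, not code from the patch):

{code}
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hdfs.HdfsConfiguration;
import org.apache.hadoop.hdfs.MiniDFSCluster;
import org.apache.hadoop.security.UserGroupInformation;

public class UgiInitExample {
  public static MiniDFSCluster start() throws Exception {
    Configuration conf = new HdfsConfiguration();
    // the caller syncs the UGI with its config before building the cluster
    UserGroupInformation.setConfiguration(conf);
    return new MiniDFSCluster.Builder(conf).build();
  }
}
{code}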

> Namenode doesn't pass config to UGI in format
> -
>
> Key: HDFS-9770
> URL: https://issues.apache.org/jira/browse/HDFS-9770
> Project: Hadoop HDFS
>  Issue Type: Bug
>Affects Versions: 2.8.0
>Reporter: Daniel Templeton
>Assignee: Daniel Templeton
> Attachments: HDFS-9770.001.patch
>
>
> The {{NameNode.format()}} method should call 
> {{UserGroupInformation.setConfiguration(conf)}} before using the UGI.  
> Otherwise, the config that the UGI is using is not the same as what the NN is 
> using.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-10627) Volume Scanner mark a block as "suspect" even if the block sender encounters 'Broken pipe' or 'Connection reset by peer' exception

2016-07-15 Thread Wei-Chiu Chuang (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-10627?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15380056#comment-15380056
 ] 

Wei-Chiu Chuang commented on HDFS-10627:


Hi Daryn, I totally get that.

Out of curiosity, why isn't a packet responder instantiated for block transfer 
operations? Looking at the code, a packet responder is only instantiated for 
writing a pipeline.

I was relatively concerned about removing it, because [~yzhangal] and I have 
been diagnosing a block corruption bug very similar to HDFS-4660 and 
HDFS-9220, and a volume scanner that is called upon to scan a suspect block in 
these cases is useful.

> Volume Scanner mark a block as "suspect" even if the block sender encounters 
> 'Broken pipe' or 'Connection reset by peer' exception
> --
>
> Key: HDFS-10627
> URL: https://issues.apache.org/jira/browse/HDFS-10627
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: hdfs
>Affects Versions: 2.7.0
>Reporter: Rushabh S Shah
>Assignee: Rushabh S Shah
> Attachments: HDFS-10627.patch
>
>
> In the BlockSender code,
> {code:title=BlockSender.java|borderStyle=solid}
> if (!ioem.startsWith("Broken pipe") && !ioem.startsWith("Connection reset")) {
>   LOG.error("BlockSender.sendChunks() exception: ", e);
> }
> datanode.getBlockScanner().markSuspectBlock(
>   volumeRef.getVolume().getStorageID(),
>   block);
> {code}
> Before HDFS-7686, the block was marked as suspect only if the exception 
> message didn't start with Broken pipe or Connection reset.
> But after HDFS-7686, the block is marked as suspect irrespective of the 
> exception message.
> In one of our datanodes, it took approximately a whole day (22 hours) to go 
> through all the suspect blocks before scanning one corrupt block.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-10625) VolumeScanner to report why a block is found bad

2016-07-15 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-10625?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15380048#comment-15380048
 ] 

Hadoop QA commented on HDFS-10625:
--

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
21s{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:red}-1{color} | {color:red} test4tests {color} | {color:red}  0m  
0s{color} | {color:red} The patch doesn't appear to include any new or modified 
tests. Please justify why no new tests are needed for this patch. Also please 
list what manual steps were performed to verify this patch. {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  7m 
16s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
48s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
26s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
54s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  0m 
12s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
47s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
56s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
51s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
45s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
45s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
24s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
53s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  0m 
10s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:red}-1{color} | {color:red} findbugs {color} | {color:red}  1m 
55s{color} | {color:red} hadoop-hdfs-project/hadoop-hdfs generated 1 new + 0 
unchanged - 0 fixed = 1 total (was 0) {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
54s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 59m  
3s{color} | {color:green} hadoop-hdfs in the patch passed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
21s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 79m 11s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| FindBugs | module:hadoop-hdfs-project/hadoop-hdfs |
|  |  org.apache.hadoop.hdfs.server.datanode.BlockSender.verifyChecksum(byte[], 
int, int, int, int) might ignore 
org.apache.hadoop.hdfs.server.datanode.ReplicaNotFoundException  At 
BlockSender.java:might ignore 
org.apache.hadoop.hdfs.server.datanode.ReplicaNotFoundException  At 
BlockSender.java:[line 693] |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:9560f25 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12818217/HDFS-10625-1.patch |
| JIRA Issue | HDFS-10625 |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  findbugs  checkstyle  |
| uname | Linux a663552a7059 3.13.0-36-lowlatency #63-Ubuntu SMP PREEMPT Wed 
Sep 3 21:56:12 UTC 2014 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh 
|
| git revision | trunk / c48e9d6 |
| Default Java | 1.8.0_91 |
| findbugs | v3.0.0 |
| findbugs | 
https://builds.apache.org/job/PreCommit-HDFS-Build/16069/artifact/patchprocess/new-findbugs-hadoop-hdfs-project_hadoop-hdfs.html
 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-HDFS-Build/16069/testReport/ |
| modules | C: hadoop-hdfs-project/hadoop-hdfs U: 
hadoop-hdfs-project/hadoop-hdfs |
| Console output | 
https://builds.apache.org/job/PreCommit-HDFS-Build/16069/console |
| Powered by | Apache Yetus 0.4.0-SNAPSHOT   http://yetus.apache.org |

[jira] [Updated] (HDFS-10627) Volume Scanner mark a block as "suspect" even if the block sender encounters 'Broken pipe' or 'Connection reset by peer' exception

2016-07-15 Thread Rushabh S Shah (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-10627?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Rushabh S Shah updated HDFS-10627:
--
Status: Patch Available  (was: Open)

> Volume Scanner mark a block as "suspect" even if the block sender encounters 
> 'Broken pipe' or 'Connection reset by peer' exception
> --
>
> Key: HDFS-10627
> URL: https://issues.apache.org/jira/browse/HDFS-10627
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: hdfs
>Affects Versions: 2.7.0
>Reporter: Rushabh S Shah
>Assignee: Rushabh S Shah
> Attachments: HDFS-10627.patch
>
>
> In the BlockSender code,
> {code:title=BlockSender.java|borderStyle=solid}
> if (!ioem.startsWith("Broken pipe") && !ioem.startsWith("Connection reset")) {
>   LOG.error("BlockSender.sendChunks() exception: ", e);
> }
> datanode.getBlockScanner().markSuspectBlock(
>   volumeRef.getVolume().getStorageID(),
>   block);
> {code}
> Before HDFS-7686, the block was marked as suspect only if the exception 
> message didn't start with Broken pipe or Connection reset.
> But after HDFS-7686, the block is marked as suspect irrespective of the 
> exception message.
> In one of our datanodes, it took approximately a whole day (22 hours) to go 
> through all the suspect blocks before scanning one corrupt block.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-10627) Volume Scanner mark a block as "suspect" even if the block sender encounters 'Broken pipe' or 'Connection reset by peer' exception

2016-07-15 Thread Rushabh S Shah (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-10627?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Rushabh S Shah updated HDFS-10627:
--
Attachment: HDFS-10627.patch

Brought back the old behavior so that the block sender will mark a block as 
suspect only if the exception message doesn't start with Broken pipe or 
Connection reset.
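
A sketch of the restored control flow, assembled from the snippet quoted in 
the description below (identifiers come from that snippet; this is not the 
literal patch):

{code}
// Pre-HDFS-7686 behavior: benign client disconnects neither log an error
// nor feed the suspect-block queue that the VolumeScanner drains.
String ioem = e.getMessage();
if (!ioem.startsWith("Broken pipe") && !ioem.startsWith("Connection reset")) {
  LOG.error("BlockSender.sendChunks() exception: ", e);
  datanode.getBlockScanner().markSuspectBlock(
      volumeRef.getVolume().getStorageID(),
      block);
}
{code}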

> Volume Scanner mark a block as "suspect" even if the block sender encounters 
> 'Broken pipe' or 'Connection reset by peer' exception
> --
>
> Key: HDFS-10627
> URL: https://issues.apache.org/jira/browse/HDFS-10627
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: hdfs
>Affects Versions: 2.7.0
>Reporter: Rushabh S Shah
>Assignee: Rushabh S Shah
> Attachments: HDFS-10627.patch
>
>
> In the BlockSender code,
> {code:title=BlockSender.java|borderStyle=solid}
> if (!ioem.startsWith("Broken pipe") && !ioem.startsWith("Connection reset")) {
>   LOG.error("BlockSender.sendChunks() exception: ", e);
> }
> datanode.getBlockScanner().markSuspectBlock(
>   volumeRef.getVolume().getStorageID(),
>   block);
> {code}
> Before HDFS-7686, the block was marked as suspect only if the exception 
> message didn't start with Broken pipe or Connection reset.
> But after HDFS-7686, the block is marked as suspect irrespective of the 
> exception message.
> In one of our datanodes, it took approximately a whole day (22 hours) to go 
> through all the suspect blocks before scanning one corrupt block.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-10625) VolumeScanner to report why a block is found bad

2016-07-15 Thread Rushabh S Shah (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-10625?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Rushabh S Shah updated HDFS-10625:
--
Attachment: HDFS-10625-1.patch

Thanks [~yzhangal] for the review.
Added a new patch to address your comments.

I ran {{TestBlockScanner#testCorruptBlockHandling}} to confirm the exception 
message:
{noformat}
2016-07-15 13:50:35,445 
[VolumeScannerThread(/Users/rushabhs/hadoop/Hadoop/hadoop-hdfs-project/hadoop-hdfs/target/test/data/dfs/data/data1)]
 INFO  datanode.TestBlockScanner (TestBlockScanner.java:handle(325)) - handling 
block BP-449958863-x-1468608632636:blk_1073741828_1004 (exception 
org.apache.hadoop.fs.ChecksumException: Checksum failed at 0 for replica: 
FinalizedReplica, blk_1073741828_1004, FINALIZED
  getNumBytes() = 4
  getBytesOnDisk()  = 4
  getVisibleLength()= 4
  getVolume()   = 
/Users/rushabhs/hadoop/Hadoop/hadoop-hdfs-project/hadoop-hdfs/target/test/data/dfs/data/data1/current
  getBlockFile()= 
/Users/rushabhs/hadoop/Hadoop/hadoop-hdfs-project/hadoop-hdfs/target/test/data/dfs/data/data1/current/BP-449958863-x-1468608632636/current/finalized/subdir0/subdir0/blk_1073741828)
{noformat}

Please review the revised patch.

>  VolumeScanner to report why a block is found bad
> -
>
> Key: HDFS-10625
> URL: https://issues.apache.org/jira/browse/HDFS-10625
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: datanode, hdfs
>Reporter: Yongjun Zhang
>Assignee: Rushabh S Shah
>  Labels: supportability
> Attachments: HDFS-10625-1.patch, HDFS-10625.patch
>
>
> VolumeScanner may report:
> {code}
> WARN org.apache.hadoop.hdfs.server.datanode.VolumeScanner: Reporting bad 
> blk_1170125248_96458336 on /d/dfs/dn
> {code}
> It would be helpful to report the reason why the block is bad and, especially 
> when the block is corrupt, where the first corrupted chunk in the block is.
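
A hedged sketch of the smallest version of the improvement (the identifiers 
are assumptions, not the committed patch):

{code}
// Pass the triggering exception to the log call so the reason, e.g. a
// ChecksumException carrying the offset of the first bad chunk, lands in
// the VolumeScanner log alongside the block id.
LOG.warn("Reporting bad " + block + " on " + volume, e);
{code}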



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-10625) VolumeScanner to report why a block is found bad

2016-07-15 Thread Rushabh S Shah (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-10625?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Rushabh S Shah updated HDFS-10625:
--
Status: Patch Available  (was: Open)

>  VolumeScanner to report why a block is found bad
> -
>
> Key: HDFS-10625
> URL: https://issues.apache.org/jira/browse/HDFS-10625
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: datanode, hdfs
>Reporter: Yongjun Zhang
>Assignee: Rushabh S Shah
>  Labels: supportability
> Attachments: HDFS-10625-1.patch, HDFS-10625.patch
>
>
> VolumeScanner may report:
> {code}
> WARN org.apache.hadoop.hdfs.server.datanode.VolumeScanner: Reporting bad 
> blk_1170125248_96458336 on /d/dfs/dn
> {code}
> It would be helpful to report the reason why the block is bad and, especially 
> when the block is corrupt, where the first corrupted chunk in the block is.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-10625) VolumeScanner to report why a block is found bad

2016-07-15 Thread Rushabh S Shah (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-10625?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Rushabh S Shah updated HDFS-10625:
--
Status: Open  (was: Patch Available)

>  VolumeScanner to report why a block is found bad
> -
>
> Key: HDFS-10625
> URL: https://issues.apache.org/jira/browse/HDFS-10625
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: datanode, hdfs
>Reporter: Yongjun Zhang
>Assignee: Rushabh S Shah
>  Labels: supportability
> Attachments: HDFS-10625.patch
>
>
> VolumeScanner may report:
> {code}
> WARN org.apache.hadoop.hdfs.server.datanode.VolumeScanner: Reporting bad 
> blk_1170125248_96458336 on /d/dfs/dn
> {code}
> It would be helpful to report the reason why the block is bad and, especially 
> when the block is corrupt, where the first corrupted chunk in the block is.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-10628) Log HDFS Balancer exit message to its own log

2016-07-15 Thread Akira Ajisaka (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-10628?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Akira Ajisaka updated HDFS-10628:
-
   Resolution: Fixed
 Hadoop Flags: Reviewed
Fix Version/s: 2.8.0
   Status: Resolved  (was: Patch Available)

Committed this to branch-2.8 and above. Thanks [~clouderajiayi] for the 
contribution and thanks [~andrew.wang] for the review.

> Log HDFS Balancer exit message to its own log
> -
>
> Key: HDFS-10628
> URL: https://issues.apache.org/jira/browse/HDFS-10628
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: balancer & mover
>Affects Versions: 2.8.0
>Reporter: Jiayi Zhou
>Assignee: Jiayi Zhou
>Priority: Minor
> Fix For: 2.8.0
>
> Attachments: HDFS-10628.001.patch, HDFS-10628.002.patch
>
>
> Currently, the exit message is logged to stderr. It would be more convenient 
> if we also logged it to the Balancer log, for when people want to figure out 
> why the Balancer aborted.
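
A minimal sketch of the idea (the helper and logger names are assumptions):

{code}
// Write the exit message to the Balancer's own log as well as stderr.
static void logAndPrintExit(org.slf4j.Logger log, String msg) {
  System.err.println(msg); // existing behavior
  log.info(msg);           // also record it in the Balancer log
}
{code}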



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-10441) libhdfs++: HA namenode support

2016-07-15 Thread James Clampffer (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-10441?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15379889#comment-15379889
 ] 

James Clampffer commented on HDFS-10441:


bq.  In RpcConnectionImpl::OnRecvCompleted, if we detect that we've 
connected to the standby, it falls through to StartReading(). Should it bail 
out at that point?
I tried this, but if we bail out here we need something else to get the rpc 
loop to start running again.  It seemed like this was a relatively simple way 
of solving that rather than adding a special case.  I could be missing 
something here; I had a sketch of the Rpc code modeled as a state machine, but 
it's possible that it is out of date now.

bq. In RpcEngine::RpcCommsError, we call 
pendingRequests[i]->IncrementFailoverCount(); should that implicitly reset the 
retry count to 0? Will we get into cases where it retries until it fails, then 
the retry count is already == max_retry?
Nice catch.  I had the failover case covered:

{code}
head_action = RetryAction::failover(std::max(0,options_.rpc_retry_delay_ms));
{code}
I'll move that into IncrementRetryCount to cover all cases.

bq. If a namenode is down when we try to resolve, we don't try again when it's 
time to fail over, do we? We should capture that in another bug
We do at the bottom of HANamenodeTracker::GetFailoverAndUpdate when we call 
ResolveInPlace.  The idea is that the endpoint vector will be empty either 
because it's unset or explicitly cleared when resolution fails so we can just 
do "if empty ResolveInPlace".

Re: future discussion
bq. In FixedDelayWithFailover::ShouldRetry(), should we fail over on any 
errors other than timeout? Bad route to host? DNS failure?
I have to check the java code to be 100% sure.  Based on the user configuration 
options it looked like timeout was the main one that needed to be accounted 
for.  Bad route to host should probably fall under that rule as well, since it 
doesn't seem like it can be recovered from.  With a DNS failure we might be out 
of luck in general, but it might be worth propagating the error back to the 
user.  This is just the quick failover path; everything will end up failing 
over, but will retry first.

bq. In FixedDelayWithFailover::ShouldRetry(), we're always using a delay if 
retries < 3. This should be configurable. We can cover that in another bug
Oh, I'm not sure how I missed that.  It should be using 
FixedDelayWithFailover::max_retry_.  That said, there's still work to be done 
on the delay logic to get parity with the Java client, notably things like 
exponential backoff.  I left that out until good tests are added, because it 
started adding more corner cases that I had to test manually, and because the 
simple workaround is to bump up max failovers.





> libhdfs++: HA namenode support
> --
>
> Key: HDFS-10441
> URL: https://issues.apache.org/jira/browse/HDFS-10441
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: hdfs-client
>Reporter: James Clampffer
>Assignee: James Clampffer
> Attachments: HDFS-10441.HDFS-8707.000.patch, 
> HDFS-10441.HDFS-8707.002.patch, HDFS-10441.HDFS-8707.003.patch, 
> HDFS-10441.HDFS-8707.004.patch, HDFS-10441.HDFS-8707.005.patch, 
> HDFS-10441.HDFS-8707.006.patch, HDFS-10441.HDFS-8707.007.patch, 
> HDFS-10441.HDFS-8707.008.patch, HDFS-10441.HDFS-8707.009.patch, 
> HDFS-10441.HDFS-8707.010.patch, HDFS-10441.HDFS-8707.011.patch, 
> HDFS-10441.HDFS-8707.012.patch, HDFS-10441.HDFS-8707.013.patch, 
> HDFS-8707.HDFS-10441.001.patch
>
>
> If a cluster is HA enabled then do proper failover.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-10629) Federation Router

2016-07-15 Thread Inigo Goiri (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-10629?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15379873#comment-15379873
 ] 

Inigo Goiri commented on HDFS-10629:


I'll fix some of the errors raised by Jenkins but I think the patch is ready 
for review. [~jingzhao], [~jira.shegalov], [~mingma], [~subru], I think you 
guys would be good candidates to review. Is anybody available? Anybody else?

> Federation Router
> -
>
> Key: HDFS-10629
> URL: https://issues.apache.org/jira/browse/HDFS-10629
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: hdfs
>Reporter: Inigo Goiri
>Assignee: Jason Kace
> Attachments: HDFS-10629.000.patch
>
>
> Component that routes calls from the clients to the right Namespace. It 
> implements {{ClientProtocol}}.
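
A hedged sketch of the routing idea, not the attached patch: resolve a client 
path to the ClientProtocol proxy of the namespace that owns it (all names 
below are assumptions).

{code}
import java.util.LinkedHashMap;
import java.util.Map;
import org.apache.hadoop.hdfs.protocol.ClientProtocol;

public class RouterSketch {
  // mount point -> proxy for the namespace that owns that subtree
  private final Map<String, ClientProtocol> mounts = new LinkedHashMap<>();

  ClientProtocol resolve(String path) {
    for (Map.Entry<String, ClientProtocol> e : mounts.entrySet()) {
      if (path.startsWith(e.getKey())) { // simplified longest-prefix match
        return e.getValue();
      }
    }
    throw new IllegalArgumentException("no namespace for " + path);
  }
}
{code}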



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-10587) Incorrect offset/length calculation in pipeline recovery causes block corruption

2016-07-15 Thread Yongjun Zhang (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-10587?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15379864#comment-15379864
 ] 

Yongjun Zhang commented on HDFS-10587:
--

Thank you so much [~vinayrpet]! We are looking into this further!




> Incorrect offset/length calculation in pipeline recovery causes block 
> corruption
> 
>
> Key: HDFS-10587
> URL: https://issues.apache.org/jira/browse/HDFS-10587
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: datanode
>Reporter: Wei-Chiu Chuang
>Assignee: Wei-Chiu Chuang
> Attachments: HDFS-10587-test.patch, HDFS-10587.001.patch
>
>
> We found that incorrect offset and length calculation in pipeline recovery 
> may cause block corruption and result in missing blocks under a very 
> unfortunate scenario. 
> (1) A client established a pipeline and started writing data to the pipeline.
> (2) One of the data nodes in the pipeline restarted, closing the socket, and 
> some written data were unacknowledged.
> (3) The client replaced the failed data node with a new one, initiating block 
> transfer to copy existing data in the block to the new datanode.
> (4) The block was transferred to the new node. Crucially, the entire block, 
> including the unacknowledged data, was transferred.
> (5) The last chunk (512 bytes) was not a full chunk, but the destination 
> still reserved the whole chunk in its buffer and wrote the entire buffer to 
> disk; therefore some of the written data is garbage.
> (6) When the transfer was done, the destination data node converted the 
> replica from temporary to rbw, which made its visible length the length of 
> the bytes on disk. That is to say, it thought whatever was transferred was 
> acknowledged. However, the visible length of the replica is different 
> (rounded up to the next multiple of 512) from that of the source of the 
> transfer. [1]
> (7) The client then truncated the block in an attempt to remove 
> unacknowledged data. However, because the visible length is equivalent to the 
> bytes on disk, it did not truncate the unacknowledged data.
> (8) When new data was appended to the destination, it skipped the bytes 
> already on disk. Therefore, whatever was written as garbage was not replaced.
> (9) The volume scanner detected the corrupt replica, but due to HDFS-10512 it 
> wouldn't tell the NameNode to mark the replica as corrupt, so the client 
> continued to form a pipeline using the corrupt replica.
> (10) Finally the DN that had the only healthy replica was restarted. The 
> NameNode then updated the pipeline to contain only the corrupt replica.
> (11) The client continued to write to the corrupt replica, because neither 
> the client nor the data node itself knew the replica was corrupt. When the 
> restarted datanodes came back, their replicas were stale, though not corrupt. 
> Therefore, none of the replicas was good and up to date.
> The sequence of events was reconstructed based on DataNode/NameNode logs and 
> my understanding of the code.
> Incidentally, we have observed the same sequence of events on two independent 
> clusters.
> [1]
> The sender has the replica as follows:
> 2016-04-15 22:03:05,066 INFO 
> org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsDatasetImpl: 
> Recovering ReplicaBeingWritten, blk_1556997324_1100153495099, RBW
>   getNumBytes() = 41381376
>   getBytesOnDisk()  = 41381376
>   getVisibleLength()= 41186444
>   getVolume()   = /hadoop-i/data/current
>   getBlockFile()= 
> /hadoop-i/data/current/BP-1043567091-10.1.1.1-1343682168507/current/rbw/blk_1556997324
>   bytesAcked=41186444
>   bytesOnDisk=41381376
> while the receiver has the replica as follows:
> 2016-04-15 22:03:05,068 INFO 
> org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsDatasetImpl: 
> Recovering ReplicaBeingWritten, blk_1556997324_1100153495099, RBW
>   getNumBytes() = 41186816
>   getBytesOnDisk()  = 41186816
>   getVisibleLength()= 41186816
>   getVolume()   = /hadoop-g/data/current
>   getBlockFile()= 
> /hadoop-g/data/current/BP-1043567091-10.1.1.1-1343682168507/current/rbw/blk_1556997324
>   bytesAcked=41186816
>   bytesOnDisk=41186816



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-10600) PlanCommand#getThrsholdPercentage should not use throughput value.

2016-07-15 Thread Lei (Eddy) Xu (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-10600?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15379803#comment-15379803
 ] 

Lei (Eddy) Xu commented on HDFS-10600:
--

Thanks. I will review it shortly.

> PlanCommand#getThrsholdPercentage should not use throughput value.
> --
>
> Key: HDFS-10600
> URL: https://issues.apache.org/jira/browse/HDFS-10600
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: diskbalancer
>Affects Versions: 2.9.0, 3.0.0-beta1
>Reporter: Lei (Eddy) Xu
>Assignee: Yiqun Lin
> Fix For: 3.0.0-alpha1
>
> Attachments: HDFS-10600.001.patch, HDFS-10600.002.patch
>
>
> In {{PlanCommand#getThresholdPercentage}}
> {code}
>  private double getThresholdPercentage(CommandLine cmd) {
> 
> if ((value <= 0.0) || (value > 100.0)) {
>   value = getConf().getDouble(
>   DFSConfigKeys.DFS_DISK_BALANCER_MAX_DISK_THRUPUT,
>   DFSConfigKeys.DFS_DISK_BALANCER_MAX_DISK_THRUPUT_DEFAULT);
> }
> return value;
>   }
> {code}
> {{DISK_THROUGHPUT}} has the unit of "MB", so it does not make sense to return 
> {{throughput}} as a percentage value.
> Btw, we should use {{THROUGHPUT}} instead of {{THRUPUT}}.
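
A hedged sketch of the direction implied here, assuming a percentage-typed 
plan-threshold setting like dfs.disk.balancer.plan.threshold.percent mentioned 
elsewhere in this digest (the constant names are guesses, not the committed 
patch):

{code}
private double getThresholdPercentage(CommandLine cmd) {
  double value = 0.0; // parsed from cmd in the real code (elided above)
  if ((value <= 0.0) || (value > 100.0)) {
    // fall back to a percentage default, not the MB-valued disk throughput
    value = getConf().getDouble(
        DFSConfigKeys.DFS_DISK_BALANCER_PLAN_THRESHOLD,          // assumed
        DFSConfigKeys.DFS_DISK_BALANCER_PLAN_THRESHOLD_DEFAULT); // assumed
  }
  return value;
}
{code}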



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-10614) Appended blocks can be closed even before IBRs from DataNodes

2016-07-15 Thread Jing Zhao (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-10614?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15379800#comment-15379800
 ] 

Jing Zhao commented on HDFS-10614:
--

So is it possible that we have the following scenario: 
1. NN receives a block_received msg from DN1 with GS 1001. Because the client 
has not sent the commit block request yet, the block is not COMPLETE after this 
msg.
2. Somehow an old block_receiving msg from DN1 with blk GS 1000 is sent to NN 
after #1 (I do not think this can currently happen, but let's say a new bug in 
the DN causes this).
3. NN finds that the block_receiving msg reports a replica which is not in 
FINALIZED state, thus removes the block from the storage (and because the 
blockInfo is not in COMPLETE state, {{checkReplicaCorrupt}} passes).

The above #2 may never happen, but currently my feeling is that it would be 
safer to add the GS check on the NN side. What do you think, [~vinayrpet]?
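
A hedged sketch of what such a GS check might look like (placement and names 
are assumptions):

{code}
// Drop a RECEIVING IBR whose generation stamp is older than the stored
// block's, instead of removing the replica from the storage.
if (reported.getGenerationStamp() < storedBlock.getGenerationStamp()) {
  return; // stale IBR (e.g. GS 1000 arriving after GS 1001); ignore it
}
{code}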

> Appended blocks can be closed even before IBRs from DataNodes
> -
>
> Key: HDFS-10614
> URL: https://issues.apache.org/jira/browse/HDFS-10614
> Project: Hadoop HDFS
>  Issue Type: Bug
>Reporter: Vinayakumar B
>Assignee: Vinayakumar B
> Attachments: HDFS-10614.01.patch, HDFS-10614.02.patch
>
>
> Scenario:
>1. Open the file for append()
>2. Trigger append pipeline setup by adding some data.
>3. Consider that the RECEIVING IBRs of the DNs reach the NN first.
>4. updatePipeline() rpc sent to namenode to update the pipeline.
>5. Now, if complete() is called on the file even before closing the 
> pipeline, then the block will be COMPLETE even before it is actually 
> FINALIZED on the DN side, and the file will be closed.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-10629) Federation Router

2016-07-15 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-10629?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15379764#comment-15379764
 ] 

Hadoop QA commented on HDFS-10629:
--

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
24s{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:red}-1{color} | {color:red} test4tests {color} | {color:red}  0m  
0s{color} | {color:red} The patch doesn't appear to include any new or modified 
tests. Please justify why no new tests are needed for this patch. Also please 
list what manual steps were performed to verify this patch. {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  7m 
11s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
48s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
36s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
57s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  0m 
13s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
45s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
54s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
48s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
45s{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} javac {color} | {color:red}  0m 45s{color} 
| {color:red} hadoop-hdfs-project_hadoop-hdfs generated 5 new + 36 unchanged - 
0 fixed = 41 total (was 36) {color} |
| {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange}  
0m 34s{color} | {color:orange} hadoop-hdfs-project/hadoop-hdfs: The patch 
generated 216 new + 441 unchanged - 0 fixed = 657 total (was 441) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
54s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  0m 
11s{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} whitespace {color} | {color:red}  0m  
0s{color} | {color:red} The patch 4 line(s) with tabs. {color} |
| {color:red}-1{color} | {color:red} findbugs {color} | {color:red}  1m 
52s{color} | {color:red} hadoop-hdfs-project/hadoop-hdfs generated 18 new + 0 
unchanged - 0 fixed = 18 total (was 0) {color} |
| {color:red}-1{color} | {color:red} javadoc {color} | {color:red}  0m 
56s{color} | {color:red} hadoop-hdfs-project_hadoop-hdfs generated 10 new + 7 
unchanged - 0 fixed = 17 total (was 7) {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 73m 27s{color} 
| {color:red} hadoop-hdfs in the patch failed. {color} |
| {color:red}-1{color} | {color:red} asflicense {color} | {color:red}  0m 
20s{color} | {color:red} The patch generated 2 ASF License warnings. {color} |
| {color:black}{color} | {color:black} {color} | {color:black} 94m  2s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| FindBugs | module:hadoop-hdfs-project/hadoop-hdfs |
|  |  Unread public/protected field:At NamenodeStatusReport.java:[line 30] |
|  |  Unread public/protected field:At NamenodeStatusReport.java:[line 43] |
|  |  Unread public/protected field:At NamenodeStatusReport.java:[line 34] |
|  |  Unread public/protected field:At NamenodeStatusReport.java:[line 33] |
|  |  Unread public/protected field:At NamenodeStatusReport.java:[line 39] |
|  |  Unread public/protected field:At NamenodeStatusReport.java:[line 40] |
|  |  Unread public/protected field:At NamenodeStatusReport.java:[line 38] |
|  |  Unread public/protected field:At NamenodeStatusReport.java:[line 45] |
|  |  Unread public/protected field:At NamenodeStatusReport.java:[line 46] |
|  |  Unread public/protected field:At NamenodeStatusReport.java:[line 44] |
|  |  Unread public/protected field:At NamenodeStatusReport.java:[line 35] |
|  |  Unread public/protected field:At NamenodeStatusReport.java:[line 50] |
|  |  Unread public/protected field:At NamenodeStatusReport.java:[line 47] |
|  |  Unread public/protected field:At NamenodeStatusReport.java:[line 29] |
|  |  Synchronization performed on java.util.concurrent.CopyOnWriteArrayList in 
org.apache.had

[jira] [Created] (HDFS-10635) expected/actual parameters inverted in TestGlobPaths assertEquals

2016-07-15 Thread Steve Loughran (JIRA)
Steve Loughran created HDFS-10635:
-

 Summary: expected/actual parameters inverted in TestGlobPaths 
assertEquals
 Key: HDFS-10635
 URL: https://issues.apache.org/jira/browse/HDFS-10635
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: test
Affects Versions: 2.8.0
Reporter: Steve Loughran
Priority: Minor


Pretty much all the assertEquals clauses in {{TestGlobPaths}} place the actual 
value first, expected second. That's the wrong order and will lead to 
misleading messages.
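
For reference, JUnit's two-argument assertEquals is (expected, actual); a 
minimal before/after with hypothetical variables:

{code}
assertEquals(matchedPaths.length, expectedPaths.length); // wrong: actual first
assertEquals(expectedPaths.length, matchedPaths.length); // right: expected first
{code}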



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-10629) Federation Router

2016-07-15 Thread Inigo Goiri (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-10629?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Inigo Goiri updated HDFS-10629:
---
Assignee: Jason Kace
  Status: Patch Available  (was: Open)

> Federation Router
> -
>
> Key: HDFS-10629
> URL: https://issues.apache.org/jira/browse/HDFS-10629
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: hdfs
>Reporter: Inigo Goiri
>Assignee: Jason Kace
> Attachments: HDFS-10629.000.patch
>
>
> Component that routes calls from the clients to the right Namespace. It 
> implements {{ClientProtocol}}.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-10629) Federation Router

2016-07-15 Thread Inigo Goiri (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-10629?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Inigo Goiri updated HDFS-10629:
---
Attachment: HDFS-10629.000.patch

First version of the Router (temporarily uploading for [~jakace]).

> Federation Router
> -
>
> Key: HDFS-10629
> URL: https://issues.apache.org/jira/browse/HDFS-10629
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: hdfs
>Reporter: Inigo Goiri
> Attachments: HDFS-10629.000.patch
>
>
> Component that routes calls from the clients to the right Namespace. It 
> implements {{ClientProtocol}}.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-10441) libhdfs++: HA namenode support

2016-07-15 Thread Bob Hansen (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-10441?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15379522#comment-15379522
 ] 

Bob Hansen commented on HDFS-10441:
---

Just a _few_ more questions:
* In RpcConnectionImpl::OnRecvCompleted, if we detect that we've 
connected to the standby, it falls through to StartReading().  Should it bail 
out at that point?
* In RpcEngine::RpcCommsError, we call 
pendingRequests[i]->IncrementFailoverCount();  should that implicitly reset the 
retry count to 0?  Will we get into cases where it retries until it fails, then 
the retry count is already == max_retry?
* If a namenode is down when we try to resolve, we don't try again when it's 
time to fail over, do we?  We should capture that in another bug

For discussion, not necessarily to fix in this patch:
* In FixedDelayWithFailover::ShouldRetry(), should we fail over on any 
errors other than timeout?  Bad route to host?  DNS failure?
* In FixedDelayWithFailover::ShouldRetry(), we're always using a delay if 
retries < 3.  This should be configurable.  We can cover that in another bug





> libhdfs++: HA namenode support
> --
>
> Key: HDFS-10441
> URL: https://issues.apache.org/jira/browse/HDFS-10441
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: hdfs-client
>Reporter: James Clampffer
>Assignee: James Clampffer
> Attachments: HDFS-10441.HDFS-8707.000.patch, 
> HDFS-10441.HDFS-8707.002.patch, HDFS-10441.HDFS-8707.003.patch, 
> HDFS-10441.HDFS-8707.004.patch, HDFS-10441.HDFS-8707.005.patch, 
> HDFS-10441.HDFS-8707.006.patch, HDFS-10441.HDFS-8707.007.patch, 
> HDFS-10441.HDFS-8707.008.patch, HDFS-10441.HDFS-8707.009.patch, 
> HDFS-10441.HDFS-8707.010.patch, HDFS-10441.HDFS-8707.011.patch, 
> HDFS-10441.HDFS-8707.012.patch, HDFS-10441.HDFS-8707.013.patch, 
> HDFS-8707.HDFS-10441.001.patch
>
>
> If a cluster is HA enabled then do proper failover.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Created] (HDFS-10634) libhdfs++: Improve parsing of config file entries

2016-07-15 Thread James Clampffer (JIRA)
James Clampffer created HDFS-10634:
--

 Summary: libhdfs++: Improve parsing of config file entries
 Key: HDFS-10634
 URL: https://issues.apache.org/jira/browse/HDFS-10634
 Project: Hadoop HDFS
  Issue Type: Sub-task
Reporter: James Clampffer
Assignee: James Clampffer


Config files just specify an authority rather than a real URI for namenodes, 
but we've been using the URI class to parse them.  This is kind of hacky 
because a scheme needs to be prepended (and then ignored) for the library to 
work.

The URI parsing library generates errors in valgrind when it doesn't get a 
scheme, which could be concerning (a conditional jump depends on an undefined 
value).  At the moment it's unclear whether this is a real issue or whether 
it's just using vectorized string operations that read whole machine words 
while the string ends in the middle of a word.

This is also a good place to refactor the split function in uri.h to be 
general purpose; right now it has a special rule to disregard leading '/' 
chars.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-10614) Appended blocks can be closed even before IBRs from DataNodes

2016-07-15 Thread Vinayakumar B (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-10614?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15379230#comment-15379230
 ] 

Vinayakumar B commented on HDFS-10614:
--

bq. whether we can also add an extra check to make sure the reported block's GS 
is greater than the stored block. In this way the logic will be the same with 
setGenerationStampAndVerifyReplicas in updatePipeline.
IMO, since the block reported in the storage is not in the FINALIZED state, the 
block should be removed from that {{storage}} irrespective of the genstamp. 
Anyway, it would be added as an expected location. Once the block is complete 
and reports FINALIZED state, it would be added back.


> Appended blocks can be closed even before IBRs from DataNodes
> -
>
> Key: HDFS-10614
> URL: https://issues.apache.org/jira/browse/HDFS-10614
> Project: Hadoop HDFS
>  Issue Type: Bug
>Reporter: Vinayakumar B
>Assignee: Vinayakumar B
> Attachments: HDFS-10614.01.patch, HDFS-10614.02.patch
>
>
> Scenario:
>1. Open the file for append()
>2. Trigger append pipeline setup by adding some data.
>3. Consider that the RECEIVING IBRs of the DNs reach the NN first.
>4. updatePipeline() rpc sent to namenode to update the pipeline.
>5. Now, if complete() is called on the file even before closing the 
> pipeline, then the block will be COMPLETE even before it is actually 
> FINALIZED on the DN side, and the file will be closed.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-10587) Incorrect offset/length calculation in pipeline recovery causes block corruption

2016-07-15 Thread Vinayakumar B (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-10587?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Vinayakumar B updated HDFS-10587:
-
Attachment: HDFS-10587-test.patch

I have written a test to reproduce the flow described in the description, but 
could not find the corruption when it ran.
Please check whether this could help any of you.

> Incorrect offset/length calculation in pipeline recovery causes block 
> corruption
> 
>
> Key: HDFS-10587
> URL: https://issues.apache.org/jira/browse/HDFS-10587
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: datanode
>Reporter: Wei-Chiu Chuang
>Assignee: Wei-Chiu Chuang
> Attachments: HDFS-10587-test.patch, HDFS-10587.001.patch
>
>
> We found that incorrect offset and length calculation in pipeline recovery 
> may cause block corruption and result in missing blocks under a very 
> unfortunate scenario. 
> (1) A client established a pipeline and started writing data to the pipeline.
> (2) One of the data nodes in the pipeline restarted, closing the socket, and 
> some written data were unacknowledged.
> (3) The client replaced the failed data node with a new one, initiating block 
> transfer to copy existing data in the block to the new datanode.
> (4) The block was transferred to the new node. Crucially, the entire block, 
> including the unacknowledged data, was transferred.
> (5) The last chunk (512 bytes) was not a full chunk, but the destination 
> still reserved the whole chunk in its buffer and wrote the entire buffer to 
> disk; therefore some of the written data is garbage.
> (6) When the transfer was done, the destination data node converted the 
> replica from temporary to rbw, which made its visible length the length of 
> the bytes on disk. That is to say, it thought whatever was transferred was 
> acknowledged. However, the visible length of the replica is different 
> (rounded up to the next multiple of 512) from that of the source of the 
> transfer. [1]
> (7) The client then truncated the block in an attempt to remove 
> unacknowledged data. However, because the visible length is equivalent to the 
> bytes on disk, it did not truncate the unacknowledged data.
> (8) When new data was appended to the destination, it skipped the bytes 
> already on disk. Therefore, whatever was written as garbage was not replaced.
> (9) The volume scanner detected the corrupt replica, but due to HDFS-10512 it 
> wouldn't tell the NameNode to mark the replica as corrupt, so the client 
> continued to form a pipeline using the corrupt replica.
> (10) Finally the DN that had the only healthy replica was restarted. The 
> NameNode then updated the pipeline to contain only the corrupt replica.
> (11) The client continued to write to the corrupt replica, because neither 
> the client nor the data node itself knew the replica was corrupt. When the 
> restarted datanodes came back, their replicas were stale, though not corrupt. 
> Therefore, none of the replicas was good and up to date.
> The sequence of events was reconstructed based on DataNode/NameNode logs and 
> my understanding of the code.
> Incidentally, we have observed the same sequence of events on two independent 
> clusters.
> [1]
> The sender has the replica as follows:
> 2016-04-15 22:03:05,066 INFO org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsDatasetImpl: Recovering ReplicaBeingWritten, blk_1556997324_1100153495099, RBW
>   getNumBytes()     = 41381376
>   getBytesOnDisk()  = 41381376
>   getVisibleLength()= 41186444
>   getVolume()       = /hadoop-i/data/current
>   getBlockFile()    = /hadoop-i/data/current/BP-1043567091-10.1.1.1-1343682168507/current/rbw/blk_1556997324
>   bytesAcked=41186444
>   bytesOnDisk=41381376
> while the receiver has the replica as follows:
> 2016-04-15 22:03:05,068 INFO org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsDatasetImpl: Recovering ReplicaBeingWritten, blk_1556997324_1100153495099, RBW
>   getNumBytes()     = 41186816
>   getBytesOnDisk()  = 41186816
>   getVisibleLength()= 41186816
>   getVolume()       = /hadoop-g/data/current
>   getBlockFile()    = /hadoop-g/data/current/BP-1043567091-10.1.1.1-1343682168507/current/rbw/blk_1556997324
>   bytesAcked=41186816
>   bytesOnDisk=41186816
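
As a side note, here is a minimal Java sketch (hypothetical code, not from Hadoop; the class and helper names are invented) of the chunk-rounding arithmetic behind the discrepancy in [1]: rounding the sender's acknowledged length up to the next 512-byte checksum chunk reproduces the receiver's visible length exactly, leaving 372 unverified bytes past the acknowledged offset.

{code}
public class ChunkRoundingSketch {
  // 512-byte checksum chunks, per step (5) of the description.
  static final long CHUNK_SIZE = 512;

  // Round a byte count up to the next chunk boundary.
  static long roundUpToChunk(long bytes) {
    return ((bytes + CHUNK_SIZE - 1) / CHUNK_SIZE) * CHUNK_SIZE;
  }

  public static void main(String[] args) {
    long senderBytesAcked = 41186444L; // sender's bytesAcked in [1]
    long receiverVisible = roundUpToChunk(senderBytesAcked);
    System.out.println(receiverVisible);                    // 41186816, as in [1]
    System.out.println(receiverVisible - senderBytesAcked); // 372 garbage bytes
  }
}
{code}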



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-10633) DiskBalancer : Add the description for the new setting dfs.disk.balancer.plan.threshold.percent in HDFSDiskbalancer.md

2016-07-15 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-10633?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15379138#comment-15379138
 ] 

Hadoop QA commented on HDFS-10633:
--

| (/) *{color:green}+1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 21s{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 7m 20s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 0m 52s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 0m 47s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 18s{color} | {color:green} The patch does not generate ASF License warnings. {color} |
| {color:black}{color} | {color:black} {color} | {color:black} 9m 53s{color} | {color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Image:yetus/hadoop:9560f25 |
| JIRA Patch URL | https://issues.apache.org/jira/secure/attachment/12818130/HDFS-10633.001.patch |
| JIRA Issue | HDFS-10633 |
| Optional Tests | asflicense mvnsite |
| uname | Linux f899aa75fa09 3.13.0-36-lowlatency #63-Ubuntu SMP PREEMPT Wed Sep 3 21:56:12 UTC 2014 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh |
| git revision | trunk / b5ee7db |
| modules | C: hadoop-hdfs-project/hadoop-hdfs U: hadoop-hdfs-project/hadoop-hdfs |
| Console output | https://builds.apache.org/job/PreCommit-HDFS-Build/16067/console |
| Powered by | Apache Yetus 0.4.0-SNAPSHOT   http://yetus.apache.org |


This message was automatically generated.



> DiskBalancer : Add the description for the new setting 
> dfs.disk.balancer.plan.threshold.percent in HDFSDiskbalancer.md
> --
>
> Key: HDFS-10633
> URL: https://issues.apache.org/jira/browse/HDFS-10633
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: documentation
>Affects Versions: 2.9.0
>Reporter: Yiqun Lin
>Assignee: Yiqun Lin
>Priority: Minor
> Attachments: HDFS-10633.001.patch
>
>
> HDFS-10600 introduced a new setting, 
> {{dfs.disk.balancer.plan.threshold.percent}}, in the diskbalancer. This 
> setting controls whether any balancing is needed on the volume set, but it 
> has not yet been documented in {{HDFSDiskbalancer.md}}.
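
As a hedged illustration of what the setting does (the 10.0 default and the density-skew comparison below are assumptions for this sketch, not taken from the patch), reading and applying the threshold might look like:

{code}
import org.apache.hadoop.conf.Configuration;

public class PlanThresholdSketch {
  public static void main(String[] args) {
    Configuration conf = new Configuration();
    conf.set("dfs.disk.balancer.plan.threshold.percent", "10");

    // Assumed default of 10.0, for illustration only.
    double threshold = conf.getDouble(
        "dfs.disk.balancer.plan.threshold.percent", 10.0);

    // Suppose a volume's data density deviates 7% from the volume-set
    // average; being below the threshold, no balancing step is planned.
    double densitySkewPercent = 7.0;
    System.out.println("needs balancing: "
        + (densitySkewPercent > threshold)); // false
  }
}
{code}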



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-10633) DiskBalancer : Add the description for the new setting dfs.disk.balancer.plan.threshold.percent in HDFSDiskbalancer.md

2016-07-15 Thread Yiqun Lin (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-10633?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Yiqun Lin updated HDFS-10633:
-
Affects Version/s: 2.9.0

> DiskBalancer : Add the description for the new setting 
> dfs.disk.balancer.plan.threshold.percent in HDFSDiskbalancer.md
> --
>
> Key: HDFS-10633
> URL: https://issues.apache.org/jira/browse/HDFS-10633
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: documentation
>Affects Versions: 2.9.0
>Reporter: Yiqun Lin
>Assignee: Yiqun Lin
>Priority: Minor
> Attachments: HDFS-10633.001.patch
>
>
> HDFS-10600 introduced a new setting, 
> {{dfs.disk.balancer.plan.threshold.percent}}, in the diskbalancer. This 
> setting controls whether any balancing is needed on the volume set, but it 
> has not yet been documented in {{HDFSDiskbalancer.md}}.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-10633) DiskBalancer : Add the description for the new setting dfs.disk.balancer.plan.threshold.percent in HDFSDiskbalancer.md

2016-07-15 Thread Yiqun Lin (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-10633?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Yiqun Lin updated HDFS-10633:
-
Fix Version/s: (was: 2.9.0)

> DiskBalancer : Add the description for the new setting 
> dfs.disk.balancer.plan.threshold.percent in HDFSDiskbalancer.md
> --
>
> Key: HDFS-10633
> URL: https://issues.apache.org/jira/browse/HDFS-10633
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: documentation
>Reporter: Yiqun Lin
>Assignee: Yiqun Lin
>Priority: Minor
> Attachments: HDFS-10633.001.patch
>
>
> HDFS-10600 introduced a new setting, 
> {{dfs.disk.balancer.plan.threshold.percent}}, in the diskbalancer. This 
> setting controls whether any balancing is needed on the volume set, but it 
> has not yet been documented in {{HDFSDiskbalancer.md}}.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-10600) PlanCommand#getThrsholdPercentage should not use throughput value.

2016-07-15 Thread Yiqun Lin (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-10600?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15379118#comment-15379118
 ] 

Yiqun Lin commented on HDFS-10600:
--

Hi [~eddyxu], it seems we never documented the new setting 
{{dfs.disk.balancer.plan.threshold.percent}} in {{HDFSDiskbalancer.md}}. I 
have filed a new JIRA, HDFS-10633, to fix this.

> PlanCommand#getThrsholdPercentage should not use throughput value.
> --
>
> Key: HDFS-10600
> URL: https://issues.apache.org/jira/browse/HDFS-10600
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: diskbalancer
>Affects Versions: 2.9.0, 3.0.0-beta1
>Reporter: Lei (Eddy) Xu
>Assignee: Yiqun Lin
> Fix For: 3.0.0-alpha1
>
> Attachments: HDFS-10600.001.patch, HDFS-10600.002.patch
>
>
> In {{PlanCommand#getThresholdPercentage}}
> {code}
>  private double getThresholdPercentage(CommandLine cmd) {
> 
> if ((value <= 0.0) || (value > 100.0)) {
>   value = getConf().getDouble(
>   DFSConfigKeys.DFS_DISK_BALANCER_MAX_DISK_THRUPUT,
>   DFSConfigKeys.DFS_DISK_BALANCER_MAX_DISK_THRUPUT_DEFAULT);
> }
> return value;
>   }
> {code}
> {{DISK_THROUGHPUT}} is expressed in MB, so it does not make sense to return 
> the {{throughput}} value as a percentage.
> Btw, we should spell it {{THROUGHPUT}} instead of {{THRUPUT}}.
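
A hedged sketch of the direction a fix could take (the {{DFS_DISK_BALANCER_PLAN_THRESHOLD}} constant names and the option handling below are assumptions, not the committed patch): fall back to the plan threshold percentage rather than a throughput value.

{code}
// Sketch only: restores the elided option parsing with assumed names and
// falls back to dfs.disk.balancer.plan.threshold.percent, a true
// percentage, instead of the MB-valued throughput setting.
private double getThresholdPercentage(CommandLine cmd) {
  double value = 0.0;
  if (cmd.hasOption(DiskBalancer.THRESHOLD)) {
    value = Double.parseDouble(cmd.getOptionValue(DiskBalancer.THRESHOLD));
  }
  if ((value <= 0.0) || (value > 100.0)) {
    value = getConf().getDouble(
        DFSConfigKeys.DFS_DISK_BALANCER_PLAN_THRESHOLD,
        DFSConfigKeys.DFS_DISK_BALANCER_PLAN_THRESHOLD_DEFAULT);
  }
  return value;
}
{code}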



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-10633) DiskBalancer : Add the description for the new setting dfs.disk.balancer.plan.threshold.percent in HDFSDiskbalancer.md

2016-07-15 Thread Yiqun Lin (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-10633?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Yiqun Lin updated HDFS-10633:
-
Status: Patch Available  (was: Open)

Attached a simple patch for the fix.

> DiskBalancer : Add the description for the new setting 
> dfs.disk.balancer.plan.threshold.percent in HDFSDiskbalancer.md
> --
>
> Key: HDFS-10633
> URL: https://issues.apache.org/jira/browse/HDFS-10633
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: documentation
>Reporter: Yiqun Lin
>Assignee: Yiqun Lin
>Priority: Minor
> Fix For: 2.9.0
>
> Attachments: HDFS-10633.001.patch
>
>
> HDFS-10600 introduced a new setting, 
> {{dfs.disk.balancer.plan.threshold.percent}}, in the diskbalancer. This 
> setting controls whether any balancing is needed on the volume set, but it 
> has not yet been documented in {{HDFSDiskbalancer.md}}.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-10633) DiskBalancer : Add the description for the new setting dfs.disk.balancer.plan.threshold.percent in HDFSDiskbalancer.md

2016-07-15 Thread Yiqun Lin (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-10633?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Yiqun Lin updated HDFS-10633:
-
Attachment: HDFS-10633.001.patch

> DiskBalancer : Add the description for the new setting 
> dfs.disk.balancer.plan.threshold.percent in HDFSDiskbalancer.md
> --
>
> Key: HDFS-10633
> URL: https://issues.apache.org/jira/browse/HDFS-10633
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: documentation
>Reporter: Yiqun Lin
>Assignee: Yiqun Lin
>Priority: Minor
> Fix For: 2.9.0
>
> Attachments: HDFS-10633.001.patch
>
>
> HDFS-10600 introduced a new setting, 
> {{dfs.disk.balancer.plan.threshold.percent}}, in the diskbalancer. This 
> setting controls whether any balancing is needed on the volume set, but it 
> has not yet been documented in {{HDFSDiskbalancer.md}}.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Created] (HDFS-10633) DiskBalancer : Add the description for the new setting dfs.disk.balancer.plan.threshold.percent in HDFSDiskbalancer.md

2016-07-15 Thread Yiqun Lin (JIRA)
Yiqun Lin created HDFS-10633:


 Summary: DiskBalancer : Add the description for the new setting 
dfs.disk.balancer.plan.threshold.percent in HDFSDiskbalancer.md
 Key: HDFS-10633
 URL: https://issues.apache.org/jira/browse/HDFS-10633
 Project: Hadoop HDFS
  Issue Type: Sub-task
  Components: documentation
Reporter: Yiqun Lin
Assignee: Yiqun Lin
Priority: Minor


HDFS-10600 introduced a new setting, 
{{dfs.disk.balancer.plan.threshold.percent}}, in the diskbalancer. This 
setting controls whether any balancing is needed on the volume set, but it 
has not yet been documented in {{HDFSDiskbalancer.md}}.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org