[jira] [Commented] (HDFS-7343) A comprehensive and flexible storage policy engine

2016-07-20 Thread Kai Zheng (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-7343?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15385475#comment-15385475
 ] 

Kai Zheng commented on HDFS-7343:
-

Hi [~yuanbo],

Sorry for the late reply, and thanks for your interest! We'll resume this effort
now that the HDFS erasure coding feature is close to done. We're currently
working on the design and hope to post a design draft in one or two weeks for
review and discussion.

> A comprehensive and flexible storage policy engine
> --
>
> Key: HDFS-7343
> URL: https://issues.apache.org/jira/browse/HDFS-7343
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: namenode
>Reporter: Kai Zheng
>Assignee: Kai Zheng
>
> As discussed in HDFS-7285, it would be better to have a comprehensive and 
> flexible storage policy engine that considers file attributes, metadata, data 
> temperature, storage type, EC codec, available hardware capabilities, 
> user/application preferences, and so on.






[jira] [Updated] (HDFS-10645) Make block report size as a metric and add this metric to datanode web ui

2016-07-20 Thread Yuanbo Liu (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-10645?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Yuanbo Liu updated HDFS-10645:
--
Status: Patch Available  (was: In Progress)

> Make block report size as a metric and add this metric to datanode web ui
> -
>
> Key: HDFS-10645
> URL: https://issues.apache.org/jira/browse/HDFS-10645
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: datanode, ui
>Reporter: Yuanbo Liu
>Assignee: Yuanbo Liu
> Attachments: HDFS-10645.001.patch, HDFS-10645.002.patch, 
> HDFS-10645.003.patch, Selection_047.png, Selection_048.png
>
>
> Record the block report size as a metric and show it on the datanode UI. It 
> helps administrators identify block report bottlenecks and also serves as a 
> useful tuning metric.
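
For illustration only, a rough sketch of how such a metric could be exposed 
through the metrics2 framework; the class, record, and metric names below are 
hypothetical and not necessarily those used in the attached patches:
{code:java}
import org.apache.hadoop.metrics2.annotation.Metric;
import org.apache.hadoop.metrics2.annotation.Metrics;
import org.apache.hadoop.metrics2.lib.DefaultMetricsSystem;
import org.apache.hadoop.metrics2.lib.MutableGaugeLong;

// Hypothetical metrics source; a real patch might instead hang the gauge off
// the existing datanode metrics rather than register a separate source.
@Metrics(name = "BlockReportInfo", context = "dfs")
public class BlockReportSizeMetrics {
  @Metric("Size of the last block report RPC in bytes")
  MutableGaugeLong lastBlockReportSize;

  public static BlockReportSizeMetrics create() {
    // Registering the source injects the @Metric field and exposes the gauge
    // over JMX, where the datanode web UI can read it.
    return DefaultMetricsSystem.instance().register(
        "BlockReportInfo", "Block report metrics", new BlockReportSizeMetrics());
  }

  public void setLastBlockReportSize(long bytes) {
    lastBlockReportSize.set(bytes);
  }
}
{code}
Once registered, the gauge would be visible over JMX, which the datanode web UI 
can read through the JMX servlet.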






[jira] [Updated] (HDFS-10645) Make block report size as a metric and add this metric to datanode web ui

2016-07-20 Thread Yuanbo Liu (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-10645?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Yuanbo Liu updated HDFS-10645:
--
Status: In Progress  (was: Patch Available)

> Make block report size as a metric and add this metric to datanode web ui
> -
>
> Key: HDFS-10645
> URL: https://issues.apache.org/jira/browse/HDFS-10645
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: datanode, ui
>Reporter: Yuanbo Liu
>Assignee: Yuanbo Liu
> Attachments: HDFS-10645.001.patch, HDFS-10645.002.patch, 
> HDFS-10645.003.patch, Selection_047.png, Selection_048.png
>
>
> Record the block report size as a metric and show it on the datanode UI. It 
> helps administrators identify block report bottlenecks and also serves as a 
> useful tuning metric.






[jira] [Created] (HDFS-10657) testAclCLI.xml inherit default ACL to dir test should expect mask r-x

2016-07-20 Thread John Zhuge (JIRA)
John Zhuge created HDFS-10657:
-

 Summary: testAclCLI.xml inherit default ACL to dir test should 
expect mask r-x
 Key: HDFS-10657
 URL: https://issues.apache.org/jira/browse/HDFS-10657
 Project: Hadoop HDFS
  Issue Type: Bug
Affects Versions: 2.6.0
Reporter: John Zhuge
Assignee: John Zhuge
Priority: Minor


The following test case should expect {{mask::r-x}} ACL entry instead of 
{{mask::rwx}}:
{code:xml}
  <description>setfacl : check inherit default ACL to dir</description>
  <test-commands>
    <command>-fs NAMENODE -mkdir /dir1</command>
    <command>-fs NAMENODE -setfacl -m default:user:charlie:r-x,default:group:admin:rwx /dir1</command>
    <command>-fs NAMENODE -mkdir /dir1/dir2</command>
    <command>-fs NAMENODE -getfacl /dir1/dir2</command>
  </test-commands>
  ...
  <comparator>
    <type>SubstringComparator</type>
    <expected-output>mask::rwx</expected-output>
  </comparator>
{code}

But why does it pass? Because the comparator type is {{SubstringComparator}} 
and it matches the wrong line {{default:mask::rwx}} in the output of 
{{getfacl}}:
{noformat}
# file: /dir1/dir2
# owner: jzhuge
# group: supergroup
user::rwx
user:charlie:r-x
group::r-x
group:admin:rwx #effective:r-x
mask::r-x
other::r-x
default:user::rwx
default:user:charlie:r-x
default:group::r-x
default:group:admin:rwx
default:mask::rwx
default:other::r-x
{noformat}

The comparator should match the entire line instead of just a substring. Other 
comparators in {{testAclCLI.xml}} have the same problem.






[jira] [Work stopped] (HDFS-10651) Clean up some configuration related codes about legacy block reader

2016-07-20 Thread Youwei Wang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-10651?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Work on HDFS-10651 stopped by Youwei Wang.
--
> Clean up some configuration related codes about legacy block reader
> ---
>
> Key: HDFS-10651
> URL: https://issues.apache.org/jira/browse/HDFS-10651
> Project: Hadoop HDFS
>  Issue Type: Task
>Affects Versions: 3.0.0-alpha1
>Reporter: Kai Zheng
>Assignee: Youwei Wang
>Priority: Minor
> Attachments: HDFS-10651.v1.patch, HDFS-10651.v2.patch, 
> HDFS-10651.v3.patch
>
>
> HDFS-10548 removed the legacy block reader. This is to clean up the 
> related configuration code accordingly, as [~andrew.wang] suggested.






[jira] [Work started] (HDFS-10651) Clean up some configuration related codes about legacy block reader

2016-07-20 Thread Youwei Wang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-10651?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Work on HDFS-10651 started by Youwei Wang.
--
> Clean up some configuration related codes about legacy block reader
> ---
>
> Key: HDFS-10651
> URL: https://issues.apache.org/jira/browse/HDFS-10651
> Project: Hadoop HDFS
>  Issue Type: Task
>Affects Versions: 3.0.0-alpha1
>Reporter: Kai Zheng
>Assignee: Youwei Wang
>Priority: Minor
> Attachments: HDFS-10651.v1.patch, HDFS-10651.v2.patch, 
> HDFS-10651.v3.patch
>
>
> HDFS-10548 removed the legacy block reader. This is to clean up the 
> related configuration code accordingly, as [~andrew.wang] suggested.






[jira] [Commented] (HDFS-10657) testAclCLI.xml inherit default ACL to dir test should expect mask r-x

2016-07-20 Thread John Zhuge (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-10657?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15385536#comment-15385536
 ] 

John Zhuge commented on HDFS-10657:
---

Change the type to {{RegexpComparator}} and set {{expected-output}} to 
{{^mask::r-x$}}.

Is it worthwhile to create a new type, {{ExactLineComparator}}? With the new 
type, {{expected-output}} could simply be {{mask::r-x}}.

> testAclCLI.xml inherit default ACL to dir test should expect mask r-x
> -
>
> Key: HDFS-10657
> URL: https://issues.apache.org/jira/browse/HDFS-10657
> Project: Hadoop HDFS
>  Issue Type: Bug
>Affects Versions: 2.6.0
>Reporter: John Zhuge
>Assignee: John Zhuge
>Priority: Minor
>
> The following test case should expect {{mask::r-x}} ACL entry instead of 
> {{mask::rwx}}:
> {code:xml}
>   <description>setfacl : check inherit default ACL to dir</description>
>   <test-commands>
>     <command>-fs NAMENODE -mkdir /dir1</command>
>     <command>-fs NAMENODE -setfacl -m default:user:charlie:r-x,default:group:admin:rwx /dir1</command>
>     <command>-fs NAMENODE -mkdir /dir1/dir2</command>
>     <command>-fs NAMENODE -getfacl /dir1/dir2</command>
>   </test-commands>
>   ...
>   <comparator>
>     <type>SubstringComparator</type>
>     <expected-output>mask::rwx</expected-output>
>   </comparator>
> {code}
> But why does it pass? Because the comparator type is {{SubstringComparator}} 
> and it matches the wrong line {{default:mask::rwx}} in the output of 
> {{getfacl}}:
> {noformat}
> # file: /dir1/dir2
> # owner: jzhuge
> # group: supergroup
> user::rwx
> user:charlie:r-x
> group::r-x
> group:admin:rwx   #effective:r-x
> mask::r-x
> other::r-x
> default:user::rwx
> default:user:charlie:r-x
> default:group::r-x
> default:group:admin:rwx
> default:mask::rwx
> default:other::r-x
> {noformat}
> The comparator should match the entire line instead of just a substring. Other 
> comparators in {{testAclCLI.xml}} have the same problem.






[jira] [Work stopped] (HDFS-10651) Clean up some configuration related codes about legacy block reader

2016-07-20 Thread Youwei Wang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-10651?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Work on HDFS-10651 stopped by Youwei Wang.
--
> Clean up some configuration related codes about legacy block reader
> ---
>
> Key: HDFS-10651
> URL: https://issues.apache.org/jira/browse/HDFS-10651
> Project: Hadoop HDFS
>  Issue Type: Task
>Affects Versions: 3.0.0-alpha1
>Reporter: Kai Zheng
>Assignee: Youwei Wang
>Priority: Minor
> Attachments: HDFS-10651.v1.patch, HDFS-10651.v2.patch, 
> HDFS-10651.v3.patch
>
>
> HDFS-10548 removed the legacy block reader. This is to clean up the 
> related configuration code accordingly, as [~andrew.wang] suggested.






[jira] [Updated] (HDFS-10651) Clean up some configuration related codes about legacy block reader

2016-07-20 Thread Youwei Wang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-10651?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Youwei Wang updated HDFS-10651:
---
Attachment: HDFS-10651.v4.patch

> Clean up some configuration related codes about legacy block reader
> ---
>
> Key: HDFS-10651
> URL: https://issues.apache.org/jira/browse/HDFS-10651
> Project: Hadoop HDFS
>  Issue Type: Task
>Affects Versions: 3.0.0-alpha1
>Reporter: Kai Zheng
>Assignee: Youwei Wang
>Priority: Minor
> Attachments: HDFS-10651.v1.patch, HDFS-10651.v2.patch, 
> HDFS-10651.v3.patch, HDFS-10651.v4.patch
>
>
> HDFS-10548 removed the legacy block reader. This is to clean up the 
> related configuration code accordingly, as [~andrew.wang] suggested.






[jira] [Updated] (HDFS-10651) Clean up some configuration related codes about legacy block reader

2016-07-20 Thread Youwei Wang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-10651?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Youwei Wang updated HDFS-10651:
---
Attachment: (was: HDFS-10651.v4.patch)

> Clean up some configuration related codes about legacy block reader
> ---
>
> Key: HDFS-10651
> URL: https://issues.apache.org/jira/browse/HDFS-10651
> Project: Hadoop HDFS
>  Issue Type: Task
>Affects Versions: 3.0.0-alpha1
>Reporter: Kai Zheng
>Assignee: Youwei Wang
>Priority: Minor
> Attachments: HDFS-10651.v1.patch, HDFS-10651.v2.patch, 
> HDFS-10651.v3.patch
>
>
> HDFS-10548 removed the legacy block reader. This is to clean up the 
> related configuration code accordingly, as [~andrew.wang] suggested.






[jira] [Work started] (HDFS-10651) Clean up some configuration related codes about legacy block reader

2016-07-20 Thread Youwei Wang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-10651?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Work on HDFS-10651 started by Youwei Wang.
--
> Clean up some configuration related codes about legacy block reader
> ---
>
> Key: HDFS-10651
> URL: https://issues.apache.org/jira/browse/HDFS-10651
> Project: Hadoop HDFS
>  Issue Type: Task
>Affects Versions: 3.0.0-alpha1
>Reporter: Kai Zheng
>Assignee: Youwei Wang
>Priority: Minor
> Attachments: HDFS-10651.v1.patch, HDFS-10651.v2.patch, 
> HDFS-10651.v3.patch
>
>
> HDFS-10548 removed the legacy block reader. This is to clean up the 
> related configuration code accordingly, as [~andrew.wang] suggested.






[jira] [Updated] (HDFS-10651) Clean up some configuration related codes about legacy block reader

2016-07-20 Thread Youwei Wang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-10651?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Youwei Wang updated HDFS-10651:
---
Status: Patch Available  (was: In Progress)

> Clean up some configuration related codes about legacy block reader
> ---
>
> Key: HDFS-10651
> URL: https://issues.apache.org/jira/browse/HDFS-10651
> Project: Hadoop HDFS
>  Issue Type: Task
>Affects Versions: 3.0.0-alpha1
>Reporter: Kai Zheng
>Assignee: Youwei Wang
>Priority: Minor
> Attachments: HDFS-10651.v1.patch, HDFS-10651.v2.patch, 
> HDFS-10651.v3.patch
>
>
> HDFS-10548 removed the legacy block reader. This is to clean up the 
> related configuration code accordingly, as [~andrew.wang] suggested.






[jira] [Updated] (HDFS-10652) Add a unit test for HDFS-4660

2016-07-20 Thread Vinayakumar B (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-10652?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Vinayakumar B updated HDFS-10652:
-
Attachment: HDFS-10652-002.patch

Updated the patch. Please review.

> Add a unit test for HDFS-4660
> -
>
> Key: HDFS-10652
> URL: https://issues.apache.org/jira/browse/HDFS-10652
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: datanode, hdfs
>Reporter: Yongjun Zhang
>Assignee: Vinayakumar B
> Attachments: HDFS-10652-002.patch, HDFS-10652.001.patch
>
>







[jira] [Commented] (HDFS-10657) testAclCLI.xml inherit default ACL to dir test should expect mask r-x

2016-07-20 Thread Vinayakumar B (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-10657?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15385613#comment-15385613
 ] 

Vinayakumar B commented on HDFS-10657:
--

Thanks [~zcfire] for the catch.

bq. Is it worthwhile to create a new type ExactLineComparator? With the new 
type, expected-output can be mask::r-x
I think that sounds cool.
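
For reference, a rough sketch of what such a comparator could look like, 
assuming the {{compare(actual, expected)}} contract of {{ComparatorBase}} that 
the existing CLI test comparators implement; the class itself is hypothetical:
{code:java}
package org.apache.hadoop.cli.util;

/**
 * Hypothetical comparator that passes only when some complete line of the
 * command output equals the expected string, so "mask::r-x" cannot be
 * satisfied by "default:mask::rwx".
 */
public class ExactLineComparator extends ComparatorBase {
  @Override
  public boolean compare(String actual, String expected) {
    for (String line : actual.split("\n")) {
      if (line.equals(expected)) {
        return true;
      }
    }
    return false;
  }
}
{code}
With a type like this, {{expected-output}} in {{testAclCLI.xml}} could stay the 
plain string {{mask::r-x}} instead of a regular expression.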

> testAclCLI.xml inherit default ACL to dir test should expect mask r-x
> -
>
> Key: HDFS-10657
> URL: https://issues.apache.org/jira/browse/HDFS-10657
> Project: Hadoop HDFS
>  Issue Type: Bug
>Affects Versions: 2.6.0
>Reporter: John Zhuge
>Assignee: John Zhuge
>Priority: Minor
>
> The following test case should expect {{mask::r-x}} ACL entry instead of 
> {{mask::rwx}}:
> {code:xml}
>   <description>setfacl : check inherit default ACL to dir</description>
>   <test-commands>
>     <command>-fs NAMENODE -mkdir /dir1</command>
>     <command>-fs NAMENODE -setfacl -m default:user:charlie:r-x,default:group:admin:rwx /dir1</command>
>     <command>-fs NAMENODE -mkdir /dir1/dir2</command>
>     <command>-fs NAMENODE -getfacl /dir1/dir2</command>
>   </test-commands>
>   ...
>   <comparator>
>     <type>SubstringComparator</type>
>     <expected-output>mask::rwx</expected-output>
>   </comparator>
> {code}
> But why does it pass? Because the comparator type is {{SubstringComparator}} 
> and it matches the wrong line {{default:mask::rwx}} in the output of 
> {{getfacl}}:
> {noformat}
> # file: /dir1/dir2
> # owner: jzhuge
> # group: supergroup
> user::rwx
> user:charlie:r-x
> group::r-x
> group:admin:rwx   #effective:r-x
> mask::r-x
> other::r-x
> default:user::rwx
> default:user:charlie:r-x
> default:group::r-x
> default:group:admin:rwx
> default:mask::rwx
> default:other::r-x
> {noformat}
> The comparator should match the entire line instead of just a substring. Other 
> comparators in {{testAclCLI.xml}} have the same problem.






[jira] [Commented] (HDFS-10301) BlockReport retransmissions may lead to storages falsely being declared zombie if storage report processing happens out of order

2016-07-20 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-10301?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15385631#comment-15385631
 ] 

Hadoop QA commented on HDFS-10301:
--

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
31s{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 4 new or modified test 
files. {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  7m 
 4s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
46s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
31s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
52s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  0m 
12s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
44s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m  
1s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
52s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
45s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
45s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
30s{color} | {color:green} hadoop-hdfs-project/hadoop-hdfs: The patch generated 
0 new + 368 unchanged - 12 fixed = 368 total (was 380) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
54s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  0m 
11s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
49s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
54s{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 72m  6s{color} 
| {color:red} hadoop-hdfs in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
18s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 92m 28s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | hadoop.hdfs.server.datanode.TestDataNodeMXBean |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:9560f25 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12818943/HDFS-10301.011.patch |
| JIRA Issue | HDFS-10301 |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  findbugs  checkstyle  |
| uname | Linux 0cd8f805076b 3.13.0-36-lowlatency #63-Ubuntu SMP PREEMPT Wed 
Sep 3 21:56:12 UTC 2014 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh 
|
| git revision | trunk / 8fbe6ec |
| Default Java | 1.8.0_91 |
| findbugs | v3.0.0 |
| unit | 
https://builds.apache.org/job/PreCommit-HDFS-Build/16094/artifact/patchprocess/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt
 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-HDFS-Build/16094/testReport/ |
| modules | C: hadoop-hdfs-project/hadoop-hdfs U: 
hadoop-hdfs-project/hadoop-hdfs |
| Console output | 
https://builds.apache.org/job/PreCommit-HDFS-Build/16094/console |
| Powered by | Apache Yetus 0.4.0-SNAPSHOT   http://yetus.apache.org |


This message was automatically generated.



> BlockReport retransmissions may lead to storages falsely being declared 
> zombie if storage report processing happens out of order
> 
>
> Key: HDFS-10301
> 

[jira] [Created] (HDFS-10658) Reduce JsonFactory instance allocation in StartupProgressServlet

2016-07-20 Thread Yiqun Lin (JIRA)
Yiqun Lin created HDFS-10658:


 Summary: Reduce JsonFactory instance allocation in 
StartupProgressServlet
 Key: HDFS-10658
 URL: https://issues.apache.org/jira/browse/HDFS-10658
 Project: Hadoop HDFS
  Issue Type: Bug
Reporter: Yiqun Lin
Assignee: Yiqun Lin


Currently, class {{StartupProgressServlet}} always creates a new 
{{JsonFactory}} instance in order to create a {{JsonGenerator}}. The code:
{code}
  protected void doGet(HttpServletRequest req, HttpServletResponse resp)
  throws IOException {
resp.setContentType("application/json; charset=UTF-8");
StartupProgress prog = NameNodeHttpServer.getStartupProgressFromContext(
  getServletContext());
StartupProgressView view = prog.createView();
JsonGenerator json = new 
JsonFactory().createJsonGenerator(resp.getWriter());
try {
  json.writeStartObject();
  json.writeNumberField(ELAPSED_TIME, view.getElapsedTime());
  json.writeNumberField(PERCENT_COMPLETE, view.getPercentComplete());
  json.writeArrayFieldStart(PHASES);
  ...
{code}
We can reuse the instance and reduce {{JsonFactory}} instance allocation.
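
For illustration, a minimal sketch of the reuse idea, assuming the Jackson 1.x 
({{org.codehaus.jackson}}) API that the snippet above uses; the class and 
method names below are illustrative rather than taken from a patch:
{code:java}
import java.io.IOException;
import java.io.StringWriter;

import org.codehaus.jackson.JsonFactory;
import org.codehaus.jackson.JsonGenerator;

public class JsonFactoryReuseSketch {
  // Created once and shared: a JsonFactory is thread-safe once configured,
  // so per-request allocation in doGet() is unnecessary.
  private static final JsonFactory JSON_FACTORY = new JsonFactory();

  static String render(long elapsedTime, float percentComplete) throws IOException {
    StringWriter out = new StringWriter();
    JsonGenerator json = JSON_FACTORY.createJsonGenerator(out);
    try {
      json.writeStartObject();
      json.writeNumberField("elapsedTime", elapsedTime);
      json.writeNumberField("percentComplete", percentComplete);
      json.writeEndObject();
    } finally {
      json.close();
    }
    return out.toString();
  }
}
{code}
The servlet itself would keep the same {{doGet}} logic and only swap 
{{new JsonFactory()}} for the shared instance.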






[jira] [Updated] (HDFS-10658) Reduce JsonFactory instance allocation in StartupProgressServlet

2016-07-20 Thread Yiqun Lin (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-10658?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Yiqun Lin updated HDFS-10658:
-
Status: Patch Available  (was: Open)

Attached a simple patch.

> Reduce JsonFactory instance allocation in StartupProgressServlet
> 
>
> Key: HDFS-10658
> URL: https://issues.apache.org/jira/browse/HDFS-10658
> Project: Hadoop HDFS
>  Issue Type: Bug
>Reporter: Yiqun Lin
>Assignee: Yiqun Lin
>
> Currently, class {{StartupProgressServlet}} always creates a new 
> {{JsonFactory}} instance in order to create a {{JsonGenerator}}. The code:
> {code}
>   protected void doGet(HttpServletRequest req, HttpServletResponse resp)
>   throws IOException {
> resp.setContentType("application/json; charset=UTF-8");
> StartupProgress prog = NameNodeHttpServer.getStartupProgressFromContext(
>   getServletContext());
> StartupProgressView view = prog.createView();
> JsonGenerator json = new 
> JsonFactory().createJsonGenerator(resp.getWriter());
> try {
>   json.writeStartObject();
>   json.writeNumberField(ELAPSED_TIME, view.getElapsedTime());
>   json.writeNumberField(PERCENT_COMPLETE, view.getPercentComplete());
>   json.writeArrayFieldStart(PHASES);
>   ...
> {code}
> We can reuse the instance and reduce {{JsonFactory}} instance allocation.






[jira] [Updated] (HDFS-10658) Reduce JsonFactory instance allocation in StartupProgressServlet

2016-07-20 Thread Yiqun Lin (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-10658?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Yiqun Lin updated HDFS-10658:
-
Attachment: HDFS-10658.001.patch

> Reduce JsonFactory instance allocation in StartupProgressServlet
> 
>
> Key: HDFS-10658
> URL: https://issues.apache.org/jira/browse/HDFS-10658
> Project: Hadoop HDFS
>  Issue Type: Bug
>Reporter: Yiqun Lin
>Assignee: Yiqun Lin
> Attachments: HDFS-10658.001.patch
>
>
> Currently, class {{StartupProgressServlet}} always creates a new 
> {{JsonFactory}} instance in order to create a {{JsonGenerator}}. The code:
> {code}
>   protected void doGet(HttpServletRequest req, HttpServletResponse resp)
>   throws IOException {
> resp.setContentType("application/json; charset=UTF-8");
> StartupProgress prog = NameNodeHttpServer.getStartupProgressFromContext(
>   getServletContext());
> StartupProgressView view = prog.createView();
> JsonGenerator json = new 
> JsonFactory().createJsonGenerator(resp.getWriter());
> try {
>   json.writeStartObject();
>   json.writeNumberField(ELAPSED_TIME, view.getElapsedTime());
>   json.writeNumberField(PERCENT_COMPLETE, view.getPercentComplete());
>   json.writeArrayFieldStart(PHASES);
>   ...
> {code}
> We can reuse the instance and reduce {{JsonFactory}} instance allocation.






[jira] [Updated] (HDFS-10658) Reduce JsonFactory instance allocation in StartupProgressServlet

2016-07-20 Thread Yiqun Lin (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-10658?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Yiqun Lin updated HDFS-10658:
-
Description: 
Currently, class {{StartupProgressServlet}} always creates a new 
{{JsonFactory}} instance in order to create a {{JsonGenerator}}. The code:
{code}
  protected void doGet(HttpServletRequest req, HttpServletResponse resp)
  throws IOException {
resp.setContentType("application/json; charset=UTF-8");
StartupProgress prog = NameNodeHttpServer.getStartupProgressFromContext(
  getServletContext());
StartupProgressView view = prog.createView();
JsonGenerator json = new 
JsonFactory().createJsonGenerator(resp.getWriter());
try {
  json.writeStartObject();
  json.writeNumberField(ELAPSED_TIME, view.getElapsedTime());
  json.writeNumberField(PERCENT_COMPLETE, view.getPercentComplete());
  json.writeArrayFieldStart(PHASES);
  ...
{code}
We can reuse the instance and reduce {{JsonFactory}} instance allocation.

  was:
Now in class {{StartupProgressServlet}}, it will always create a new 
{{JsonFactory}} instance to create a JsonGenerator. The codes:
{code}
  protected void doGet(HttpServletRequest req, HttpServletResponse resp)
  throws IOException {
resp.setContentType("application/json; charset=UTF-8");
StartupProgress prog = NameNodeHttpServer.getStartupProgressFromContext(
  getServletContext());
StartupProgressView view = prog.createView();
JsonGenerator json = new 
JsonFactory().createJsonGenerator(resp.getWriter());
try {
  json.writeStartObject();
  json.writeNumberField(ELAPSED_TIME, view.getElapsedTime());
  json.writeNumberField(PERCENT_COMPLETE, view.getPercentComplete());
  json.writeArrayFieldStart(PHASES);
  ...
{code}
We can reuse the instance and reduce {{JsonFactory instance}} allocation.


> Reduce JsonFactory instance allocation in StartupProgressServlet
> 
>
> Key: HDFS-10658
> URL: https://issues.apache.org/jira/browse/HDFS-10658
> Project: Hadoop HDFS
>  Issue Type: Bug
>Reporter: Yiqun Lin
>Assignee: Yiqun Lin
> Attachments: HDFS-10658.001.patch
>
>
> Currently, class {{StartupProgressServlet}} always creates a new 
> {{JsonFactory}} instance in order to create a {{JsonGenerator}}. The code:
> {code}
>   protected void doGet(HttpServletRequest req, HttpServletResponse resp)
>   throws IOException {
> resp.setContentType("application/json; charset=UTF-8");
> StartupProgress prog = NameNodeHttpServer.getStartupProgressFromContext(
>   getServletContext());
> StartupProgressView view = prog.createView();
> JsonGenerator json = new 
> JsonFactory().createJsonGenerator(resp.getWriter());
> try {
>   json.writeStartObject();
>   json.writeNumberField(ELAPSED_TIME, view.getElapsedTime());
>   json.writeNumberField(PERCENT_COMPLETE, view.getPercentComplete());
>   json.writeArrayFieldStart(PHASES);
>   ...
> {code}
> We can reuse the instance and reduce {{JsonFactory}} instance allocation.






[jira] [Updated] (HDFS-10658) Reduce JsonFactory instance allocation in StartupProgressServlet

2016-07-20 Thread Yiqun Lin (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-10658?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Yiqun Lin updated HDFS-10658:
-
Description: 
Currently, class {{StartupProgressServlet}} always creates a new 
{{JsonFactory}} instance in order to create a {{JsonGenerator}}. The code:
{code}
  protected void doGet(HttpServletRequest req, HttpServletResponse resp)
  throws IOException {
resp.setContentType("application/json; charset=UTF-8");
StartupProgress prog = NameNodeHttpServer.getStartupProgressFromContext(
  getServletContext());
StartupProgressView view = prog.createView();
JsonGenerator json = new 
JsonFactory().createJsonGenerator(resp.getWriter());
try {
  json.writeStartObject();
  json.writeNumberField(ELAPSED_TIME, view.getElapsedTime());
  json.writeNumberField(PERCENT_COMPLETE, view.getPercentComplete());
  json.writeArrayFieldStart(PHASES);
  ...
{code}
We can reuse the instance and reduce {{JsonFactory}} instance allocation.

{{JsonFactory}} is also a heavyweight object, like {{ObjectMapper}}. See this 
related doc: 
https://github.com/FasterXML/jackson-docs/wiki/Presentation:-Jackson-Performance.

  was:
Now in class {{StartupProgressServlet}}, it will always create a new 
{{JsonFactory}} instance to create a JsonGenerator. The codes:
{code}
  protected void doGet(HttpServletRequest req, HttpServletResponse resp)
  throws IOException {
resp.setContentType("application/json; charset=UTF-8");
StartupProgress prog = NameNodeHttpServer.getStartupProgressFromContext(
  getServletContext());
StartupProgressView view = prog.createView();
JsonGenerator json = new 
JsonFactory().createJsonGenerator(resp.getWriter());
try {
  json.writeStartObject();
  json.writeNumberField(ELAPSED_TIME, view.getElapsedTime());
  json.writeNumberField(PERCENT_COMPLETE, view.getPercentComplete());
  json.writeArrayFieldStart(PHASES);
  ...
{code}
We can reuse the instance and reduce {{JsonFactory}} instance allocation.


> Reduce JsonFactory instance allocation in StartupProgressServlet
> 
>
> Key: HDFS-10658
> URL: https://issues.apache.org/jira/browse/HDFS-10658
> Project: Hadoop HDFS
>  Issue Type: Bug
>Reporter: Yiqun Lin
>Assignee: Yiqun Lin
> Attachments: HDFS-10658.001.patch
>
>
> Currently, class {{StartupProgressServlet}} always creates a new 
> {{JsonFactory}} instance in order to create a {{JsonGenerator}}. The code:
> {code}
>   protected void doGet(HttpServletRequest req, HttpServletResponse resp)
>   throws IOException {
> resp.setContentType("application/json; charset=UTF-8");
> StartupProgress prog = NameNodeHttpServer.getStartupProgressFromContext(
>   getServletContext());
> StartupProgressView view = prog.createView();
> JsonGenerator json = new 
> JsonFactory().createJsonGenerator(resp.getWriter());
> try {
>   json.writeStartObject();
>   json.writeNumberField(ELAPSED_TIME, view.getElapsedTime());
>   json.writeNumberField(PERCENT_COMPLETE, view.getPercentComplete());
>   json.writeArrayFieldStart(PHASES);
>   ...
> {code}
> We can reuse the instance and reduce {{JsonFactory}} instance allocation.
> {{JsonFactory}} is also a heavyweight object, like {{ObjectMapper}}. See this 
> related doc: 
> https://github.com/FasterXML/jackson-docs/wiki/Presentation:-Jackson-Performance.






[jira] [Commented] (HDFS-10301) BlockReport retransmissions may lead to storages falsely being declared zombie if storage report processing happens out of order

2016-07-20 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-10301?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15385763#comment-15385763
 ] 

Hadoop QA commented on HDFS-10301:
--

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
20s{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 4 new or modified test 
files. {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  7m 
34s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
48s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
32s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m  
2s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  0m 
15s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
51s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
55s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
52s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
50s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
50s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
27s{color} | {color:green} hadoop-hdfs-project/hadoop-hdfs: The patch generated 
0 new + 368 unchanged - 12 fixed = 368 total (was 380) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
54s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  0m 
 9s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
54s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
55s{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 59m 47s{color} 
| {color:red} hadoop-hdfs in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
21s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 80m 52s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | hadoop.hdfs.server.namenode.TestEditLog |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:9560f25 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12818943/HDFS-10301.011.patch |
| JIRA Issue | HDFS-10301 |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  findbugs  checkstyle  |
| uname | Linux 1dc89d76ac9d 3.13.0-36-lowlatency #63-Ubuntu SMP PREEMPT Wed 
Sep 3 21:56:12 UTC 2014 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh 
|
| git revision | trunk / 9ccf935 |
| Default Java | 1.8.0_91 |
| findbugs | v3.0.0 |
| unit | 
https://builds.apache.org/job/PreCommit-HDFS-Build/16095/artifact/patchprocess/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt
 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-HDFS-Build/16095/testReport/ |
| modules | C: hadoop-hdfs-project/hadoop-hdfs U: 
hadoop-hdfs-project/hadoop-hdfs |
| Console output | 
https://builds.apache.org/job/PreCommit-HDFS-Build/16095/console |
| Powered by | Apache Yetus 0.4.0-SNAPSHOT   http://yetus.apache.org |


This message was automatically generated.



> BlockReport retransmissions may lead to storages falsely being declared 
> zombie if storage report processing happens out of order
> 
>
> Key: HDFS-10301
>

[jira] [Commented] (HDFS-10655) Fix path related byte array conversion bugs

2016-07-20 Thread Daryn Sharp (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-10655?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15385879#comment-15385879
 ] 

Daryn Sharp commented on HDFS-10655:


The changes overlap, so this patch is based on the integration of HDFS-10653.

> Fix path related byte array conversion bugs
> ---
>
> Key: HDFS-10655
> URL: https://issues.apache.org/jira/browse/HDFS-10655
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: hdfs
>Reporter: Daryn Sharp
>Assignee: Daryn Sharp
> Attachments: HDFS-10655.patch
>
>
> {{DFSUtil.bytes2ByteArray}} does not always properly handle runs of multiple 
> separators, nor does it handle relative paths correctly.
> {{DFSUtil.byteArray2PathString}} does not rebuild the path correctly unless 
> the specified range is the entire component array.






[jira] [Updated] (HDFS-10659) Namenode crashes after Journalnode re-installation in an HA cluster due to missing paxos directory

2016-07-20 Thread Amit Anand (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-10659?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Amit Anand updated HDFS-10659:
--
Description: 
In my environment I am seeing {{Namenodes}} crash after {{Journalnodes}} are 
re-installed. We manage multiple clusters and do rolling upgrades followed by a 
rolling re-install of each node, including the master (NN, JN, RM, ZK) nodes. 
When a journal node is re-installed or moved to a new disk/host, instead of 
running the {{"initializeSharedEdits"}} command, I copy the {{VERSION}} file 
from one of the other {{Journalnodes}}, which allows my {{NN}} to start writing 
data to the newly installed {{Journalnode}}.

To achieve quorum for the JNs and recover unfinalized segments, the NN creates 
.tmp files under the {{"/jn/current/paxos"}} directory during startup. In the 
current implementation the "paxos" directory is only created by the 
{{"initializeSharedEdits"}} command, so if a JN is re-installed the "paxos" 
directory is not created upon JN startup or by the NN while writing .tmp files, 
which causes the NN to crash with the following error message:

{code}
192.168.100.16:8485: /disk/1/dfs/jn/Test-Laptop/current/paxos/64044.tmp (No 
such file or directory)
at java.io.FileOutputStream.open(Native Method)
at java.io.FileOutputStream.(FileOutputStream.java:221)
at java.io.FileOutputStream.(FileOutputStream.java:171)
at 
org.apache.hadoop.hdfs.util.AtomicFileOutputStream.(AtomicFileOutputStream.java:58)
at 
org.apache.hadoop.hdfs.qjournal.server.Journal.persistPaxosData(Journal.java:971)
at 
org.apache.hadoop.hdfs.qjournal.server.Journal.acceptRecovery(Journal.java:846)
at 
org.apache.hadoop.hdfs.qjournal.server.JournalNodeRpcServer.acceptRecovery(JournalNodeRpcServer.java:205)
at 
org.apache.hadoop.hdfs.qjournal.protocolPB.QJournalProtocolServerSideTranslatorPB.acceptRecovery(QJournalProtocolServerSideTranslatorPB.java:249)
at 
org.apache.hadoop.hdfs.qjournal.protocol.QJournalProtocolProtos$QJournalProtocolService$2.callBlockingMethod(QJournalProtocolProtos.java:25435)
at 
org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:616)
at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:969)
at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2151)
at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2147)
at java.security.AccessController.doPrivileged(Native Method)
at javax.security.auth.Subject.doAs(Subject.java:415)
at 
org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1657)
at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2145)
{code}

The current 
[getPaxosFile|https://github.com/apache/hadoop/blob/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/qjournal/server/JNStorage.java#L128-L130]
 method simply returns a path to a file under the "paxos" directory without 
verifying that the directory exists. Since the "paxos" directory holds files 
that are required for NN recovery and for achieving JN quorum, my proposed 
solution is to add a check to the "getPaxosFile" method and create the 
{{"paxos"}} directory if it is missing.

  was:
In my environment I am seeing {{Namenodes}} crashing down after 
{{Journalnodes}} are re-installed. We manage multiple clusters and do rolling 
upgrades followed by rolling re-install of each node including master(NN, JN, 
RM, ZK) nodes. When a journal node is re-installed or moved to a new disk/host, 
instead of running {{"initializeSharedEdits"}} command, I copy {{VERSION}} file 
from one of the other {{Journalnode}} and that allows my {{NN}} to start 
writing data to the newly installed {{Journalnode}}.

To acheive quorum for JN and recover unfinalized segments NN during starupt 
creates .tmp files under {{"/jn/current/paxos"}} directory . In 
current implementation "paxos" directry is only created during 
{{"initializeSharedEdits"}} command and if a JN is re-installed the "paxos" 
directory is not created upon JN startup or by NN while writing .tmp files 
which causes NN to crash with following error message:

{code}
192.168.100.16:8485: /disk/1/dfs/jn/Test-Laptop/current/paxos/64044.tmp (No 
such file or directory)
at java.io.FileOutputStream.open(Native Method)
at java.io.FileOutputStream.(FileOutputStream.java:221)
at java.io.FileOutputStream.(FileOutputStream.java:171)
at 
org.apache.hadoop.hdfs.util.AtomicFileOutputStream.(AtomicFileOutputStream.java:58)
at 
org.apache.hadoop.hdfs.qjournal.server.Journal.persistPaxosData(Journal.java:971)
at 
org.apache.hadoop.hdfs.qjournal.server.Journal.acceptRecovery(Journal.java:846)
at 
org.apache.hadoop.hdfs.qjournal.server.JournalNodeRpcServer.acceptRecovery(JournalNodeRpcServer.java:205)
at 
org.apache.hadoop.hdfs.qjournal.protocolPB.QJournalProtocolServerSideTranslatorPB

[jira] [Updated] (HDFS-10659) Namenode crashes after Journalnode re-installation in an HA cluster due to missing paxos directory

2016-07-20 Thread Amit Anand (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-10659?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Amit Anand updated HDFS-10659:
--
Summary: Namenode crashes after Journalnode re-installation in an HA 
cluster due to missing paxos directory  (was: Namenode crashes after 
Journalnode re-installation in an HA cluster)

> Namenode crashes after Journalnode re-installation in an HA cluster due to 
> missing paxos directory
> --
>
> Key: HDFS-10659
> URL: https://issues.apache.org/jira/browse/HDFS-10659
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: ha, journal-node
>Affects Versions: 2.7.1
>Reporter: Amit Anand
>
> In my environment I am seeing {{Namenodes}} crash after {{Journalnodes}} are 
> re-installed. We manage multiple clusters and do rolling upgrades followed by a 
> rolling re-install of each node, including the master (NN, JN, RM, ZK) nodes. 
> When a journal node is re-installed or moved to a new disk/host, instead of 
> running the {{"initializeSharedEdits"}} command, I copy the {{VERSION}} file 
> from one of the other {{Journalnodes}}, which allows my {{NN}} to start writing 
> data to the newly installed {{Journalnode}}.
> To achieve quorum for the JNs and recover unfinalized segments, the NN creates 
> .tmp files under the {{"/jn/current/paxos"}} directory during startup. In the 
> current implementation the "paxos" directory is only created by the 
> {{"initializeSharedEdits"}} command, so if a JN is re-installed the "paxos" 
> directory is not created upon JN startup or by the NN while writing .tmp files, 
> which causes the NN to crash with the following error message:
> {code}
> 192.168.100.16:8485: /disk/1/dfs/jn/Test-Laptop/current/paxos/64044.tmp (No 
> such file or directory)
> at java.io.FileOutputStream.open(Native Method)
> at java.io.FileOutputStream.(FileOutputStream.java:221)
> at java.io.FileOutputStream.(FileOutputStream.java:171)
> at 
> org.apache.hadoop.hdfs.util.AtomicFileOutputStream.(AtomicFileOutputStream.java:58)
> at 
> org.apache.hadoop.hdfs.qjournal.server.Journal.persistPaxosData(Journal.java:971)
> at 
> org.apache.hadoop.hdfs.qjournal.server.Journal.acceptRecovery(Journal.java:846)
> at 
> org.apache.hadoop.hdfs.qjournal.server.JournalNodeRpcServer.acceptRecovery(JournalNodeRpcServer.java:205)
> at 
> org.apache.hadoop.hdfs.qjournal.protocolPB.QJournalProtocolServerSideTranslatorPB.acceptRecovery(QJournalProtocolServerSideTranslatorPB.java:249)
> at 
> org.apache.hadoop.hdfs.qjournal.protocol.QJournalProtocolProtos$QJournalProtocolService$2.callBlockingMethod(QJournalProtocolProtos.java:25435)
> at 
> org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:616)
> at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:969)
> at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2151)
> at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2147)
> at java.security.AccessController.doPrivileged(Native Method)
> at javax.security.auth.Subject.doAs(Subject.java:415)
> at 
> org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1657)
> at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2145)
> {code}
> The current 
> [getPaxosFile|https://github.com/apache/hadoop/blob/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/qjournal/server/JNStorage.java#L128-L130]
>  method simply returns a path to a file under the "paxos" directory without 
> verifying that the directory exists. Since the "paxos" directory holds files 
> that are required for NN recovery and for achieving JN quorum, my proposed 
> solution is to add a check to the "getPaxosFile" method and create the "paxos" 
> directory if it is missing.






[jira] [Created] (HDFS-10659) Namenode crashes after Journalnode re-installation in an HA cluster

2016-07-20 Thread Amit Anand (JIRA)
Amit Anand created HDFS-10659:
-

 Summary: Namenode crashes after Journalnode re-installation in an 
HA cluster
 Key: HDFS-10659
 URL: https://issues.apache.org/jira/browse/HDFS-10659
 Project: Hadoop HDFS
  Issue Type: Improvement
  Components: ha, journal-node
Affects Versions: 2.7.1
Reporter: Amit Anand


In my environment I am seeing {{Namenodes}} crash after {{Journalnodes}} are 
re-installed. We manage multiple clusters and do rolling upgrades followed by a 
rolling re-install of each node, including the master (NN, JN, RM, ZK) nodes. 
When a journal node is re-installed or moved to a new disk/host, instead of 
running the {{"initializeSharedEdits"}} command, I copy the {{VERSION}} file 
from one of the other {{Journalnodes}}, which allows my {{NN}} to start writing 
data to the newly installed {{Journalnode}}.

To achieve quorum for the JNs and recover unfinalized segments, the NN creates 
.tmp files under the {{"/jn/current/paxos"}} directory during startup. In the 
current implementation the "paxos" directory is only created by the 
{{"initializeSharedEdits"}} command, so if a JN is re-installed the "paxos" 
directory is not created upon JN startup or by the NN while writing .tmp files, 
which causes the NN to crash with the following error message:

{code}
192.168.100.16:8485: /disk/1/dfs/jn/Test-Laptop/current/paxos/64044.tmp (No 
such file or directory)
at java.io.FileOutputStream.open(Native Method)
at java.io.FileOutputStream.(FileOutputStream.java:221)
at java.io.FileOutputStream.(FileOutputStream.java:171)
at 
org.apache.hadoop.hdfs.util.AtomicFileOutputStream.(AtomicFileOutputStream.java:58)
at 
org.apache.hadoop.hdfs.qjournal.server.Journal.persistPaxosData(Journal.java:971)
at 
org.apache.hadoop.hdfs.qjournal.server.Journal.acceptRecovery(Journal.java:846)
at 
org.apache.hadoop.hdfs.qjournal.server.JournalNodeRpcServer.acceptRecovery(JournalNodeRpcServer.java:205)
at 
org.apache.hadoop.hdfs.qjournal.protocolPB.QJournalProtocolServerSideTranslatorPB.acceptRecovery(QJournalProtocolServerSideTranslatorPB.java:249)
at 
org.apache.hadoop.hdfs.qjournal.protocol.QJournalProtocolProtos$QJournalProtocolService$2.callBlockingMethod(QJournalProtocolProtos.java:25435)
at 
org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:616)
at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:969)
at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2151)
at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2147)
at java.security.AccessController.doPrivileged(Native Method)
at javax.security.auth.Subject.doAs(Subject.java:415)
at 
org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1657)
at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2145)
{code}

The current 
[getPaxosFile|https://github.com/apache/hadoop/blob/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/qjournal/server/JNStorage.java#L128-L130]
 method simply returns a path to a file under the "paxos" directory without 
verifying that the directory exists. Since the "paxos" directory holds files 
that are required for NN recovery and for achieving JN quorum, my proposed 
solution is to add a check to the "getPaxosFile" method and create the "paxos" 
directory if it is missing.
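
For illustration, a minimal sketch of the proposed check (not an actual patch); 
it assumes the paxos directory sits directly under the JN storage's current 
directory, as {{JNStorage}} lays it out:
{code:java}
import java.io.File;
import java.io.IOException;

public class PaxosDirCheckSketch {
  // Hypothetical helper mirroring JNStorage#getPaxosFile, with the proposed
  // check added: make sure <current>/paxos exists before returning a path
  // inside it, so persistPaxosData() can write its .tmp file.
  static File getPaxosFile(File currentDir, long segmentTxId) throws IOException {
    File paxosDir = new File(currentDir, "paxos");
    if (!paxosDir.exists() && !paxosDir.mkdirs()) {
      throw new IOException("Could not create paxos dir: " + paxosDir);
    }
    return new File(paxosDir, String.valueOf(segmentTxId));
  }
}
{code}
The real change might instead create the directory at JN startup or inside 
{{persistPaxosData}}; the point is simply to guarantee the directory exists 
before the .tmp file is written.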






[jira] [Commented] (HDFS-10659) Namenode crashes after Journalnode re-installation in an HA cluster due to missing paxos directory

2016-07-20 Thread Amit Anand (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-10659?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15385951#comment-15385951
 ] 

Amit Anand commented on HDFS-10659:
---

Steps to reproduce
===
1. Configure an HA cluster with at least 3 JNs.
2. Shut down the 1st JN and move the JN current directory to current.bak.
3. Recreate the current directory with correct permissions and copy the VERSION 
file from current.bak to current (do not create the paxos directory).
4. Shut down the 2nd JN and repeat steps 2 and 3.
5. Watch the NN logs and see the NN crash due to the missing paxos directory.

To recover your cluster
===
1. Create the "paxos" directory under the JN current directory (make sure 
permissions are set correctly).
2. Restart the JNs.
3. Restart the NNs.

> Namenode crashes after Journalnode re-installation in an HA cluster due to 
> missing paxos directory
> --
>
> Key: HDFS-10659
> URL: https://issues.apache.org/jira/browse/HDFS-10659
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: ha, journal-node
>Affects Versions: 2.7.1
>Reporter: Amit Anand
>
> In my environment I am seeing {{Namenodes}} crash after {{Journalnodes}} are 
> re-installed. We manage multiple clusters and do rolling upgrades followed by a 
> rolling re-install of each node, including the master (NN, JN, RM, ZK) nodes. 
> When a journal node is re-installed or moved to a new disk/host, instead of 
> running the {{"initializeSharedEdits"}} command, I copy the {{VERSION}} file 
> from one of the other {{Journalnodes}}, which allows my {{NN}} to start writing 
> data to the newly installed {{Journalnode}}.
> To achieve quorum for the JNs and recover unfinalized segments, the NN creates 
> .tmp files under the {{"/jn/current/paxos"}} directory during startup. In the 
> current implementation the "paxos" directory is only created by the 
> {{"initializeSharedEdits"}} command, so if a JN is re-installed the "paxos" 
> directory is not created upon JN startup or by the NN while writing .tmp files, 
> which causes the NN to crash with the following error message:
> {code}
> 192.168.100.16:8485: /disk/1/dfs/jn/Test-Laptop/current/paxos/64044.tmp (No 
> such file or directory)
> at java.io.FileOutputStream.open(Native Method)
> at java.io.FileOutputStream.(FileOutputStream.java:221)
> at java.io.FileOutputStream.(FileOutputStream.java:171)
> at 
> org.apache.hadoop.hdfs.util.AtomicFileOutputStream.(AtomicFileOutputStream.java:58)
> at 
> org.apache.hadoop.hdfs.qjournal.server.Journal.persistPaxosData(Journal.java:971)
> at 
> org.apache.hadoop.hdfs.qjournal.server.Journal.acceptRecovery(Journal.java:846)
> at 
> org.apache.hadoop.hdfs.qjournal.server.JournalNodeRpcServer.acceptRecovery(JournalNodeRpcServer.java:205)
> at 
> org.apache.hadoop.hdfs.qjournal.protocolPB.QJournalProtocolServerSideTranslatorPB.acceptRecovery(QJournalProtocolServerSideTranslatorPB.java:249)
> at 
> org.apache.hadoop.hdfs.qjournal.protocol.QJournalProtocolProtos$QJournalProtocolService$2.callBlockingMethod(QJournalProtocolProtos.java:25435)
> at 
> org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:616)
> at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:969)
> at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2151)
> at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2147)
> at java.security.AccessController.doPrivileged(Native Method)
> at javax.security.auth.Subject.doAs(Subject.java:415)
> at 
> org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1657)
> at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2145)
> {code}
> The current 
> [getPaxosFile|https://github.com/apache/hadoop/blob/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/qjournal/server/JNStorage.java#L128-L130]
>  method simply returns a path to a file under the "paxos" directory without 
> verifying that the directory exists. Since the "paxos" directory holds files 
> that are required for NN recovery and for achieving JN quorum, my proposed 
> solution is to add a check to the "getPaxosFile" method and create the 
> {{"paxos"}} directory if it is missing.






[jira] [Commented] (HDFS-10301) BlockReport retransmissions may lead to storages falsely being declared zombie if storage report processing happens out of order

2016-07-20 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-10301?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15385954#comment-15385954
 ] 

Hadoop QA commented on HDFS-10301:
--

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
22s{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 4 new or modified test 
files. {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  6m 
40s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
47s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
30s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
54s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  0m 
12s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
44s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
54s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
47s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
41s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
41s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
28s{color} | {color:green} hadoop-hdfs-project/hadoop-hdfs: The patch generated 
0 new + 368 unchanged - 12 fixed = 368 total (was 380) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
48s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  0m 
10s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
47s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
53s{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 69m 17s{color} 
| {color:red} hadoop-hdfs in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
18s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 88m 25s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | hadoop.hdfs.server.namenode.TestReconstructStripedBlocks 
|
|   | hadoop.metrics2.sink.TestRollingFileSystemSinkWithHdfs |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:9560f25 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12818943/HDFS-10301.011.patch |
| JIRA Issue | HDFS-10301 |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  findbugs  checkstyle  |
| uname | Linux 0abbdfa64137 3.13.0-36-lowlatency #63-Ubuntu SMP PREEMPT Wed 
Sep 3 21:56:12 UTC 2014 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh 
|
| git revision | trunk / 37362c2 |
| Default Java | 1.8.0_91 |
| findbugs | v3.0.0 |
| unit | 
https://builds.apache.org/job/PreCommit-HDFS-Build/16096/artifact/patchprocess/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt
 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-HDFS-Build/16096/testReport/ |
| modules | C: hadoop-hdfs-project/hadoop-hdfs U: 
hadoop-hdfs-project/hadoop-hdfs |
| Console output | 
https://builds.apache.org/job/PreCommit-HDFS-Build/16096/console |
| Powered by | Apache Yetus 0.4.0-SNAPSHOT   http://yetus.apache.org |


This message was automatically generated.



> BlockReport retransmissions may lead to storages falsely being declared 
> zombie if storage report processing happens out of order
> -

[jira] [Commented] (HDFS-10658) Reduce JsonFactory instance allocation in StartupProgressServlet

2016-07-20 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-10658?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15386146#comment-15386146
 ] 

Hadoop QA commented on HDFS-10658:
--

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
16s{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:red}-1{color} | {color:red} test4tests {color} | {color:red}  0m  
0s{color} | {color:red} The patch doesn't appear to include any new or modified 
tests. Please justify why no new tests are needed for this patch. Also please 
list what manual steps were performed to verify this patch. {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  6m 
41s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
43s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
26s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
50s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  0m 
12s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
41s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
56s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
46s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
42s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
42s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
22s{color} | {color:green} hadoop-hdfs-project/hadoop-hdfs: The patch generated 
0 new + 2 unchanged - 1 fixed = 2 total (was 3) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
48s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  0m 
10s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
45s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
53s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 59m 
12s{color} | {color:green} hadoop-hdfs in the patch passed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
22s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 78m  0s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:9560f25 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12819070/HDFS-10658.001.patch |
| JIRA Issue | HDFS-10658 |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  findbugs  checkstyle  |
| uname | Linux 1a1468238fa8 3.13.0-36-lowlatency #63-Ubuntu SMP PREEMPT Wed 
Sep 3 21:56:12 UTC 2014 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh 
|
| git revision | trunk / 37362c2 |
| Default Java | 1.8.0_91 |
| findbugs | v3.0.0 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-HDFS-Build/16098/testReport/ |
| modules | C: hadoop-hdfs-project/hadoop-hdfs U: 
hadoop-hdfs-project/hadoop-hdfs |
| Console output | 
https://builds.apache.org/job/PreCommit-HDFS-Build/16098/console |
| Powered by | Apache Yetus 0.4.0-SNAPSHOT   http://yetus.apache.org |


This message was automatically generated.



> Reduce JsonFactory instance allocation in StartupProgressServlet
> 
>
> Key: HDFS-10658
> URL: https://issues.apache.org/jira/browse/HDFS-10658
> Project: Hadoop HDFS
>  Issue Type: Bug
>Reporter: Yiqun Lin
>Assignee: Yiqun Lin
> Attachments: HDFS-10658.001.patch

[jira] [Updated] (HDFS-10656) Optimize conversion of byte arrays back to path string

2016-07-20 Thread Daryn Sharp (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-10656?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Daryn Sharp updated HDFS-10656:
---
Attachment: HDFS-10656.patch

> Optimize conversion of byte arrays back to path string
> --
>
> Key: HDFS-10656
> URL: https://issues.apache.org/jira/browse/HDFS-10656
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: hdfs
>Reporter: Daryn Sharp
>Assignee: Daryn Sharp
> Attachments: HDFS-10656.patch
>
>
> {{DFSUtil.byteArray2PathString}} generates excessive object allocation.
> # each byte array is encoded to a string (copy)
> # the string is appended to a builder, which extracts the chars from the 
> intermediate string (copy) and adds them to its own char array
> # the builder's char array is re-allocated if over 16 chars (copy)
> # the builder's toString creates another string (copy)
> Instead of allocating all these objects and performing multiple byte/char 
> encoding/decoding conversions, the byte array can be built in-place with a 
> single final conversion to a string.
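
For intuition, here is a minimal sketch of the in-place approach, assuming a 
components array like the one {{DFSUtil}} works with; it is illustrative only, 
not the actual patch.

{code:java}
import java.nio.charset.StandardCharsets;

class PathBytes {
  // Copy every component into one byte buffer and decode once at the end,
  // instead of decoding each component and appending strings to a builder.
  static String byteArray2PathString(byte[][] components) {
    int length = 0;
    for (byte[] component : components) {
      length += component.length + 1;              // room for a '/' separator
    }
    byte[] path = new byte[length];
    int pos = 0;
    for (int i = 0; i < components.length; i++) {
      if (i > 0) {
        path[pos++] = (byte) '/';
      }
      System.arraycopy(components[i], 0, path, pos, components[i].length);
      pos += components[i].length;
    }
    // Single byte->char decoding step; no intermediate strings or builder.
    return new String(path, 0, pos, StandardCharsets.UTF_8);
  }

  public static void main(String[] args) {
    byte[][] components = {
        new byte[0],                               // empty root component
        "user".getBytes(StandardCharsets.UTF_8),
        "foo".getBytes(StandardCharsets.UTF_8)
    };
    System.out.println(byteArray2PathString(components));  // prints /user/foo
  }
}
{code}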



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-10301) BlockReport retransmissions may lead to storages falsely being declared zombie if storage report processing happens out of order

2016-07-20 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-10301?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15386156#comment-15386156
 ] 

Hadoop QA commented on HDFS-10301:
--

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
28s{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 4 new or modified test 
files. {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  6m 
57s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
47s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
31s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
56s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  0m 
12s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
42s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
55s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
48s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
44s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
44s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
27s{color} | {color:green} hadoop-hdfs-project/hadoop-hdfs: The patch generated 
0 new + 368 unchanged - 12 fixed = 368 total (was 380) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
49s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  0m 
10s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
53s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
54s{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 69m 27s{color} 
| {color:red} hadoop-hdfs in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
20s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 89m 17s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | hadoop.hdfs.server.namenode.TestEditLog |
|   | hadoop.hdfs.server.datanode.TestDataNodeErasureCodingMetrics |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:9560f25 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12818943/HDFS-10301.011.patch |
| JIRA Issue | HDFS-10301 |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  findbugs  checkstyle  |
| uname | Linux 2d088f995b16 3.13.0-36-lowlatency #63-Ubuntu SMP PREEMPT Wed 
Sep 3 21:56:12 UTC 2014 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh 
|
| git revision | trunk / 37362c2 |
| Default Java | 1.8.0_91 |
| findbugs | v3.0.0 |
| unit | 
https://builds.apache.org/job/PreCommit-HDFS-Build/16097/artifact/patchprocess/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt
 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-HDFS-Build/16097/testReport/ |
| modules | C: hadoop-hdfs-project/hadoop-hdfs U: 
hadoop-hdfs-project/hadoop-hdfs |
| Console output | 
https://builds.apache.org/job/PreCommit-HDFS-Build/16097/console |
| Powered by | Apache Yetus 0.4.0-SNAPSHOT   http://yetus.apache.org |


This message was automatically generated.



> BlockReport retransmissions may lead to storages falsely being declared 
> zombie if storage report processing happens out of order
> -

[jira] [Commented] (HDFS-10656) Optimize conversion of byte arrays back to path string

2016-07-20 Thread Daryn Sharp (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-10656?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15386154#comment-15386154
 ] 

Daryn Sharp commented on HDFS-10656:


overlaps

> Optimize conversion of byte arrays back to path string
> --
>
> Key: HDFS-10656
> URL: https://issues.apache.org/jira/browse/HDFS-10656
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: hdfs
>Reporter: Daryn Sharp
>Assignee: Daryn Sharp
> Attachments: HDFS-10656.patch
>
>
> {{DFSUtil.byteArray2PathString}} generates excessive object allocation.
> # each byte array is encoded to a string (copy)
> # the string is appended to a builder, which extracts the chars from the 
> intermediate string (copy) and adds them to its own char array
> # the builder's char array is re-allocated if over 16 chars (copy)
> # the builder's toString creates another string (copy)
> Instead of allocating all these objects and performing multiple byte/char 
> encoding/decoding conversions, the byte array can be built in-place with a 
> single final conversion to a string.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Work started] (HDFS-10657) testAclCLI.xml inherit default ACL to dir test should expect mask r-x

2016-07-20 Thread John Zhuge (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-10657?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Work on HDFS-10657 started by John Zhuge.
-
> testAclCLI.xml inherit default ACL to dir test should expect mask r-x
> -
>
> Key: HDFS-10657
> URL: https://issues.apache.org/jira/browse/HDFS-10657
> Project: Hadoop HDFS
>  Issue Type: Bug
>Affects Versions: 2.6.0
>Reporter: John Zhuge
>Assignee: John Zhuge
>Priority: Minor
>
> The following test case should expect {{mask::r-x}} ACL entry instead of 
> {{mask::rwx}}:
> {code:xml}
>   <description>setfacl : check inherit default ACL to dir</description>
>   <test-commands>
>     <command>-fs NAMENODE -mkdir /dir1</command>
>     <command>-fs NAMENODE -setfacl -m 
> default:user:charlie:r-x,default:group:admin:rwx /dir1</command>
>     <command>-fs NAMENODE -mkdir /dir1/dir2</command>
>     <command>-fs NAMENODE -getfacl /dir1/dir2</command>
>   </test-commands>
> ...
>     <comparator>
>       <type>SubstringComparator</type>
>       <expected-output>mask::rwx</expected-output>
>     </comparator>
> {code}
> But why does it pass? Because the comparator type is {{SubstringComparator}} 
> and it matches the wrong line {{default:mask::rwx}} in the output of 
> {{getfacl}}:
> {noformat}
> # file: /dir1/dir2
> # owner: jzhuge
> # group: supergroup
> user::rwx
> user:charlie:r-x
> group::r-x
> group:admin:rwx   #effective:r-x
> mask::r-x
> other::r-x
> default:user::rwx
> default:user:charlie:r-x
> default:group::r-x
> default:group:admin:rwx
> default:mask::rwx
> default:other::r-x
> {noformat}
> The comparator should match the entire line instead of just a substring. 
> Other comparators in {{testAclCLI.xml}} have the same problem.
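
To make the failure mode concrete, here is a small self-contained illustration 
(not the test framework's own comparator code) of why a whole-line match is 
needed:

{code:java}
import java.util.regex.Pattern;

class AclLineMatch {
  // Match the expected ACL entry only as a complete line of getfacl output.
  static boolean containsExactLine(String getfaclOutput, String expectedEntry) {
    Pattern line = Pattern.compile(
        "^" + Pattern.quote(expectedEntry) + "$", Pattern.MULTILINE);
    return line.matcher(getfaclOutput).find();
  }

  public static void main(String[] args) {
    String output = "mask::r-x\ndefault:mask::rwx\n";
    // Substring check passes by accident via "default:mask::rwx".
    System.out.println(output.contains("mask::rwx"));             // true
    // Whole-line check correctly fails, exposing the wrong expectation.
    System.out.println(containsExactLine(output, "mask::rwx"));   // false
    System.out.println(containsExactLine(output, "mask::r-x"));   // true
  }
}
{code}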



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-10613) Wrong real size when a DSQuotaExceededException occur

2016-07-20 Thread Chen Liang (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-10613?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15386213#comment-15386213
 ] 

Chen Liang commented on HDFS-10613:
---

Hi Xiaohe,

Yes! That's exactly how you got 384 MB: 3 block replicas, each of size 128 MB, 
and 3 * 128 = 384.
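
A quick back-of-the-envelope check of the figure in the exception message, 
assuming the default 128 MB block size and replication factor 3:

{code:java}
class QuotaMath {
  public static void main(String[] args) {
    long blockSize = 128L * 1024 * 1024;  // bytes reserved for the block being written
    short replication = 3;                // default replication factor
    long consumed = blockSize * replication;
    // Prints 402653184, i.e. the "diskspace consumed = 402653184 B = 384 MB"
    // reported in the DSQuotaExceededException quoted below.
    System.out.println(consumed);
  }
}
{code}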


> Wrong real size when a DSQuotaExceededException occur
> -
>
> Key: HDFS-10613
> URL: https://issues.apache.org/jira/browse/HDFS-10613
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: hdfs
>Affects Versions: 2.7.2
> Environment: Linux x86_64
>Reporter: Xiaohe Lan
>Assignee: Chen Liang
>Priority: Minor
>
> When putting a file from local that is larger than the quota of an HDFS 
> directory, a DSQuotaExceededException is thrown, and the diskspace consumed 
> in the error message seems unreasonable.
> Why is the diskspace consumed 384 MB while test.zip is a 14 MB file?
> {code}
> bash-4.1$ ls -lh test.zip
> -rw-r--r-- 1 xilan dba 14M Jul 12 00:54 test.zip
> bash-4.1$ hdfs dfs -mkdir /user/foobar
> bash-4.1$ hdfs dfsadmin -setSpaceQuota 10m /user/foobar
> bash-4.1$ hdfs dfs -put test.zip /user/foobar/
> 16/07/12 00:57:11 WARN hdfs.DFSClient: DataStreamer Exception
> org.apache.hadoop.hdfs.protocol.DSQuotaExceededException: The DiskSpace quota 
> of /user/foobar is exceeded: quota = 10485760 B = 10 MB but diskspace 
> consumed = 402653184 B = 384 MB
>   at 
> org.apache.hadoop.hdfs.server.namenode.DirectoryWithQuotaFeature.verifyStoragespaceQuota(DirectoryWithQuotaFeature.java:211)
>   at 
> org.apache.hadoop.hdfs.server.namenode.DirectoryWithQuotaFeature.verifyQuota(DirectoryWithQuotaFeature.java:239)
>   at 
> org.apache.hadoop.hdfs.server.namenode.FSDirectory.verifyQuota(FSDirectory.java:874)
>   at 
> org.apache.hadoop.hdfs.server.namenode.FSDirectory.updateCount(FSDirectory.java:707)
>   at 
> org.apache.hadoop.hdfs.server.namenode.FSDirectory.updateCount(FSDirectory.java:666)
>   at 
> org.apache.hadoop.hdfs.server.namenode.FSDirectory.addBlock(FSDirectory.java:491)
>   at 
> org.apache.hadoop.hdfs.server.namenode.FSNamesystem.saveAllocatedBlock(FSNamesystem.java:3571)
>   at 
> org.apache.hadoop.hdfs.server.namenode.FSNamesystem.storeAllocatedBlock(FSNamesystem.java:3157)
>   at 
> org.apache.hadoop.hdfs.server.namenode.FSNamesystem.getAdditionalBlock(FSNamesystem.java:3038)
>   at 
> org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.addBlock(NameNodeRpcServer.java:724)
>   at 
> org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.addBlock(ClientNamenodeProtocolServerSideTranslatorPB.java:492)
>   at 
> org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$ClientNamenodeProtocol$2.callBlockingMethod(ClientNamenodeProtocolProtos.java)
>   at 
> org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:616)
>   at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:969)
>   at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2049)
>   at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2045)
>   at java.security.AccessController.doPrivileged(Native Method)
>   at javax.security.auth.Subject.doAs(Subject.java:415)
>   at 
> org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1657)
>   at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2043)
>   at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
>   at 
> sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:57)
>   at 
> sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
>   at java.lang.reflect.Constructor.newInstance(Constructor.java:526)
>   at 
> org.apache.hadoop.ipc.RemoteException.instantiateException(RemoteException.java:106)
>   at 
> org.apache.hadoop.ipc.RemoteException.unwrapRemoteException(RemoteException.java:73)
>   at 
> org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.locateFollowingBlock(DFSOutputStream.java:1462)
>   at 
> org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.nextBlockOutputStream(DFSOutputStream.java:1255)
>   at 
> org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.run(DFSOutputStream.java:449)
> {code}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Created] (HDFS-10660) Expose storage policy apis via HDFSAdmin interface

2016-07-20 Thread Rakesh R (JIRA)
Rakesh R created HDFS-10660:
---

 Summary: Expose storage policy apis via HDFSAdmin interface
 Key: HDFS-10660
 URL: https://issues.apache.org/jira/browse/HDFS-10660
 Project: Hadoop HDFS
  Issue Type: Improvement
Reporter: Rakesh R
Assignee: Rakesh R


Presently, the {{org.apache.hadoop.hdfs.client.HdfsAdmin.java}} interface has 
only the {{#setStoragePolicy()}} API exposed. This jira is to add the following 
set of APIs to HdfsAdmin.

{code}
HdfsAdmin#unsetStoragePolicy
HdfsAdmin#getStoragePolicy
HdfsAdmin#getAllStoragePolicies
{code}

Thanks [~arpitagarwal] for the offline discussions.
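
For context, a hypothetical usage sketch of what the admin-facing calls could 
look like; only {{setStoragePolicy()}} exists on {{HdfsAdmin}} today, so the 
proposed methods are shown as comments with assumed signatures:

{code:java}
import java.net.URI;
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.hdfs.client.HdfsAdmin;

class StoragePolicyAdminExample {
  public static void main(String[] args) throws Exception {
    Configuration conf = new Configuration();
    // "hdfs://nn1:8020" is a placeholder NameNode URI for illustration.
    HdfsAdmin admin = new HdfsAdmin(URI.create("hdfs://nn1:8020"), conf);
    Path dir = new Path("/archive/logs");

    admin.setStoragePolicy(dir, "COLD");   // existing API

    // Proposed additions (assumed shapes, not yet in any release):
    // admin.getStoragePolicy(dir);
    // admin.unsetStoragePolicy(dir);
    // admin.getAllStoragePolicies();
  }
}
{code}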



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-10616) Improve performance of path handling

2016-07-20 Thread Zhe Zhang (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-10616?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15386239#comment-15386239
 ] 

Zhe Zhang commented on HDFS-10616:
--

Thanks much for posting the JIRAs / patches Daryn.

Quick question: is 2.9 just a temporary target version? Ideally I'd like to 
apply those optimizations to 2.6.

> Improve performance of path handling
> 
>
> Key: HDFS-10616
> URL: https://issues.apache.org/jira/browse/HDFS-10616
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: hdfs
>Affects Versions: 2.0.0-alpha
>Reporter: Daryn Sharp
>Assignee: Daryn Sharp
>
> Path handling in the namesystem and directory is very inefficient.  The path 
> is repeatedly resolved, decomposed into path components, recombined into a 
> full path, and parsed again, throughout the system.  This is directly 
> inefficient for general performance, and indirectly so via unnecessary 
> pressure on young gen GC.
> The namesystem should operate only on paths, parsing each once into inodes, 
> and the directory should operate only on inodes.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Resolved] (HDFS-8914) Document HA support in the HDFS HdfsDesign.md

2016-07-20 Thread Vinod Kumar Vavilapalli (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-8914?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Vinod Kumar Vavilapalli resolved HDFS-8914.
---
Resolution: Fixed

Closing this again.

> Document HA support in the HDFS HdfsDesign.md
> -
>
> Key: HDFS-8914
> URL: https://issues.apache.org/jira/browse/HDFS-8914
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: documentation
>Affects Versions: 2.7.1
> Environment: Documentation page in live
>Reporter: Ravindra Babu
>Assignee: Lars Francke
> Fix For: 2.8.0, 2.7.3
>
> Attachments: HDFS-8914.1.patch, HDFS-8914.2.patch
>
>
> Please refer to these two links and correct one of them.
> http://hadoop.apache.org/docs/r2.7.1/hadoop-project-dist/hadoop-hdfs/HdfsDesign.html
> The NameNode machine is a single point of failure for an HDFS cluster. If the 
> NameNode machine fails, manual intervention is necessary. Currently, 
> automatic restart and failover of the NameNode software to another machine is 
> not supported.
> http://hadoop.apache.org/docs/r2.7.1/hadoop-project-dist/hadoop-hdfs/HDFSHighAvailabilityWithQJM.html
> The HDFS High Availability feature addresses the above problems by providing 
> the option of running two redundant NameNodes in the same cluster in an 
> Active/Passive configuration with a hot standby. This allows a fast failover 
> to a new NameNode in the case that a machine crashes, or a graceful 
> administrator-initiated failover for the purpose of planned maintenance.
> Please update the HdfsDesign article with the same facts to avoid confusion 
> in the reader's mind.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-10658) Reduce JsonFactory instance allocation in StartupProgressServlet

2016-07-20 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-10658?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15386258#comment-15386258
 ] 

Hadoop QA commented on HDFS-10658:
--

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
16s{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:red}-1{color} | {color:red} test4tests {color} | {color:red}  0m  
0s{color} | {color:red} The patch doesn't appear to include any new or modified 
tests. Please justify why no new tests are needed for this patch. Also please 
list what manual steps were performed to verify this patch. {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  6m 
55s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
45s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
26s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
56s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  0m 
13s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
45s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
56s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
49s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
48s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
48s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
23s{color} | {color:green} hadoop-hdfs-project/hadoop-hdfs: The patch generated 
0 new + 2 unchanged - 1 fixed = 2 total (was 3) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
52s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  0m 
10s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
52s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
54s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 59m 
22s{color} | {color:green} hadoop-hdfs in the patch passed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
21s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 79m  1s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:9560f25 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12819070/HDFS-10658.001.patch |
| JIRA Issue | HDFS-10658 |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  findbugs  checkstyle  |
| uname | Linux 50fd7f925b7d 3.13.0-36-lowlatency #63-Ubuntu SMP PREEMPT Wed 
Sep 3 21:56:12 UTC 2014 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh 
|
| git revision | trunk / 1c9d2ab |
| Default Java | 1.8.0_91 |
| findbugs | v3.0.0 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-HDFS-Build/16100/testReport/ |
| modules | C: hadoop-hdfs-project/hadoop-hdfs U: 
hadoop-hdfs-project/hadoop-hdfs |
| Console output | 
https://builds.apache.org/job/PreCommit-HDFS-Build/16100/console |
| Powered by | Apache Yetus 0.4.0-SNAPSHOT   http://yetus.apache.org |


This message was automatically generated.



> Reduce JsonFactory instance allocation in StartupProgressServlet
> 
>
> Key: HDFS-10658
> URL: https://issues.apache.org/jira/browse/HDFS-10658
> Project: Hadoop HDFS
>  Issue Type: Bug
>Reporter: Yiqun Lin
>Assignee: Yiqun Lin
> Attachments: HDFS-10658.001.patch

[jira] [Commented] (HDFS-10301) BlockReport retransmissions may lead to storages falsely being declared zombie if storage report processing happens out of order

2016-07-20 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-10301?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15386262#comment-15386262
 ] 

Hadoop QA commented on HDFS-10301:
--

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
18s{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 4 new or modified test 
files. {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  8m 
48s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
51s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
32s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
59s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  0m 
14s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  2m  
3s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m  
0s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
50s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
43s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
43s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
27s{color} | {color:green} hadoop-hdfs-project/hadoop-hdfs: The patch generated 
0 new + 368 unchanged - 12 fixed = 368 total (was 380) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
49s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  0m 
 9s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
46s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
53s{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 62m 57s{color} 
| {color:red} hadoop-hdfs in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
40s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 85m 20s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | hadoop.hdfs.server.namenode.ha.TestBootstrapStandby |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:9560f25 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12818943/HDFS-10301.011.patch |
| JIRA Issue | HDFS-10301 |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  findbugs  checkstyle  |
| uname | Linux 1697f8ceb2a6 3.13.0-36-lowlatency #63-Ubuntu SMP PREEMPT Wed 
Sep 3 21:56:12 UTC 2014 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh 
|
| git revision | trunk / 1c9d2ab |
| Default Java | 1.8.0_91 |
| findbugs | v3.0.0 |
| unit | 
https://builds.apache.org/job/PreCommit-HDFS-Build/16099/artifact/patchprocess/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt
 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-HDFS-Build/16099/testReport/ |
| modules | C: hadoop-hdfs-project/hadoop-hdfs U: 
hadoop-hdfs-project/hadoop-hdfs |
| Console output | 
https://builds.apache.org/job/PreCommit-HDFS-Build/16099/console |
| Powered by | Apache Yetus 0.4.0-SNAPSHOT   http://yetus.apache.org |


This message was automatically generated.



> BlockReport retransmissions may lead to storages falsely being declared 
> zombie if storage report processing happens out of order
> 
>
> Key: HDFS-103

[jira] [Commented] (HDFS-10425) Clean up NNStorage and TestSaveNamespace

2016-07-20 Thread Akira Ajisaka (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-10425?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15386270#comment-15386270
 ] 

Akira Ajisaka commented on HDFS-10425:
--

LGTM, +1.

> Clean up NNStorage and TestSaveNamespace
> 
>
> Key: HDFS-10425
> URL: https://issues.apache.org/jira/browse/HDFS-10425
> Project: Hadoop HDFS
>  Issue Type: Improvement
>Reporter: Andras Bokor
>Assignee: Andras Bokor
>Priority: Trivial
> Attachments: HDFS-10425.01.patch, HDFS-10425.02.patch
>
>
> Since I was working with the NNStorage and TestSaveNamespace classes, it is a 
> good time to take care of IDE and checkstyle warnings.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-10425) Clean up NNStorage and TestSaveNamespace

2016-07-20 Thread Akira Ajisaka (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-10425?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Akira Ajisaka updated HDFS-10425:
-
   Resolution: Fixed
 Hadoop Flags: Reviewed
Fix Version/s: 3.0.0-alpha2
   Status: Resolved  (was: Patch Available)

Committed this to trunk. Thanks [~boky01] for the clean up.

> Clean up NNStorage and TestSaveNamespace
> 
>
> Key: HDFS-10425
> URL: https://issues.apache.org/jira/browse/HDFS-10425
> Project: Hadoop HDFS
>  Issue Type: Improvement
>Reporter: Andras Bokor
>Assignee: Andras Bokor
>Priority: Trivial
> Fix For: 3.0.0-alpha2
>
> Attachments: HDFS-10425.01.patch, HDFS-10425.02.patch
>
>
> Since I was working with the NNStorage and TestSaveNamespace classes, it is a 
> good time to take care of IDE and checkstyle warnings.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-10425) Clean up NNStorage and TestSaveNamespace

2016-07-20 Thread Akira Ajisaka (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-10425?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Akira Ajisaka updated HDFS-10425:
-
Priority: Minor  (was: Trivial)

> Clean up NNStorage and TestSaveNamespace
> 
>
> Key: HDFS-10425
> URL: https://issues.apache.org/jira/browse/HDFS-10425
> Project: Hadoop HDFS
>  Issue Type: Improvement
>Reporter: Andras Bokor
>Assignee: Andras Bokor
>Priority: Minor
> Fix For: 3.0.0-alpha2
>
> Attachments: HDFS-10425.01.patch, HDFS-10425.02.patch
>
>
> Since I was working with the NNStorage and TestSaveNamespace classes, it is a 
> good time to take care of IDE and checkstyle warnings.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-10544) Balancer doesn't work with IPFailoverProxyProvider

2016-07-20 Thread Vinod Kumar Vavilapalli (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-10544?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Vinod Kumar Vavilapalli updated HDFS-10544:
---
Fix Version/s: (was: 2.7.3)
   2.7.4

2.7.3 was already in progress, so I am changing the fix-version to 2.7.4.

> Balancer doesn't work with IPFailoverProxyProvider
> --
>
> Key: HDFS-10544
> URL: https://issues.apache.org/jira/browse/HDFS-10544
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: balancer & mover, ha
>Affects Versions: 2.6.1
>Reporter: Zhe Zhang
>Assignee: Zhe Zhang
> Fix For: 2.8.0, 2.9.0, 2.6.5, 2.7.4, 3.0.0-alpha1
>
> Attachments: HDFS-10544-branch-2.7.patch, HDFS-10544.00.patch, 
> HDFS-10544.01.patch, HDFS-10544.02.patch, HDFS-10544.03.patch, 
> HDFS-10544.04.patch, HDFS-10544.05.patch
>
>
> Right now {{Balancer}} gets the NN URIs through 
> {{DFSUtil#getNameServiceUris}}, which returns logical URIs when HA is enabled. 
> If {{IPFailoverProxyProvider}} is used, {{Balancer}} will not be able to 
> start.
> I think the bug is at {{DFSUtil#getNameServiceUris}}:
> {code}
> for (String nsId : getNameServiceIds(conf)) {
>   if (HAUtil.isHAEnabled(conf, nsId)) {
>     // Add the logical URI of the nameservice.
>     try {
>       ret.add(new URI(HdfsConstants.HDFS_URI_SCHEME + "://" + nsId));
> {code}
> Then the {{if}} clause should also consider whether the 
> {{FailoverProxyProvider}} has {{useLogicalURI}} enabled. If not, 
> {{getNameServiceUris}} should try to resolve the physical URI for this nsId.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Created] (HDFS-10661) Make MiniDFSCluster AutoCloseable

2016-07-20 Thread Akira Ajisaka (JIRA)
Akira Ajisaka created HDFS-10661:


 Summary: Make MiniDFSCluster AutoCloseable
 Key: HDFS-10661
 URL: https://issues.apache.org/jira/browse/HDFS-10661
 Project: Hadoop HDFS
  Issue Type: Improvement
  Components: test
Reporter: Akira Ajisaka


If we make MiniDFSCluster AutoCloseable, we can create a MiniDFSCluster 
instance using a try-with-resources statement. That way we don't have to shut 
down the cluster in a finally clause every time.
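
A minimal sketch of the pattern being proposed, using a stand-in class because 
MiniDFSCluster does not implement AutoCloseable yet:

{code:java}
// Stand-in for MiniDFSCluster; close() plays the role that shutdown() would.
class FakeCluster implements AutoCloseable {
  void waitActive() {
    // pretend to wait until the cluster is up
  }

  @Override
  public void close() {
    System.out.println("shutdown() called automatically");
  }

  public static void main(String[] args) {
    try (FakeCluster cluster = new FakeCluster()) {   // no finally block needed
      cluster.waitActive();
      // ... run the test against the cluster ...
    }                                                 // close() runs here
  }
}
{code}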



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-10301) BlockReport retransmissions may lead to storages falsely being declared zombie if storage report processing happens out of order

2016-07-20 Thread Colin P. McCabe (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-10301?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15386306#comment-15386306
 ] 

Colin P. McCabe commented on HDFS-10301:


bq. [~redvine] asked: Colin P. McCabe Doesn't TCP ignore duplicate packets? Can 
you explain how this can happen? If the RPC does get duplicated, then we 
shouldn't return true right when node.leaseId == 0 ?

That is a fair point.  However, the retry logic in the RPC system could resend 
the message if the NN did not respond within a certain amount of time.  Or 
there could just be a bug which leads to the DN sending full block reports when 
it shouldn't.  In any case, we cannot assume that reordered messages are the 
problem.

bq. [~shv] wrote:  Also I think that Colin P. McCabe's veto, formulated as I am 
-1 on a patch which adds extra RPCs. is fully addressed now. The storage report 
was added to the last RPC representing a single block report. The last patch 
does not add extra RPCs.

Yes, this patch addresses my concerns.  I withdraw my -1.

bq. [~shv] wrote: The storage ids are already there in current BR protobuf. Why 
would you want a new field for that. You will need to duplicate all storage ids 
in case of full block report, when it is not split into multiple RPCs. Seems 
confusing and inefficient to me.

A new field would be best because we would avoid creating fake BlockListAsLong 
objects with length -1, and re-using protobuf fields for purposes they weren't 
intended for.  A list of storage IDs is not a block report or a list of blocks, 
and using the same data structures is very confusing.  If you want to optimize 
by not sending the list of storage reports separately when the block report has 
only one RPC, that's easy to do.  Just check if numRpcs == 1 and don't set or 
check the optional list of strings in that case.  I'm not going to block the 
patch over this, but I do think people reading this will wonder what you were 
thinking if you overload the PB fields in this way.

> BlockReport retransmissions may lead to storages falsely being declared 
> zombie if storage report processing happens out of order
> 
>
> Key: HDFS-10301
> URL: https://issues.apache.org/jira/browse/HDFS-10301
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: namenode
>Affects Versions: 2.6.1
>Reporter: Konstantin Shvachko
>Assignee: Vinitha Reddy Gankidi
>Priority: Critical
> Attachments: HDFS-10301.002.patch, HDFS-10301.003.patch, 
> HDFS-10301.004.patch, HDFS-10301.005.patch, HDFS-10301.006.patch, 
> HDFS-10301.007.patch, HDFS-10301.008.patch, HDFS-10301.009.patch, 
> HDFS-10301.01.patch, HDFS-10301.010.patch, HDFS-10301.011.patch, 
> HDFS-10301.sample.patch, zombieStorageLogs.rtf
>
>
> When the NameNode is busy, a DataNode can time out sending a block report and 
> then sends the block report again. The NameNode, while processing these two 
> reports at the same time, can interleave processing of storages from the 
> different reports. This screws up the blockReportId field, which makes the 
> NameNode think that some storages are zombie. Replicas from zombie storages 
> are immediately removed, causing missing blocks.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Created] (HDFS-10662) Optimize UTF8 string/byte conversions

2016-07-20 Thread Daryn Sharp (JIRA)
Daryn Sharp created HDFS-10662:
--

 Summary: Optimize UTF8 string/byte conversions
 Key: HDFS-10662
 URL: https://issues.apache.org/jira/browse/HDFS-10662
 Project: Hadoop HDFS
  Issue Type: Sub-task
  Components: hdfs
Reporter: Daryn Sharp
Assignee: Daryn Sharp


String/byte conversions may take either a Charset instance or its canonical 
name.  One might think the Charset instance would be faster because it avoids a 
lookup and instantiation of a Charset, but it's not.  The canonical string name 
variants cache the string encoder/decoder (obtained from a Charset), resulting 
in better performance.

LOG4J2-935 describes a real-world performance boost.  I micro-benched a 
marginal runtime improvement on jdk 7/8.  However, for a 16 byte path, using 
the canonical name generated 50% less garbage.  For a 64 byte path, it 
generated 25% of the garbage.  Given the sheer number of times that paths are 
(re)parsed, the cost adds up quickly.
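
For reference, the two variants being compared look like this; which one 
allocates less is a JDK implementation detail, and the caching/garbage numbers 
above are the JIRA author's measurements, not something this snippet 
demonstrates:

{code:java}
import java.nio.charset.StandardCharsets;

class Utf8Conversions {
  public static void main(String[] args) throws Exception {
    byte[] path = "/user/foo/part-00000".getBytes(StandardCharsets.UTF_8);

    // Canonical-name variant: looks the charset up by name internally.
    String byName = new String(path, "UTF-8");

    // Charset-instance variant: passes the Charset object directly.
    String byCharset = new String(path, StandardCharsets.UTF_8);

    System.out.println(byName.equals(byCharset));  // true; results are identical
  }
}
{code}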





--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-10301) BlockReport retransmissions may lead to storages falsely being declared zombie if storage report processing happens out of order

2016-07-20 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-10301?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15386342#comment-15386342
 ] 

Hadoop QA commented on HDFS-10301:
--

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
22s{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 4 new or modified test 
files. {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  7m 
11s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
49s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
31s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
53s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  0m 
13s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
48s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
56s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
49s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
45s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
45s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
27s{color} | {color:green} hadoop-hdfs-project/hadoop-hdfs: The patch generated 
0 new + 368 unchanged - 12 fixed = 368 total (was 380) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
50s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  0m 
 9s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
50s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
51s{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 60m 44s{color} 
| {color:red} hadoop-hdfs in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
21s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 80m 49s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | hadoop.hdfs.server.namenode.ha.TestBootstrapStandby |
|   | hadoop.hdfs.server.balancer.TestBalancer |
|   | hadoop.hdfs.server.namenode.ha.TestHAFsck |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:9560f25 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12818943/HDFS-10301.011.patch |
| JIRA Issue | HDFS-10301 |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  findbugs  checkstyle  |
| uname | Linux 6d72422a28d7 3.13.0-36-lowlatency #63-Ubuntu SMP PREEMPT Wed 
Sep 3 21:56:12 UTC 2014 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh 
|
| git revision | trunk / 1c9d2ab |
| Default Java | 1.8.0_91 |
| findbugs | v3.0.0 |
| unit | 
https://builds.apache.org/job/PreCommit-HDFS-Build/16101/artifact/patchprocess/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt
 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-HDFS-Build/16101/testReport/ |
| modules | C: hadoop-hdfs-project/hadoop-hdfs U: 
hadoop-hdfs-project/hadoop-hdfs |
| Console output | 
https://builds.apache.org/job/PreCommit-HDFS-Build/16101/console |
| Powered by | Apache Yetus 0.4.0-SNAPSHOT   http://yetus.apache.org |


This message was automatically generated.



> BlockReport retransmissions may lead to storages falsely being declared 
> zombie if storage report processing happens out of order
> ---

[jira] [Commented] (HDFS-10616) Improve performance of path handling

2016-07-20 Thread Daryn Sharp (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-10616?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15386358#comment-15386358
 ] 

Daryn Sharp commented on HDFS-10616:


I chose 2.9 solely because branch-2 and trunk have often drifted quite a bit 
from 2.7 and earlier.  Given the sheer volume of internal optimizations I'm 
pushing out (I've barely started), I don't have the time to back-port, but feel 
free to pitch in if you like!

> Improve performance of path handling
> 
>
> Key: HDFS-10616
> URL: https://issues.apache.org/jira/browse/HDFS-10616
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: hdfs
>Affects Versions: 2.0.0-alpha
>Reporter: Daryn Sharp
>Assignee: Daryn Sharp
>
> Path handling in the namesystem and directory is very inefficient.  The path 
> is repeatedly resolved, decomposed into path components, recombined into a 
> full path, and parsed again, throughout the system.  This is directly 
> inefficient for general performance, and indirectly so via unnecessary 
> pressure on young gen GC.
> The namesystem should operate only on paths, parsing each once into inodes, 
> and the directory should operate only on inodes.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Created] (HDFS-10663) Comparison of two System.nanoTimes methods return values are against standard java recoemmendations.

2016-07-20 Thread Rushabh S Shah (JIRA)
Rushabh S Shah created HDFS-10663:
-

 Summary: Comparison of two System.nanoTimes methods return values 
are against standard java recoemmendations.
 Key: HDFS-10663
 URL: https://issues.apache.org/jira/browse/HDFS-10663
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: namenode
Reporter: Rushabh S Shah
Assignee: Rushabh S Shah


I was chasing a bug where the namenode didn't declare a datanode dead even when 
the last contact time was 2.5 hours earlier.
Before I could debug, the datanode was re-imaged (all the logs were deleted) 
and the namenode was upgraded to new software.
While debugging, I came across this heartbeat check code, where the comparison 
of two System.nanoTime return values goes against the Java-recommended way.
Here is the hadoop code:
{code:title=DatanodeManager.java|borderStyle=solid}

  /** Is the datanode dead? */
  boolean isDatanodeDead(DatanodeDescriptor node) {
return (node.getLastUpdateMonotonic() <
(monotonicNow() - heartbeatExpireInterval));
  }
{code}

The monotonicNow() is calculated as:
{code:title=Time.java|borderStyle=solid}
  public static long monotonicNow() {
    final long NANOSECONDS_PER_MILLISECOND = 1000000;

return System.nanoTime() / NANOSECONDS_PER_MILLISECOND;
  }
{code}

As per the javadoc of System.nanoTime, it is clearly stated that we should 
subtract the two nanoTime outputs rather than compare them directly:
{noformat}
To compare two nanoTime values

 long t0 = System.nanoTime();
 ...
 long t1 = System.nanoTime();
one should use t1 - t0 < 0, not t1 < t0, because of the possibility of 
numerical overflow.
{noformat}
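
To make the overflow point concrete, here is a small self-contained demo (illustrative only, not the proposed patch); applied to the snippet above, one overflow-safe way to express the check would be {{monotonicNow() - node.getLastUpdateMonotonic() > heartbeatExpireInterval}}.

{code:title=Why subtraction is the safe comparison (illustrative demo)|borderStyle=solid}
public class NanoTimeCompareDemo {
  public static void main(String[] args) {
    // Simulate a nanoTime reading close to Long.MAX_VALUE that then wraps.
    long t0 = Long.MAX_VALUE - 10;
    long t1 = t0 + 20;  // overflows and wraps to a large negative value

    System.out.println(t1 < t0);      // true:  direct comparison wrongly says t1 is earlier
    System.out.println(t1 - t0 < 0);  // false: subtraction correctly says t1 is not earlier
  }
}
{code}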




--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-10660) Expose storage policy apis via HDFSAdmin interface

2016-07-20 Thread Rakesh R (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-10660?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Rakesh R updated HDFS-10660:

Attachment: HDFS-10660-00.patch

> Expose storage policy apis via HDFSAdmin interface
> --
>
> Key: HDFS-10660
> URL: https://issues.apache.org/jira/browse/HDFS-10660
> Project: Hadoop HDFS
>  Issue Type: Improvement
>Reporter: Rakesh R
>Assignee: Rakesh R
> Attachments: HDFS-10660-00.patch
>
>
> Presently, {{org.apache.hadoop.hdfs.client.HdfsAdmin.java}} interface has 
> only {{#setStoragePolicy()}} API exposed. This jira is to add the following 
> set of apis into HdfsAdmin.
> {code}
> HdfsAdmin#unsetStoragePolicy
> HdfsAdmin#getStoragePolicy
> HdfsAdmin#getAllStoragePolicies
> {code}
> Thanks [~arpitagarwal] for the offline discussions.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-10663) Comparison of two System.nanoTime methods return values are against standard java recoemmendations.

2016-07-20 Thread Rushabh S Shah (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-10663?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Rushabh S Shah updated HDFS-10663:
--
Summary: Comparison of two System.nanoTime methods return values are 
against standard java recoemmendations.  (was: Comparison of two 
System.nanoTimes methods return values are against standard java 
recoemmendations.)

> Comparison of two System.nanoTime methods return values are against standard 
> java recoemmendations.
> ---
>
> Key: HDFS-10663
> URL: https://issues.apache.org/jira/browse/HDFS-10663
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: namenode
>Reporter: Rushabh S Shah
>Assignee: Rushabh S Shah
>
> I was chasing a bug where the namenode didn't declare a datanode dead even 
> when the last contact time was 2.5 hours before.
> Before I could debug, the datanode was re-imaged (all the logs were deleted) 
> and the namenode was upgraded to new software.
> While debugging, I came across this heartbeat check code where the comparison 
> of two System.nanoTime() values is against the Java-recommended way.
> Here is the hadoop code:
> {code:title=DatanodeManager.java|borderStyle=solid}
>   /** Is the datanode dead? */
>   boolean isDatanodeDead(DatanodeDescriptor node) {
> return (node.getLastUpdateMonotonic() <
> (monotonicNow() - heartbeatExpireInterval));
>   }
> {code}
> The monotonicNow() is calculated as:
> {code:title=Time.java|borderStyle=solid}
>   public static long monotonicNow() {
> final long NANOSECONDS_PER_MILLISECOND = 1000000;
> return System.nanoTime() / NANOSECONDS_PER_MILLISECOND;
>   }
> {code}
> As per the javadoc of System.nanoTime, it is clearly stated that we should 
> subtract the two nanoTime outputs:
> {noformat}
> To compare two nanoTime values
>  long t0 = System.nanoTime();
>  ...
>  long t1 = System.nanoTime();
> one should use t1 - t0 < 0, not t1 < t0, because of the possibility of 
> numerical overflow.
> {noformat}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Resolved] (HDFS-10661) Make MiniDFSCluster AutoCloseable

2016-07-20 Thread John Zhuge (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-10661?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

John Zhuge resolved HDFS-10661.
---
Resolution: Duplicate

[~ajisakaa] Looks like a dup of HDFS-10287. Please re-open if you think 
otherwise.

> Make MiniDFSCluster AutoCloseable
> -
>
> Key: HDFS-10661
> URL: https://issues.apache.org/jira/browse/HDFS-10661
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: test
>Reporter: Akira Ajisaka
>
> If we make MiniDFSCluster AutoCloseable, we can create a MiniDFSCluster 
> instance using a try-with-resources statement. That way we don't have to 
> shut down the cluster in a finally clause every time.
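
For reference, a minimal sketch of the usage this would enable, assuming {{close()}} simply delegates to the existing {{shutdown()}}:

{code:title=Sketch of the intended try-with-resources usage|borderStyle=solid}
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.hdfs.MiniDFSCluster;

public class MiniDFSClusterTryWithResources {
  public void example() throws Exception {
    Configuration conf = new Configuration();
    // Once MiniDFSCluster implements AutoCloseable, no finally block is needed;
    // the cluster is shut down automatically when the try block exits.
    try (MiniDFSCluster cluster = new MiniDFSCluster.Builder(conf)
        .numDataNodes(1)
        .build()) {
      cluster.waitActive();
      FileSystem fs = cluster.getFileSystem();
      // ... exercise fs in the test body ...
    }
  }
}
{code}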



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-10663) Comparison of two System.nanoTime methods return values are against standard java recoemmendations.

2016-07-20 Thread Rushabh S Shah (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-10663?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Rushabh S Shah updated HDFS-10663:
--
Description: 
I was chasing a bug where the namenode didn't declare a datanode dead even when 
the last contact time was 2.5 hours before.
Before I could debug, the datanode was re-imaged (all the logs were deleted) 
and the namenode was restarted and upgraded to new software.
While debugging, I came across this heartbeat check code where the comparison 
of two System.nanoTime() values is against the Java-recommended way.
Here is the hadoop code:
{code:title=DatanodeManager.java|borderStyle=solid}

  /** Is the datanode dead? */
  boolean isDatanodeDead(DatanodeDescriptor node) {
return (node.getLastUpdateMonotonic() <
(monotonicNow() - heartbeatExpireInterval));
  }
{code}

The monotonicNow() is calculated as:
{code:title=Time.java|borderStyle=solid}
  public static long monotonicNow() {
final long NANOSECONDS_PER_MILLISECOND = 1000000;

return System.nanoTime() / NANOSECONDS_PER_MILLISECOND;
  }
{code}

As per the javadoc of System.nanoTime, it is clearly stated that we should subtract 
the two nanoTime outputs:
{noformat}
To compare two nanoTime values

 long t0 = System.nanoTime();
 ...
 long t1 = System.nanoTime();
one should use t1 - t0 < 0, not t1 < t0, because of the possibility of 
numerical overflow.
{noformat}


  was:
I was chasing a bug where the namenode didn't declare a datanode dead even when 
the last contact time was 2.5 hours before.
Before I could debug, the datanode was re-imaged (all the logs were deleted) 
and the namenode was upgraded to new software.
While debugging, I came across this heartbeat check code where the comparison 
of two System.nanoTime() values is against the Java-recommended way.
Here is the hadoop code:
{code:title=DatanodeManager.java|borderStyle=solid}

  /** Is the datanode dead? */
  boolean isDatanodeDead(DatanodeDescriptor node) {
return (node.getLastUpdateMonotonic() <
(monotonicNow() - heartbeatExpireInterval));
  }
{code}

The monotonicNow() is calculated as:
{code:title=Time.java|borderStyle=solid}
  public static long monotonicNow() {
final long NANOSECONDS_PER_MILLISECOND = 1000000;

return System.nanoTime() / NANOSECONDS_PER_MILLISECOND;
  }
{code}

As per the javadoc of System.nanoTime, it is clearly stated that we should subtract 
the two nanoTime outputs:
{noformat}
To compare two nanoTime values

 long t0 = System.nanoTime();
 ...
 long t1 = System.nanoTime();
one should use t1 - t0 < 0, not t1 < t0, because of the possibility of 
numerical overflow.
{noformat}



> Comparison of two System.nanoTime methods return values are against standard 
> java recoemmendations.
> ---
>
> Key: HDFS-10663
> URL: https://issues.apache.org/jira/browse/HDFS-10663
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: namenode
>Reporter: Rushabh S Shah
>Assignee: Rushabh S Shah
>
> I was chasing a bug where the namenode didn't declare a datanode dead even 
> when the last contact time was 2.5 hours before.
> Before I could debug, the datanode was re-imaged (all the logs were deleted) 
> and the namenode was restarted and upgraded to new software.
> While debugging, I came across this heartbeat check code where the comparison 
> of two System.nanoTime() values is against the Java-recommended way.
> Here is the hadoop code:
> {code:title=DatanodeManager.java|borderStyle=solid}
>   /** Is the datanode dead? */
>   boolean isDatanodeDead(DatanodeDescriptor node) {
> return (node.getLastUpdateMonotonic() <
> (monotonicNow() - heartbeatExpireInterval));
>   }
> {code}
> The monotonicNow() is calculated as:
> {code:title=Time.java|borderStyle=solid}
>   public static long monotonicNow() {
> final long NANOSECONDS_PER_MILLISECOND = 1000000;
> return System.nanoTime() / NANOSECONDS_PER_MILLISECOND;
>   }
> {code}
> As per the javadoc of System.nanoTime, it is clearly stated that we should 
> subtract the two nanoTime outputs:
> {noformat}
> To compare two nanoTime values
>  long t0 = System.nanoTime();
>  ...
>  long t1 = System.nanoTime();
> one should use t1 - t0 < 0, not t1 < t0, because of the possibility of 
> numerical overflow.
> {noformat}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-10660) Expose storage policy apis via HDFSAdmin interface

2016-07-20 Thread Rakesh R (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-10660?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15386374#comment-15386374
 ] 

Rakesh R commented on HDFS-10660:
-

Attached patch to support storage policy apis in {{HdfsAdmin}}. Appreciate 
reviews, thanks!
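
For reviewers who want the caller's view, this is roughly how the proposed APIs would be used once they land (illustrative only; the NameNode URI is a placeholder and the exact signatures are in the attached patch):

{code:title=Illustrative usage of the proposed HdfsAdmin APIs|borderStyle=solid}
import java.io.IOException;
import java.net.URI;
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.BlockStoragePolicySpi;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.hdfs.client.HdfsAdmin;

public class StoragePolicyAdminExample {
  public static void main(String[] args) throws IOException {
    // "hdfs://nn:8020" is a placeholder NameNode URI.
    HdfsAdmin admin = new HdfsAdmin(URI.create("hdfs://nn:8020"), new Configuration());
    Path dir = new Path("/data/cold");

    admin.setStoragePolicy(dir, "COLD");                        // already exposed today
    BlockStoragePolicySpi policy = admin.getStoragePolicy(dir); // proposed in this jira
    System.out.println("policy on " + dir + " = " + policy.getName());
    admin.unsetStoragePolicy(dir);                              // proposed in this jira
    for (BlockStoragePolicySpi p : admin.getAllStoragePolicies()) { // proposed in this jira
      System.out.println(p.getName());
    }
  }
}
{code}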

> Expose storage policy apis via HDFSAdmin interface
> --
>
> Key: HDFS-10660
> URL: https://issues.apache.org/jira/browse/HDFS-10660
> Project: Hadoop HDFS
>  Issue Type: Improvement
>Reporter: Rakesh R
>Assignee: Rakesh R
> Attachments: HDFS-10660-00.patch
>
>
> Presently, {{org.apache.hadoop.hdfs.client.HdfsAdmin.java}} interface has 
> only {{#setStoragePolicy()}} API exposed. This jira is to add the following 
> set of apis into HdfsAdmin.
> {code}
> HdfsAdmin#unsetStoragePolicy
> HdfsAdmin#getStoragePolicy
> HdfsAdmin#getAllStoragePolicies
> {code}
> Thanks [~arpitagarwal] for the offline discussions.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-10663) Comparison of two System.nanoTime methods return values are against standard java recoemmendations.

2016-07-20 Thread Rushabh S Shah (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-10663?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Rushabh S Shah updated HDFS-10663:
--
Description: 
I was chasing a bug where the namenode didn't declare a datanode dead even when 
the last contact time was 2.5 hours before.
Before I could debug, the datanode was re-imaged (all the logs were deleted) 
and the namenode was restarted and upgraded to new software.
While debugging, I came across this heartbeat check code where the comparison 
of two System.nanoTime() values is against Java's recommended way.
Here is the hadoop code:
{code:title=DatanodeManager.java|borderStyle=solid}

  /** Is the datanode dead? */
  boolean isDatanodeDead(DatanodeDescriptor node) {
return (node.getLastUpdateMonotonic() <
(monotonicNow() - heartbeatExpireInterval));
  }
{code}

The monotonicNow() is calculated as:
{code:title=Time.java|borderStyle=solid}
  public static long monotonicNow() {
final long NANOSECONDS_PER_MILLISECOND = 1000000;

return System.nanoTime() / NANOSECONDS_PER_MILLISECOND;
  }
{code}

As per the javadoc of System.nanoTime, it is clearly stated that we should subtract 
the two nanoTime outputs:
{noformat}
To compare two nanoTime values

 long t0 = System.nanoTime();
 ...
 long t1 = System.nanoTime();
one should use t1 - t0 < 0, not t1 < t0, because of the possibility of 
numerical overflow.
{noformat}


  was:
I was chasing a bug where the namenode didn't declare a datanode dead even when 
the last contact time was 2.5 hours before.
Before I could debug, the datanode was re-imaged (all the logs were deleted) 
and the namenode was restarted and upgraded to new software.
While debugging, I came across this heartbeat check code where the comparison 
of two System.nanoTime() values is against the Java-recommended way.
Here is the hadoop code:
{code:title=DatanodeManager.java|borderStyle=solid}

  /** Is the datanode dead? */
  boolean isDatanodeDead(DatanodeDescriptor node) {
return (node.getLastUpdateMonotonic() <
(monotonicNow() - heartbeatExpireInterval));
  }
{code}

The monotonicNow() is calculated as:
{code:title=Time.java|borderStyle=solid}
  public static long monotonicNow() {
final long NANOSECONDS_PER_MILLISECOND = 1000000;

return System.nanoTime() / NANOSECONDS_PER_MILLISECOND;
  }
{code}

As per the javadoc of System.nanoTime, it is clearly stated that we should subtract 
the two nanoTime outputs:
{noformat}
To compare two nanoTime values

 long t0 = System.nanoTime();
 ...
 long t1 = System.nanoTime();
one should use t1 - t0 < 0, not t1 < t0, because of the possibility of 
numerical overflow.
{noformat}



> Comparison of two System.nanoTime methods return values are against standard 
> java recoemmendations.
> ---
>
> Key: HDFS-10663
> URL: https://issues.apache.org/jira/browse/HDFS-10663
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: namenode
>Reporter: Rushabh S Shah
>Assignee: Rushabh S Shah
>
> I was chasing a bug where the namenode didn't declare a datanode dead even 
> when the last contact time was 2.5 hours before.
> Before I could debug, the datanode was re-imaged (all the logs were deleted) 
> and the namenode was restarted and upgraded to new software.
> While debugging, I came across this heartbeat check code where the comparison 
> of two System.nanoTime() values is against Java's recommended way.
> Here is the hadoop code:
> {code:title=DatanodeManager.java|borderStyle=solid}
>   /** Is the datanode dead? */
>   boolean isDatanodeDead(DatanodeDescriptor node) {
> return (node.getLastUpdateMonotonic() <
> (monotonicNow() - heartbeatExpireInterval));
>   }
> {code}
> The monotonicNow() is calculated as:
> {code:title=Time.java|borderStyle=solid}
>   public static long monotonicNow() {
> final long NANOSECONDS_PER_MILLISECOND = 1000000;
> return System.nanoTime() / NANOSECONDS_PER_MILLISECOND;
>   }
> {code}
> As per the javadoc of System.nanoTime, it is clearly stated that we should 
> subtract the two nanoTime outputs:
> {noformat}
> To compare two nanoTime values
>  long t0 = System.nanoTime();
>  ...
>  long t1 = System.nanoTime();
> one should use t1 - t0 < 0, not t1 < t0, because of the possibility of 
> numerical overflow.
> {noformat}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-10661) Make MiniDFSCluster AutoCloseable

2016-07-20 Thread Akira Ajisaka (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-10661?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15386377#comment-15386377
 ] 

Akira Ajisaka commented on HDFS-10661:
--

Thanks [~jzhuge] for the information and closing this issue. I'll review 
HDFS-10287.

> Make MiniDFSCluster AutoCloseable
> -
>
> Key: HDFS-10661
> URL: https://issues.apache.org/jira/browse/HDFS-10661
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: test
>Reporter: Akira Ajisaka
>
> If we make MiniDFSCluster AutoCloseable, we can create a MiniDFSCluster 
> instance using a try-with-resources statement. That way we don't have to 
> shut down the cluster in a finally clause every time.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-10660) Expose storage policy apis via HDFSAdmin interface

2016-07-20 Thread Rakesh R (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-10660?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Rakesh R updated HDFS-10660:

Status: Patch Available  (was: Open)

> Expose storage policy apis via HDFSAdmin interface
> --
>
> Key: HDFS-10660
> URL: https://issues.apache.org/jira/browse/HDFS-10660
> Project: Hadoop HDFS
>  Issue Type: Improvement
>Reporter: Rakesh R
>Assignee: Rakesh R
> Attachments: HDFS-10660-00.patch
>
>
> Presently, {{org.apache.hadoop.hdfs.client.HdfsAdmin.java}} interface has 
> only {{#setStoragePolicy()}} API exposed. This jira is to add the following 
> set of apis into HdfsAdmin.
> {code}
> HdfsAdmin#unsetStoragePolicy
> HdfsAdmin#getStoragePolicy
> HdfsAdmin#getAllStoragePolicies
> {code}
> Thanks [~arpitagarwal] for the offline discussions.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-10287) MiniDFSCluster should implement AutoCloseable

2016-07-20 Thread Akira Ajisaka (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-10287?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15386388#comment-15386388
 ] 

Akira Ajisaka commented on HDFS-10287:
--

Hi [~boky01], would you rebase the patch for the latest trunk?

> MiniDFSCluster should implement AutoCloseable
> -
>
> Key: HDFS-10287
> URL: https://issues.apache.org/jira/browse/HDFS-10287
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: test
>Affects Versions: 2.7.0
>Reporter: John Zhuge
>Assignee: Andras Bokor
>Priority: Trivial
> Attachments: HDFS-10287.01.patch, HDFS-10287.02.patch
>
>
> {{MiniDFSCluster}} should implement {{AutoCloseable}} in order to support 
> [try-with-resources|https://docs.oracle.com/javase/tutorial/essential/exceptions/tryResourceClose.html].
>  It will make test code a little cleaner and more reliable.
> Since {{AutoCloseable}} is only in Java 1.7 or later, this cannot be 
> backported to Hadoop versions prior to 2.7.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-10616) Improve performance of path handling

2016-07-20 Thread Daryn Sharp (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-10616?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Daryn Sharp updated HDFS-10616:
---
Attachment: 2.6-2.7.1-heap.png

Here's an illustration of how the GC characteristics changed on a moderately sized 
and lightly loaded NN (by Y! standards) when we upgraded to 2.7 early this year.  
These path changes and the forthcoming IPC changes are the primary optimizations 
for returning to 2.6 behavior.  (Note we still had to increase heap sizes when 
upgrading to 2.7, as seen at the tail of the graph.)

> Improve performance of path handling
> 
>
> Key: HDFS-10616
> URL: https://issues.apache.org/jira/browse/HDFS-10616
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: hdfs
>Affects Versions: 2.0.0-alpha
>Reporter: Daryn Sharp
>Assignee: Daryn Sharp
> Attachments: 2.6-2.7.1-heap.png
>
>
> Path handling in the namesystem and directory is very inefficient.  The path 
> is repeatedly resolved, decomposed into path components, recombined into a full 
> path, and parsed again throughout the system.  This is directly inefficient for 
> general performance, and indirectly so via unnecessary pressure on young-gen GC.
> The namesystem should operate only on paths, parsing each path once into inodes, 
> and the directory should operate only on inodes.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-10287) MiniDFSCluster should implement AutoCloseable

2016-07-20 Thread Akira Ajisaka (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-10287?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Akira Ajisaka updated HDFS-10287:
-
Priority: Minor  (was: Trivial)

> MiniDFSCluster should implement AutoCloseable
> -
>
> Key: HDFS-10287
> URL: https://issues.apache.org/jira/browse/HDFS-10287
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: test
>Affects Versions: 2.7.0
>Reporter: John Zhuge
>Assignee: Andras Bokor
>Priority: Minor
> Attachments: HDFS-10287.01.patch, HDFS-10287.02.patch
>
>
> {{MiniDFSCluster}} should implement {{AutoCloseable}} in order to support 
> [try-with-resources|https://docs.oracle.com/javase/tutorial/essential/exceptions/tryResourceClose.html].
>  It will make test code a little cleaner and more reliable.
> Since {{AutoCloseable}} is only in Java 1.7 or later, this cannot be 
> backported to Hadoop versions prior to 2.7.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-6962) ACLs inheritance conflict with umaskmode

2016-07-20 Thread John Zhuge (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-6962?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

John Zhuge updated HDFS-6962:
-
Attachment: HDFS-6962.007.patch

[~cnauroth] and [~eddyxu] Please review patch 007.

Diff from 006:
* Add CLI tests {{TestAclCLIWithPosixAclInheritance}} based on {{TestAclCLI}}
* No longer add field {{createModes}} to {{INodeWithAdditionalFields}}. 
Instead, add a new feature {{CreateModesFeature}} to store the create modes. In this 
way, there is no penalty when POSIX ACL inheritance is disabled or for any inode not 
associated with a create request.
* Remove the {{CreateModesFeature}} feature once the default ACL has been processed.
* There is an added cost of adding and removing the new feature.

TODO:
* Support webhdfs

> ACLs inheritance conflict with umaskmode
> 
>
> Key: HDFS-6962
> URL: https://issues.apache.org/jira/browse/HDFS-6962
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: security
>Affects Versions: 2.4.1
> Environment: CentOS release 6.5 (Final)
>Reporter: LINTE
>Assignee: John Zhuge
>Priority: Critical
>  Labels: hadoop, security
> Attachments: HDFS-6962.001.patch, HDFS-6962.002.patch, 
> HDFS-6962.003.patch, HDFS-6962.004.patch, HDFS-6962.005.patch, 
> HDFS-6962.006.patch, HDFS-6962.007.patch, HDFS-6962.1.patch, 
> disabled_new_client.log, disabled_old_client.log, enabled_new_client.log, 
> enabled_old_client.log, run
>
>
> In hdfs-site.xml 
> <property>
>   <name>dfs.umaskmode</name>
>   <value>027</value>
> </property>
> 1/ Create a directory as superuser
> bash# hdfs dfs -mkdir  /tmp/ACLS
> 2/ Set default ACLs on this directory: rwx access for group readwrite and user 
> toto
> bash# hdfs dfs -setfacl -m default:group:readwrite:rwx /tmp/ACLS
> bash# hdfs dfs -setfacl -m default:user:toto:rwx /tmp/ACLS
> 3/ check ACLs /tmp/ACLS/
> bash# hdfs dfs -getfacl /tmp/ACLS/
> # file: /tmp/ACLS
> # owner: hdfs
> # group: hadoop
> user::rwx
> group::r-x
> other::---
> default:user::rwx
> default:user:toto:rwx
> default:group::r-x
> default:group:readwrite:rwx
> default:mask::rwx
> default:other::---
> user::rwx | group::r-x | other::--- matches the umaskmode defined in 
> hdfs-site.xml, everything OK!
> default:group:readwrite:rwx allows the readwrite group rwx access for 
> inheritance.
> default:user:toto:rwx allows the toto user rwx access for inheritance.
> default:mask::rwx the inheritance mask is rwx, so no masking
> 4/ Create a subdir to test inheritance of ACL
> bash# hdfs dfs -mkdir  /tmp/ACLS/hdfs
> 5/ check ACLs /tmp/ACLS/hdfs
> bash# hdfs dfs -getfacl /tmp/ACLS/hdfs
> # file: /tmp/ACLS/hdfs
> # owner: hdfs
> # group: hadoop
> user::rwx
> user:toto:rwx   #effective:r-x
> group::r-x
> group:readwrite:rwx #effective:r-x
> mask::r-x
> other::---
> default:user::rwx
> default:user:toto:rwx
> default:group::r-x
> default:group:readwrite:rwx
> default:mask::rwx
> default:other::---
> Here we can see that the readwrite group has the rwx ACL but only r-x is effective 
> because the mask is r-x (mask::r-x), even though the default mask for inheritance 
> is set to default:mask::rwx on /tmp/ACLS/.
> 6/ Modify hdfs-site.xml and restart the namenode
> <property>
>   <name>dfs.umaskmode</name>
>   <value>010</value>
> </property>
> 7/ Create a subdir to test inheritance of ACL with new parameter umaskmode
> bash# hdfs dfs -mkdir  /tmp/ACLS/hdfs2
> 8/ Check ACL on /tmp/ACLS/hdfs2
> bash# hdfs dfs -getfacl /tmp/ACLS/hdfs2
> # file: /tmp/ACLS/hdfs2
> # owner: hdfs
> # group: hadoop
> user::rwx
> user:toto:rwx   #effective:rw-
> group::r-x  #effective:r--
> group:readwrite:rwx #effective:rw-
> mask::rw-
> other::---
> default:user::rwx
> default:user:toto:rwx
> default:group::r-x
> default:group:readwrite:rwx
> default:mask::rwx
> default:other::---
> So HDFS masks the ACL values (user, group and other, except the POSIX 
> owner) with the group mask of the dfs.umaskmode property when creating a 
> directory with inherited ACLs.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-10658) Reduce JsonFactory instance allocation in StartupProgressServlet

2016-07-20 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-10658?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15386398#comment-15386398
 ] 

Hadoop QA commented on HDFS-10658:
--

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
23s{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
1s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:red}-1{color} | {color:red} test4tests {color} | {color:red}  0m  
0s{color} | {color:red} The patch doesn't appear to include any new or modified 
tests. Please justify why no new tests are needed for this patch. Also please 
list what manual steps were performed to verify this patch. {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  6m 
58s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
54s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
28s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
59s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  0m 
14s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
53s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m  
0s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
54s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
49s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
49s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
24s{color} | {color:green} hadoop-hdfs-project/hadoop-hdfs: The patch generated 
0 new + 2 unchanged - 1 fixed = 2 total (was 3) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
59s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  0m 
12s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
59s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
59s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 71m 
57s{color} | {color:green} hadoop-hdfs in the patch passed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
18s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 92m 43s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:9560f25 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12819070/HDFS-10658.001.patch |
| JIRA Issue | HDFS-10658 |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  findbugs  checkstyle  |
| uname | Linux dac006a727bc 3.13.0-36-lowlatency #63-Ubuntu SMP PREEMPT Wed 
Sep 3 21:56:12 UTC 2014 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh 
|
| git revision | trunk / 1c9d2ab |
| Default Java | 1.8.0_91 |
| findbugs | v3.0.0 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-HDFS-Build/16102/testReport/ |
| modules | C: hadoop-hdfs-project/hadoop-hdfs U: 
hadoop-hdfs-project/hadoop-hdfs |
| Console output | 
https://builds.apache.org/job/PreCommit-HDFS-Build/16102/console |
| Powered by | Apache Yetus 0.4.0-SNAPSHOT   http://yetus.apache.org |


This message was automatically generated.



> Reduce JsonFactory instance allocation in StartupProgressServlet
> 
>
> Key: HDFS-10658
> URL: https://issues.apache.org/jira/browse/HDFS-10658
> Project: Hadoop HDFS
>  Issue Type: Bug
>Reporter: Yiqun Lin
>Assignee: Yiqun Lin
> Attachments: HDFS-10658.001.patch

[jira] [Commented] (HDFS-10425) Clean up NNStorage and TestSaveNamespace

2016-07-20 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-10425?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15386406#comment-15386406
 ] 

Hudson commented on HDFS-10425:
---

SUCCESS: Integrated in Hadoop-trunk-Commit #10126 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/10126/])
HDFS-10425. Clean up NNStorage and TestSaveNamespace. Contributed by (aajisaka: 
rev 38128baff40ee137376968f025e75827a4227ee7)
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/namenode/TestSaveNamespace.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/NNStorage.java


> Clean up NNStorage and TestSaveNamespace
> 
>
> Key: HDFS-10425
> URL: https://issues.apache.org/jira/browse/HDFS-10425
> Project: Hadoop HDFS
>  Issue Type: Improvement
>Reporter: Andras Bokor
>Assignee: Andras Bokor
>Priority: Minor
> Fix For: 3.0.0-alpha2
>
> Attachments: HDFS-10425.01.patch, HDFS-10425.02.patch
>
>
> Since I was working with NNStorage and TestSaveNamespace classes it is good 
> time take care with IDE and checkstyle warnings.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-10649) Remove unused PermissionStatus#applyUMask

2016-07-20 Thread Chen Liang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-10649?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Chen Liang updated HDFS-10649:
--
Attachment: HDFS-10649.002.patch

> Remove unused PermissionStatus#applyUMask
> -
>
> Key: HDFS-10649
> URL: https://issues.apache.org/jira/browse/HDFS-10649
> Project: Hadoop HDFS
>  Issue Type: Improvement
>Affects Versions: 2.6.0
>Reporter: John Zhuge
>Assignee: Chen Liang
>Priority: Trivial
>  Labels: newbie
> Attachments: HDFS-10649.001.patch, HDFS-10649.002.patch
>
>
> Class {{PermissionStatus}} is LimitedPrivate("HDFS", "MapReduce") and 
> Unstable.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-10649) Remove unused PermissionStatus#applyUMask

2016-07-20 Thread Chen Liang (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-10649?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15386440#comment-15386440
 ] 

Chen Liang commented on HDFS-10649:
---

I did not notice how applyUMask was implemented in the parent class there; fixed it 
in the updated patch.

> Remove unused PermissionStatus#applyUMask
> -
>
> Key: HDFS-10649
> URL: https://issues.apache.org/jira/browse/HDFS-10649
> Project: Hadoop HDFS
>  Issue Type: Improvement
>Affects Versions: 2.6.0
>Reporter: John Zhuge
>Assignee: Chen Liang
>Priority: Trivial
>  Labels: newbie
> Attachments: HDFS-10649.001.patch, HDFS-10649.002.patch
>
>
> Class {{PermissionStatus}} is LimitedPrivate("HDFS", "MapReduce") and 
> Unstable.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-10662) Optimize UTF8 string/byte conversions

2016-07-20 Thread Daryn Sharp (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-10662?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Daryn Sharp updated HDFS-10662:
---
Attachment: HDFS-10662.patch

> Optimize UTF8 string/byte conversions
> -
>
> Key: HDFS-10662
> URL: https://issues.apache.org/jira/browse/HDFS-10662
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: hdfs
>Reporter: Daryn Sharp
>Assignee: Daryn Sharp
> Attachments: HDFS-10662.patch
>
>
> String/byte conversions may take either a Charset instance or its canonical 
> name.  One might think a Charset instance would be faster due to avoiding a 
> lookup and instantiation of a Charset, but it's not.  The canonical string 
> name variants will cache the string encoder/decoder (obtained from a Charset) 
> resulting in better performance.
> LOG4J2-935 describes a real-world performance boost.  I micro-benched a 
> marginal runtime improvement on jdk 7/8.  However for a 16 byte path, using 
> the canonical name generated 50% less garbage.  For a 64 byte path, 25% of 
> the garbage.  Given the sheer number of times that paths are (re)parsed, the 
> cost adds up quickly.
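
For readers not familiar with the two call forms being compared, this is the difference in code; the caching and garbage notes in the comments paraphrase the description above rather than state anything new:

{code:title=The two UTF-8 conversion forms being compared (illustrative)|borderStyle=solid}
import java.io.UnsupportedEncodingException;
import java.nio.charset.StandardCharsets;

public class Utf8ConversionForms {
  // Charset-instance form: per the description, String does not cache the
  // encoder for this variant, so each call builds a fresh encoder and
  // produces more garbage.
  static byte[] viaCharsetInstance(String s) {
    return s.getBytes(StandardCharsets.UTF_8);
  }

  // Canonical-name form: per the description, String caches the encoder
  // obtained from the Charset, which is what yields the lower garbage numbers.
  static byte[] viaCanonicalName(String s) throws UnsupportedEncodingException {
    return s.getBytes("UTF-8");
  }
}
{code}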



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-10662) Optimize UTF8 string/byte conversions

2016-07-20 Thread Daryn Sharp (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-10662?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Daryn Sharp updated HDFS-10662:
---
Status: Patch Available  (was: Open)

> Optimize UTF8 string/byte conversions
> -
>
> Key: HDFS-10662
> URL: https://issues.apache.org/jira/browse/HDFS-10662
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: hdfs
>Reporter: Daryn Sharp
>Assignee: Daryn Sharp
> Attachments: HDFS-10662.patch
>
>
> String/byte conversions may take either a Charset instance or its canonical 
> name.  One might think a Charset instance would be faster due to avoiding a 
> lookup and instantiation of a Charset, but it's not.  The canonical string 
> name variants will cache the string encoder/decoder (obtained from a Charset) 
> resulting in better performance.
> LOG4J2-935 describes a real-world performance boost.  I micro-benched a 
> marginal runtime improvement on jdk 7/8.  However for a 16 byte path, using 
> the canonical name generated 50% less garbage.  For a 64 byte path, 25% of 
> the garbage.  Given the sheer number of times that paths are (re)parsed, the 
> cost adds up quickly.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-10625) VolumeScanner to report why a block is found bad

2016-07-20 Thread Rushabh S Shah (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-10625?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15386477#comment-15386477
 ] 

Rushabh S Shah commented on HDFS-10625:
---

[~yzhangal] [~jojochuang]: any further comments on the latest patch ?

>  VolumeScanner to report why a block is found bad
> -
>
> Key: HDFS-10625
> URL: https://issues.apache.org/jira/browse/HDFS-10625
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: datanode, hdfs
>Reporter: Yongjun Zhang
>Assignee: Rushabh S Shah
>  Labels: supportability
> Attachments: HDFS-10625-1.patch, HDFS-10625.patch
>
>
> VolumeScanner may report:
> {code}
> WARN org.apache.hadoop.hdfs.server.datanode.VolumeScanner: Reporting bad 
> blk_1170125248_96458336 on /d/dfs/dn
> {code}
> It would be helpful to report the reason why the block is bad, especially 
> when the block is corrupt, where is the first corrupted chunk in the block.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-10658) Reduce JsonFactory instance allocation in StartupProgressServlet

2016-07-20 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-10658?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15386547#comment-15386547
 ] 

Hadoop QA commented on HDFS-10658:
--

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
16s{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:red}-1{color} | {color:red} test4tests {color} | {color:red}  0m  
0s{color} | {color:red} The patch doesn't appear to include any new or modified 
tests. Please justify why no new tests are needed for this patch. Also please 
list what manual steps were performed to verify this patch. {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  7m 
46s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
53s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
28s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
55s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  0m 
12s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  2m  
0s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m  
2s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
58s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
58s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
58s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
27s{color} | {color:green} hadoop-hdfs-project/hadoop-hdfs: The patch generated 
0 new + 2 unchanged - 1 fixed = 2 total (was 3) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m  
9s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  0m 
11s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  2m  
6s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
58s{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 63m 28s{color} 
| {color:red} hadoop-hdfs in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
20s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 85m 33s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | hadoop.hdfs.server.namenode.TestEditLog |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:9560f25 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12819070/HDFS-10658.001.patch |
| JIRA Issue | HDFS-10658 |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  findbugs  checkstyle  |
| uname | Linux 7a662c32a075 3.13.0-36-lowlatency #63-Ubuntu SMP PREEMPT Wed 
Sep 3 21:56:12 UTC 2014 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh 
|
| git revision | trunk / 38128ba |
| Default Java | 1.8.0_91 |
| findbugs | v3.0.0 |
| unit | 
https://builds.apache.org/job/PreCommit-HDFS-Build/16107/artifact/patchprocess/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt
 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-HDFS-Build/16107/testReport/ |
| modules | C: hadoop-hdfs-project/hadoop-hdfs U: 
hadoop-hdfs-project/hadoop-hdfs |
| Console output | 
https://builds.apache.org/job/PreCommit-HDFS-Build/16107/console |
| Powered by | Apache Yetus 0.4.0-SNAPSHOT   http://yetus.apache.org |


This message was automatically generated.



> Reduce JsonFactory instance allocation in StartupProgressServlet
> 
>
> Key: HDFS-10658
> U

[jira] [Commented] (HDFS-10301) BlockReport retransmissions may lead to storages falsely being declared zombie if storage report processing happens out of order

2016-07-20 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-10301?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15386538#comment-15386538
 ] 

Hadoop QA commented on HDFS-10301:
--

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
21s{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 4 new or modified test 
files. {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  6m 
57s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
53s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
36s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m  
1s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  0m 
11s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
50s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
56s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
53s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
47s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
47s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
27s{color} | {color:green} hadoop-hdfs-project/hadoop-hdfs: The patch generated 
0 new + 368 unchanged - 12 fixed = 368 total (was 380) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
54s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  0m 
11s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
58s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
55s{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 60m 55s{color} 
| {color:red} hadoop-hdfs in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
21s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 81m 27s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | hadoop.hdfs.server.namenode.TestEditLog |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:9560f25 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12818943/HDFS-10301.011.patch |
| JIRA Issue | HDFS-10301 |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  findbugs  checkstyle  |
| uname | Linux c29b1d2b82aa 3.13.0-36-lowlatency #63-Ubuntu SMP PREEMPT Wed 
Sep 3 21:56:12 UTC 2014 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh 
|
| git revision | trunk / 38128ba |
| Default Java | 1.8.0_91 |
| findbugs | v3.0.0 |
| unit | 
https://builds.apache.org/job/PreCommit-HDFS-Build/16106/artifact/patchprocess/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt
 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-HDFS-Build/16106/testReport/ |
| modules | C: hadoop-hdfs-project/hadoop-hdfs U: 
hadoop-hdfs-project/hadoop-hdfs |
| Console output | 
https://builds.apache.org/job/PreCommit-HDFS-Build/16106/console |
| Powered by | Apache Yetus 0.4.0-SNAPSHOT   http://yetus.apache.org |


This message was automatically generated.



> BlockReport retransmissions may lead to storages falsely being declared 
> zombie if storage report processing happens out of order
> 
>
> Key: HDFS-10301
>

[jira] [Updated] (HDFS-10301) BlockReport retransmissions may lead to storages falsely being declared zombie if storage report processing happens out of order

2016-07-20 Thread Konstantin Shvachko (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-10301?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Konstantin Shvachko updated HDFS-10301:
---
Status: Open  (was: Patch Available)

I am canceling Patch Available because Jenkins is spinning the build all over 
again. Some bug there?

> BlockReport retransmissions may lead to storages falsely being declared 
> zombie if storage report processing happens out of order
> 
>
> Key: HDFS-10301
> URL: https://issues.apache.org/jira/browse/HDFS-10301
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: namenode
>Affects Versions: 2.6.1
>Reporter: Konstantin Shvachko
>Assignee: Vinitha Reddy Gankidi
>Priority: Critical
> Attachments: HDFS-10301.002.patch, HDFS-10301.003.patch, 
> HDFS-10301.004.patch, HDFS-10301.005.patch, HDFS-10301.006.patch, 
> HDFS-10301.007.patch, HDFS-10301.008.patch, HDFS-10301.009.patch, 
> HDFS-10301.01.patch, HDFS-10301.010.patch, HDFS-10301.011.patch, 
> HDFS-10301.sample.patch, zombieStorageLogs.rtf
>
>
> When the NameNode is busy, a DataNode can time out sending a block report. Then it 
> sends the block report again. The NameNode, while processing these two reports 
> at the same time, can interleave processing of storages from the different reports. 
> This screws up the blockReportId field, which makes the NameNode think that some 
> storages are zombies. Replicas from zombie storages are immediately removed, 
> causing missing blocks.
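
To make the failure mode concrete, here is a tiny toy model (not NameNode code; the names and the zombie test are purely illustrative) of the interleaving described above: each storage remembers the id of the block report that last touched it, and any storage whose remembered id is not the current report id is treated as a zombie.

{code:title=Toy model of the interleaving, not NameNode code|borderStyle=solid}
import java.util.Arrays;
import java.util.HashMap;
import java.util.List;
import java.util.Map;

public class ZombieStorageToy {
  public static void main(String[] args) {
    List<String> storages = Arrays.asList("s1", "s2", "s3");
    Map<String, Long> lastSeenReportId = new HashMap<>();

    long originalReport = 1L;  // the report that timed out on the DataNode side
    long retransmission = 2L;  // the retransmitted report

    // Interleaved processing: s2 happens to be processed from the stale report.
    lastSeenReportId.put("s1", retransmission);
    lastSeenReportId.put("s2", originalReport);
    lastSeenReportId.put("s3", retransmission);

    long currentReportId = retransmission;
    for (String s : storages) {
      if (lastSeenReportId.get(s) != currentReportId) {
        // In the real NameNode this is where the storage's replicas would be removed.
        System.out.println(s + " falsely declared zombie");
      }
    }
  }
}
{code}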



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-10301) BlockReport retransmissions may lead to storages falsely being declared zombie if storage report processing happens out of order

2016-07-20 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-10301?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15386644#comment-15386644
 ] 

Hadoop QA commented on HDFS-10301:
--

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
19s{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 4 new or modified test 
files. {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  7m 
28s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
46s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
31s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
52s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  0m 
12s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
55s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
58s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
50s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
43s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
43s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
28s{color} | {color:green} hadoop-hdfs-project/hadoop-hdfs: The patch generated 
0 new + 368 unchanged - 12 fixed = 368 total (was 380) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
50s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  0m 
 9s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:red}-1{color} | {color:red} findbugs {color} | {color:red}  1m 
49s{color} | {color:red} patch/hadoop-hdfs-project/hadoop-hdfs no findbugs 
output file (hadoop-hdfs-project/hadoop-hdfs/target/findbugsXml.xml) {color} |
| {color:red}-1{color} | {color:red} javadoc {color} | {color:red}  1m  
1s{color} | {color:red} hadoop-hdfs-project_hadoop-hdfs generated 7 new + 0 
unchanged - 0 fixed = 7 total (was 0) {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 17m 30s{color} 
| {color:red} hadoop-hdfs in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
18s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 37m 48s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Timed out junit tests | org.apache.hadoop.hdfs.TestLeaseRecovery2 |
|   | org.apache.hadoop.hdfs.TestDatanodeDeath |
|   | org.apache.hadoop.hdfs.TestPread |
|   | org.apache.hadoop.hdfs.TestBlockStoragePolicy |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:9560f25 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12818943/HDFS-10301.011.patch |
| JIRA Issue | HDFS-10301 |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  findbugs  checkstyle  |
| uname | Linux ea46cbba5d17 3.13.0-36-lowlatency #63-Ubuntu SMP PREEMPT Wed 
Sep 3 21:56:12 UTC 2014 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh 
|
| git revision | trunk / 38128ba |
| Default Java | 1.8.0_91 |
| findbugs | v3.0.0 |
| findbugs | 
https://builds.apache.org/job/PreCommit-HDFS-Build/16112/artifact/patchprocess/patch-findbugs-hadoop-hdfs-project_hadoop-hdfs.txt
 |
| javadoc | 
https://builds.apache.org/job/PreCommit-HDFS-Build/16112/artifact/patchprocess/diff-javadoc-javadoc-hadoop-hdfs-project_hadoop-hdfs.txt
 |
| unit | 
https://builds.apache.org/job/PreCommit-HDFS-Build/16112/artifact/patchprocess/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt
 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-HDFS-Build/16112/testR

[jira] [Commented] (HDFS-10658) Reduce JsonFactory instance allocation in StartupProgressServlet

2016-07-20 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-10658?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15386645#comment-15386645
 ] 

Hadoop QA commented on HDFS-10658:
--

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
18s{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:red}-1{color} | {color:red} test4tests {color} | {color:red}  0m  
0s{color} | {color:red} The patch doesn't appear to include any new or modified 
tests. Please justify why no new tests are needed for this patch. Also please 
list what manual steps were performed to verify this patch. {color} |
| {color:red}-1{color} | {color:red} mvninstall {color} | {color:red}  4m 
18s{color} | {color:red} root in trunk failed. {color} |
| {color:red}-1{color} | {color:red} compile {color} | {color:red}  0m  
9s{color} | {color:red} hadoop-hdfs in trunk failed. {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
25s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
52s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  0m 
11s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
56s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
55s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
48s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
43s{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} javac {color} | {color:red}  0m 43s{color} 
| {color:red} hadoop-hdfs-project_hadoop-hdfs generated 36 new + 0 unchanged - 
0 fixed = 36 total (was 0) {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
23s{color} | {color:green} hadoop-hdfs-project/hadoop-hdfs: The patch generated 
0 new + 2 unchanged - 1 fixed = 2 total (was 3) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
51s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  0m 
10s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  2m  
5s{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} javadoc {color} | {color:red}  0m 
54s{color} | {color:red} hadoop-hdfs-project_hadoop-hdfs generated 7 new + 7 
unchanged - 0 fixed = 14 total (was 7) {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red}  3m 56s{color} 
| {color:red} hadoop-hdfs in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
16s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 20m 21s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:9560f25 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12819070/HDFS-10658.001.patch |
| JIRA Issue | HDFS-10658 |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  findbugs  checkstyle  |
| uname | Linux 6d5d1d54acab 3.13.0-36-lowlatency #63-Ubuntu SMP PREEMPT Wed 
Sep 3 21:56:12 UTC 2014 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh 
|
| git revision | trunk / 38128ba |
| Default Java | 1.8.0_91 |
| mvninstall | 
https://builds.apache.org/job/PreCommit-HDFS-Build/16113/artifact/patchprocess/branch-mvninstall-root.txt
 |
| compile | 
https://builds.apache.org/job/PreCommit-HDFS-Build/16113/artifact/patchprocess/branch-compile-hadoop-hdfs-project_hadoop-hdfs.txt
 |
| findbugs | v3.0.0 |
| javac | 
https://builds.apache.org/job/PreCommit-HDFS-Build/16113/artifact/patchprocess/diff-compile-javac-hadoop-hdfs-project_hadoop-hdfs.txt
 |
| javadoc | 
https://builds.apache.org/job/PreCommit-HDFS-Build/16113/artifact/patchprocess/diff-javadoc-javadoc-hadoop-hdfs-project_hadoop-hdfs.txt
 |
| unit | 
https://builds.apache.org/job/PreCommit-HDFS-Build/16113/artifact/patchproce

[jira] [Created] (HDFS-10664) layoutVersion mismatch between Namenode VERSION file and Journalnode VERSION file after cluster upgrade

2016-07-20 Thread Amit Anand (JIRA)
Amit Anand created HDFS-10664:
-

 Summary: layoutVersion mismatch between Namenode VERSION file and 
Journalnode VERSION file after cluster upgrade
 Key: HDFS-10664
 URL: https://issues.apache.org/jira/browse/HDFS-10664
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: ha, hdfs
Affects Versions: 2.7.1
Reporter: Amit Anand


After a cluster is upgraded I see a mismatch in {{layoutVersion}} between NN 
VERSION file and JN VERSION file.

Here is what I see:

Before cluster upgrade:
==
{code}
## Version file from NN current directory
namespaceID=109645726
clusterID=CID-edcb62c5-bc1f-49f5-addb-37827340b5de
cTime=0
storageType=NAME_NODE
blockpoolID=BP-786201894-10.0.100.11-1466026941507
layoutVersion=-60
{code}

{code}
## Version file from JN current directory
namespaceID=109645726
clusterID=CID-edcb62c5-bc1f-49f5-addb-37827340b5de
cTime=0
storageType=JOURNAL_NODE
layoutVersion=-60
{code}

After cluster upgrade:
=
{code}
## Version file from NN current directory
namespaceID=109645726
clusterID=CID-edcb62c5-bc1f-49f5-addb-37827340b5de
cTime=0
storageType=NAME_NODE
blockpoolID=BP-786201894-10.0.100.11-1466026941507
layoutVersion=-63
{code}

{code}
## Version file from JN current directory
namespaceID=109645726
clusterID=CID-edcb62c5-bc1f-49f5-addb-37827340b5de
cTime=0
storageType=JOURNAL_NODE
layoutVersion=-60
{code}

Since {{Namenode}} is what creates the {{Journalnode}} {{VERSION}} file during 
{{initializeSharedEdits}}, it should also update the file with correct 
information after the cluster is upgraded and {{hdfs dfsadmin -finalizeUpgrade}} 
has been executed.
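
For illustration, here is a minimal sketch that loads both VERSION files (they are 
plain java.util.Properties key=value files) and flags the layoutVersion mismatch. 
The paths are placeholders, not actual defaults.
{code}
import java.io.FileReader;
import java.io.IOException;
import java.util.Objects;
import java.util.Properties;

// Sketch only: load the NN and JN VERSION files and report a layoutVersion mismatch.
public class VersionCheck {
  static Properties load(String path) throws IOException {
    Properties props = new Properties();
    try (FileReader reader = new FileReader(path)) {
      props.load(reader);
    }
    return props;
  }

  public static void main(String[] args) throws IOException {
    Properties nn = load("/data/nn/current/VERSION");            // placeholder path
    Properties jn = load("/data/jn/mycluster/current/VERSION");  // placeholder path
    String nnLv = nn.getProperty("layoutVersion");
    String jnLv = jn.getProperty("layoutVersion");
    if (!Objects.equals(nnLv, jnLv)) {
      System.out.println("Mismatch: NN layoutVersion=" + nnLv
          + ", JN layoutVersion=" + jnLv);
    }
  }
}
{code}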




--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-10664) layoutVersion mismatch between Namenode VERSION file and Journalnode VERSION file after cluster upgrade

2016-07-20 Thread Amit Anand (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-10664?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Amit Anand updated HDFS-10664:
--
Description: 
After a cluster is upgraded I see a mismatch in {{layoutVersion}} between NN 
VERSION file and JN VERSION file.

Here is what I see:

Before cluster upgrade:
==
{code}
## Version file from NN current directory
namespaceID=109645726
clusterID=CID-edcb62c5-bc1f-49f5-addb-37827340b5de
cTime=0
storageType=NAME_NODE
blockpoolID=BP-786201894-10.0.100.11-1466026941507
layoutVersion=-60
{code}

{code}
## Version file from JN current directory
namespaceID=109645726
clusterID=CID-edcb62c5-bc1f-49f5-addb-37827340b5de
cTime=0
storageType=JOURNAL_NODE
layoutVersion=-60
{code}

After cluster upgrade:
=
{code}
## Version file from NN current directory
namespaceID=109645726
clusterID=CID-edcb62c5-bc1f-49f5-addb-37827340b5de
cTime=0
storageType=NAME_NODE
blockpoolID=BP-786201894-10.0.100.11-1466026941507
layoutVersion=-63
{code}

{code}
## Version file from JN current directory
namespaceID=109645726
clusterID=CID-edcb62c5-bc1f-49f5-addb-37827340b5de
cTime=0
storageType=JOURNAL_NODE
layoutVersion=-60
{code}

Since {{Namenode}} is what creates {{Journalnode}} {{VERSION}} file during 
{{initializeSharedEdits}}, it should also update the file with correct 
information after the cluster is upgraded and {{hdfs dfsadmin 
-finalizeUpgrade}} has been executed.


  was:
After a cluster is upgraded I see a mismatch in {{layoutVersion}} between NN 
VERSION file and JN VERSION file.

Here is what I see:

Before cluster upgrade:
==
{code}
## Version file from NN current directory
namespaceID=109645726
clusterID=CID-edcb62c5-bc1f-49f5-addb-37827340b5de
cTime=0
storageType=NAME_NODE
blockpoolID=BP-786201894-10.0.100.11-1466026941507
layoutVersion=-60
{code}

{code}
## Version file from JN current directory
namespaceID=109645726
clusterID=CID-edcb62c5-bc1f-49f5-addb-37827340b5de
cTime=0
storageType=JOURNAL_NODE
layoutVersion=-60
{code}

After cluster upgrade:
=
{code}
## Version file from NN current directory
namespaceID=109645726
clusterID=CID-edcb62c5-bc1f-49f5-addb-37827340b5de
cTime=0
storageType=NAME_NODE
blockpoolID=BP-786201894-10.0.100.11-1466026941507
layoutVersion=-63
{code}

{code}
## Version file from JN current directory
namespaceID=109645726
clusterID=CID-edcb62c5-bc1f-49f5-addb-37827340b5de
cTime=0
storageType=JOURNAL_NODE
layoutVersion=-60
{code}

Since {{Namenode}} is what creates {{Journalnode}} {{VERSION}} file during 
{{initializeSharedEdits}}, it should also update the file with correct 
information after the cluster is upgrade and {{hdfs dfsadmin -finalizeUpgrade}} 
has been executed.



> layoutVersion mismatch between Namenode VERSION file and Journalnode VERSION 
> file after cluster upgrade
> ---
>
> Key: HDFS-10664
> URL: https://issues.apache.org/jira/browse/HDFS-10664
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: ha, hdfs
>Affects Versions: 2.7.1
>Reporter: Amit Anand
>
> After a cluster is upgraded I see a mismatch in {{layoutVersion}} between NN 
> VERSION file and JN VERSION file.
> Here is what I see:
> Before cluster upgrade:
> ==
> {code}
> ## Version file from NN current directory
> namespaceID=109645726
> clusterID=CID-edcb62c5-bc1f-49f5-addb-37827340b5de
> cTime=0
> storageType=NAME_NODE
> blockpoolID=BP-786201894-10.0.100.11-1466026941507
> layoutVersion=-60
> {code}
> {code}
> ## Version file from JN current directory
> namespaceID=109645726
> clusterID=CID-edcb62c5-bc1f-49f5-addb-37827340b5de
> cTime=0
> storageType=JOURNAL_NODE
> layoutVersion=-60
> {code}
> After cluster upgrade:
> =
> {code}
> ## Version file from NN current directory
> namespaceID=109645726
> clusterID=CID-edcb62c5-bc1f-49f5-addb-37827340b5de
> cTime=0
> storageType=NAME_NODE
> blockpoolID=BP-786201894-10.0.100.11-1466026941507
> layoutVersion=-63
> {code}
> {code}
> ## Version file from JN current directory
> namespaceID=109645726
> clusterID=CID-edcb62c5-bc1f-49f5-addb-37827340b5de
> cTime=0
> storageType=JOURNAL_NODE
> layoutVersion=-60
> {code}
> Since {{Namenode}} is what creates {{Journalnode}} {{VERSION}} file during 
> {{initializeSharedEdits}}, it should also update the file with correct 
> information after the cluster is upgraded and {{hdfs dfsadmin 
> -finalizeUpgrade}} has been executed.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-10301) BlockReport retransmissions may lead to storages falsely being declared zombie if storage report processing happens out of order

2016-07-20 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-10301?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15386656#comment-15386656
 ] 

Hadoop QA commented on HDFS-10301:
--

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
17s{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 4 new or modified test 
files. {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  6m 
54s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
46s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
30s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
55s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  0m 
12s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
46s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
56s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
51s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
45s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
45s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
28s{color} | {color:green} hadoop-hdfs-project/hadoop-hdfs: The patch generated 
0 new + 368 unchanged - 12 fixed = 368 total (was 380) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
49s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  0m 
11s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
49s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
53s{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 61m 39s{color} 
| {color:red} hadoop-hdfs in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
22s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 81m 18s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | hadoop.hdfs.server.namenode.TestEditLog |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:9560f25 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12818943/HDFS-10301.011.patch |
| JIRA Issue | HDFS-10301 |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  findbugs  checkstyle  |
| uname | Linux c1a40f43f99c 3.13.0-36-lowlatency #63-Ubuntu SMP PREEMPT Wed 
Sep 3 21:56:12 UTC 2014 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh 
|
| git revision | trunk / 38128ba |
| Default Java | 1.8.0_91 |
| findbugs | v3.0.0 |
| unit | 
https://builds.apache.org/job/PreCommit-HDFS-Build/16111/artifact/patchprocess/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt
 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-HDFS-Build/16111/testReport/ |
| modules | C: hadoop-hdfs-project/hadoop-hdfs U: 
hadoop-hdfs-project/hadoop-hdfs |
| Console output | 
https://builds.apache.org/job/PreCommit-HDFS-Build/16111/console |
| Powered by | Apache Yetus 0.4.0-SNAPSHOT   http://yetus.apache.org |


This message was automatically generated.



> BlockReport retransmissions may lead to storages falsely being declared 
> zombie if storage report processing happens out of order
> 
>
> Key: HDFS-10301
>

[jira] [Commented] (HDFS-10287) MiniDFSCluster should implement AutoCloseable

2016-07-20 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-10287?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15386686#comment-15386686
 ] 

Hadoop QA commented on HDFS-10287:
--

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m  
0s{color} | {color:blue} Docker mode activated. {color} |
| {color:red}-1{color} | {color:red} patch {color} | {color:red}  0m  6s{color} 
| {color:red} HDFS-10287 does not apply to trunk. Rebase required? Wrong 
Branch? See https://wiki.apache.org/hadoop/HowToContribute for help. {color} |
\\
\\
|| Subsystem || Report/Notes ||
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12803063/HDFS-10287.02.patch |
| JIRA Issue | HDFS-10287 |
| Console output | 
https://builds.apache.org/job/PreCommit-HDFS-Build/16118/console |
| Powered by | Apache Yetus 0.4.0-SNAPSHOT   http://yetus.apache.org |


This message was automatically generated.



> MiniDFSCluster should implement AutoCloseable
> -
>
> Key: HDFS-10287
> URL: https://issues.apache.org/jira/browse/HDFS-10287
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: test
>Affects Versions: 2.7.0
>Reporter: John Zhuge
>Assignee: Andras Bokor
>Priority: Minor
> Attachments: HDFS-10287.01.patch, HDFS-10287.02.patch
>
>
> {{MiniDFSCluster}} should implement {{AutoCloseable}} in order to support 
> [try-with-resources|https://docs.oracle.com/javase/tutorial/essential/exceptions/tryResourceClose.html].
>  It will make test code a little cleaner and more reliable.
> Since {{AutoCloseable}} is only in Java 1.7 or later, this can not be 
> backported to Hadoop version prior to 2.7.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-10664) layoutVersion mismatch between Namenode VERSION file and Journalnode VERSION file after cluster upgrade

2016-07-20 Thread Chris Nauroth (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-10664?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15386685#comment-15386685
 ] 

Chris Nauroth commented on HDFS-10664:
--

[~aanand001c], thank you for filing this.  I had an old note to myself to file 
a JIRA for this, which I had overlooked.

I can confirm that this does happen.  In practice, I haven't observed any 
negative side effects from this, but I agree that we should update that file 
for consistency.

> layoutVersion mismatch between Namenode VERSION file and Journalnode VERSION 
> file after cluster upgrade
> ---
>
> Key: HDFS-10664
> URL: https://issues.apache.org/jira/browse/HDFS-10664
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: ha, hdfs
>Affects Versions: 2.7.1
>Reporter: Amit Anand
>
> After a cluster is upgraded I see a mismatch in {{layoutVersion}} between NN 
> VERSION file and JN VERSION file.
> Here is what I see:
> Before cluster upgrade:
> ==
> {code}
> ## Version file from NN current directory
> namespaceID=109645726
> clusterID=CID-edcb62c5-bc1f-49f5-addb-37827340b5de
> cTime=0
> storageType=NAME_NODE
> blockpoolID=BP-786201894-10.0.100.11-1466026941507
> layoutVersion=-60
> {code}
> {code}
> ## Version file from JN current directory
> namespaceID=109645726
> clusterID=CID-edcb62c5-bc1f-49f5-addb-37827340b5de
> cTime=0
> storageType=JOURNAL_NODE
> layoutVersion=-60
> {code}
> After cluster upgrade:
> =
> {code}
> ## Version file from NN current directory
> namespaceID=109645726
> clusterID=CID-edcb62c5-bc1f-49f5-addb-37827340b5de
> cTime=0
> storageType=NAME_NODE
> blockpoolID=BP-786201894-10.0.100.11-1466026941507
> layoutVersion=-63
> {code}
> {code}
> ## Version file from JN current directory
> namespaceID=109645726
> clusterID=CID-edcb62c5-bc1f-49f5-addb-37827340b5de
> cTime=0
> storageType=JOURNAL_NODE
> layoutVersion=-60
> {code}
> Since {{Namenode}} is what creates {{Journalnode}} {{VERSION}} file during 
> {{initializeSharedEdits}}, it should also update the file with correct 
> information after the cluster is upgraded and {{hdfs dfsadmin 
> -finalizeUpgrade}} has been executed.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-6962) ACLs inheritance conflict with umaskmode

2016-07-20 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-6962?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15386697#comment-15386697
 ] 

Hadoop QA commented on HDFS-6962:
-

| (/) *{color:green}+1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
22s{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 6 new or modified test 
files. {color} |
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
15s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  7m 
12s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  7m 
20s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  1m 
39s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  2m 
43s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  0m 
46s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  5m  
9s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  2m 
23s{color} | {color:green} trunk passed {color} |
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
17s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  2m 
24s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  8m 
55s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} cc {color} | {color:green}  8m 
55s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  8m 
55s{color} | {color:green} the patch passed {color} |
| {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange}  
2m  3s{color} | {color:orange} root: The patch generated 1 new + 1153 unchanged 
- 0 fixed = 1154 total (was 1153) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  2m 
43s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  0m 
44s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} xml {color} | {color:green}  0m  
4s{color} | {color:green} The patch has no ill-formed XML file. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  5m 
37s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  2m 
10s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  8m 
32s{color} | {color:green} hadoop-common in the patch passed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  1m  
5s{color} | {color:green} hadoop-hdfs-client in the patch passed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 79m  
9s{color} | {color:green} hadoop-hdfs in the patch passed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
30s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black}143m 12s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:9560f25 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12819176/HDFS-6962.007.patch |
| JIRA Issue | HDFS-6962 |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  findbugs  checkstyle  cc  xml  |
| uname | Linux 61097d96ed6a 3.13.0-36-lowlatency #63-Ubuntu SMP PREEMPT Wed 
Sep 3 21:56:12 UTC 2014 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh 
|
| git revision | trunk / 38128ba |
| Default Java | 1.8.0_91 |
| findbugs | v3.0.0 |
| checkstyle | 
https://builds.apache.org/job/PreCommit-HDFS-Bu

[jira] [Commented] (HDFS-10301) BlockReport retransmissions may lead to storages falsely being declared zombie if storage report processing happens out of order

2016-07-20 Thread Konstantin Shvachko (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-10301?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15386702#comment-15386702
 ] 

Konstantin Shvachko commented on HDFS-10301:


My general approach to protobuf structures is to minimize changes, especially 
with redundant fields.
It is very easy to add fields, as you demonstrated, but you can never remove 
them.
So add them only if you absolutely must.
But different people can of course have different approaches.

> BlockReport retransmissions may lead to storages falsely being declared 
> zombie if storage report processing happens out of order
> 
>
> Key: HDFS-10301
> URL: https://issues.apache.org/jira/browse/HDFS-10301
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: namenode
>Affects Versions: 2.6.1
>Reporter: Konstantin Shvachko
>Assignee: Vinitha Reddy Gankidi
>Priority: Critical
> Attachments: HDFS-10301.002.patch, HDFS-10301.003.patch, 
> HDFS-10301.004.patch, HDFS-10301.005.patch, HDFS-10301.006.patch, 
> HDFS-10301.007.patch, HDFS-10301.008.patch, HDFS-10301.009.patch, 
> HDFS-10301.01.patch, HDFS-10301.010.patch, HDFS-10301.011.patch, 
> HDFS-10301.sample.patch, zombieStorageLogs.rtf
>
>
> When NameNode is busy a DataNode can timeout sending a block report. Then it 
> sends the block report again. Then NameNode while process these two reports 
> at the same time can interleave processing storages from different reports. 
> This screws up the blockReportId field, which makes NameNode think that some 
> storages are zombie. Replicas from zombie storages are immediately removed, 
> causing missing blocks.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-10649) Remove unused PermissionStatus#applyUMask

2016-07-20 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-10649?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15386708#comment-15386708
 ] 

Hadoop QA commented on HDFS-10649:
--

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
24s{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:red}-1{color} | {color:red} test4tests {color} | {color:red}  0m  
0s{color} | {color:red} The patch doesn't appear to include any new or modified 
tests. Please justify why no new tests are needed for this patch. Also please 
list what manual steps were performed to verify this patch. {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  7m 
 7s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  8m 
15s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
26s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
58s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  0m 
14s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
26s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
45s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
39s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  6m 
46s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  6m 
46s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
22s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
51s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  0m 
13s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
33s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
48s{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 16m 48s{color} 
| {color:red} hadoop-common in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
20s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 48m 38s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Timed out junit tests | org.apache.hadoop.http.TestHttpServerLifecycle |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:9560f25 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12819177/HDFS-10649.002.patch |
| JIRA Issue | HDFS-10649 |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  findbugs  checkstyle  |
| uname | Linux 2f175d7134df 3.13.0-36-lowlatency #63-Ubuntu SMP PREEMPT Wed 
Sep 3 21:56:12 UTC 2014 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh 
|
| git revision | trunk / 38128ba |
| Default Java | 1.8.0_91 |
| findbugs | v3.0.0 |
| unit | 
https://builds.apache.org/job/PreCommit-HDFS-Build/16114/artifact/patchprocess/patch-unit-hadoop-common-project_hadoop-common.txt
 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-HDFS-Build/16114/testReport/ |
| modules | C: hadoop-common-project/hadoop-common U: 
hadoop-common-project/hadoop-common |
| Console output | 
https://builds.apache.org/job/PreCommit-HDFS-Build/16114/console |
| Powered by | Apache Yetus 0.4.0-SNAPSHOT   http://yetus.apache.org |


This message was automatically generated.



> Remove unused PermissionStatus#applyUMask
> -
>
> Key: HDFS-10649
> URL: https://issues.apache.org/jira/browse/HDFS-10649
> Project: Hadoop HDFS
>  Issue Ty

[jira] [Comment Edited] (HDFS-6962) ACLs inheritance conflict with umaskmode

2016-07-20 Thread John Zhuge (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-6962?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15386399#comment-15386399
 ] 

John Zhuge edited comment on HDFS-6962 at 7/20/16 10:40 PM:


[~cnauroth] and [~eddyxu] Please review patch 007.

Diff from 006:
* Add CLI tests {{TestAclCLIWithPosixAclInheritance}} based on {{TestAclCLI}}
* No longer add field {{createModes}} to {{INodeWithAdditionalFields}}. 
Instead, add a new feature {{CreateModesFeature}} to store the create modes. This 
way, there is no penalty when POSIX ACL inheritance is disabled or for any inode 
that is not currently being created. (A rough sketch of such a holder follows at 
the end of this comment.)
* Remove the {{CreateModesFeature}} once the default ACL has been processed, so 
the feature only exists on the inode for a short period of time.
* There is a cost to adding and removing the new feature.

TODO:
* Support webhdfs
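
For readers skimming the thread, a rough sketch of what such a short-lived 
create-modes holder could look like (names and shape are assumptions for 
illustration, not the actual patch):
{code}
import org.apache.hadoop.fs.permission.FsPermission;

// Sketch only: a short-lived holder for the masked and unmasked create modes,
// attached to an inode while it is being created and dropped once the parent's
// default ACL has been applied.
class CreateModesFeatureSketch {
  private final FsPermission masked;    // mode after the umask has been applied
  private final FsPermission unmasked;  // mode exactly as requested by the client

  CreateModesFeatureSketch(FsPermission masked, FsPermission unmasked) {
    this.masked = masked;
    this.unmasked = unmasked;
  }

  FsPermission getMasked() {
    return masked;
  }

  FsPermission getUnmasked() {
    return unmasked;
  }
}
{code}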


was (Author: jzhuge):
[~cnauroth] and [~eddyxu] Please review patch 007.

Diff from 006:
* Add CLI tests {{TestAclCLIWithPosixAclInheritance}} based on {{TestAclCLI}}
* No longer add field {{createModes}} to {{INodeWithAdditionalFields}}. 
Instead, add new feature {{CreateModesFeature}} to store create modes. In this 
way, no penalty when POSIX ACL inheritance is disable or for any inode not 
associated with a create request.
* Remove the feature {{CreateModesFeature}} once default ACL has been processed.
* There is added cost of adding and removing the new feature.

TODO:
* Support webhdfs

> ACLs inheritance conflict with umaskmode
> 
>
> Key: HDFS-6962
> URL: https://issues.apache.org/jira/browse/HDFS-6962
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: security
>Affects Versions: 2.4.1
> Environment: CentOS release 6.5 (Final)
>Reporter: LINTE
>Assignee: John Zhuge
>Priority: Critical
>  Labels: hadoop, security
> Attachments: HDFS-6962.001.patch, HDFS-6962.002.patch, 
> HDFS-6962.003.patch, HDFS-6962.004.patch, HDFS-6962.005.patch, 
> HDFS-6962.006.patch, HDFS-6962.007.patch, HDFS-6962.1.patch, 
> disabled_new_client.log, disabled_old_client.log, enabled_new_client.log, 
> enabled_old_client.log, run
>
>
> In hdfs-site.xml 
> 
> dfs.umaskmode
> 027
> 
> 1/ Create a directory as superuser
> bash# hdfs dfs -mkdir  /tmp/ACLS
> 2/ set default ACLs on this directory rwx access for group readwrite and user 
> toto
> bash# hdfs dfs -setfacl -m default:group:readwrite:rwx /tmp/ACLS
> bash# hdfs dfs -setfacl -m default:user:toto:rwx /tmp/ACLS
> 3/ check ACLs /tmp/ACLS/
> bash# hdfs dfs -getfacl /tmp/ACLS/
> # file: /tmp/ACLS
> # owner: hdfs
> # group: hadoop
> user::rwx
> group::r-x
> other::---
> default:user::rwx
> default:user:toto:rwx
> default:group::r-x
> default:group:readwrite:rwx
> default:mask::rwx
> default:other::---
> user::rwx | group::r-x | other::--- matches the umaskmode defined in 
> hdfs-site.xml, everything is OK!
> default:group:readwrite:rwx allows the readwrite group rwx access for 
> inheritance.
> default:user:toto:rwx allows the toto user rwx access for inheritance.
> default:mask::rwx the inheritance mask is rwx, so no mask
> 4/ Create a subdir to test inheritance of ACL
> bash# hdfs dfs -mkdir  /tmp/ACLS/hdfs
> 5/ check ACLs /tmp/ACLS/hdfs
> bash# hdfs dfs -getfacl /tmp/ACLS/hdfs
> # file: /tmp/ACLS/hdfs
> # owner: hdfs
> # group: hadoop
> user::rwx
> user:toto:rwx   #effective:r-x
> group::r-x
> group:readwrite:rwx #effective:r-x
> mask::r-x
> other::---
> default:user::rwx
> default:user:toto:rwx
> default:group::r-x
> default:group:readwrite:rwx
> default:mask::rwx
> default:other::---
> Here we can see that the readwrite group has an rwx ACL but only r-x is effective 
> because the mask is r-x (mask::r-x), even though the default mask for inheritance 
> is set to default:mask::rwx on /tmp/ACLS/
> 6/ Modify hdfs-site.xml and restart the namenode
> 
> dfs.umaskmode
> 010
> 
> 7/ Create a subdir to test inheritance of ACL with new parameter umaskmode
> bash# hdfs dfs -mkdir  /tmp/ACLS/hdfs2
> 8/ Check ACL on /tmp/ACLS/hdfs2
> bash# hdfs dfs -getfacl /tmp/ACLS/hdfs2
> # file: /tmp/ACLS/hdfs2
> # owner: hdfs
> # group: hadoop
> user::rwx
> user:toto:rwx   #effective:rw-
> group::r-x  #effective:r--
> group:readwrite:rwx #effective:rw-
> mask::rw-
> other::---
> default:user::rwx
> default:user:toto:rwx
> default:group::r-x
> default:group:readwrite:rwx
> default:mask::rwx
> default:other::---
> So HDFS masks the ACL values (user, group and other -- except the POSIX 
> owner --) with the group mask of the dfs.umaskmode property when creating a 
> directory with an inherited ACL.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: h

[jira] [Commented] (HDFS-10660) Expose storage policy apis via HDFSAdmin interface

2016-07-20 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-10660?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15386747#comment-15386747
 ] 

Hadoop QA commented on HDFS-10660:
--

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
17s{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 1 new or modified test 
files. {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  7m 
18s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
54s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
27s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m  
4s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  0m 
13s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
53s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
58s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
59s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
53s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
53s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
26s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m  
0s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  0m 
12s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
55s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
52s{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 58m 54s{color} 
| {color:red} hadoop-hdfs in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
18s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 79m 45s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | 
hadoop.hdfs.server.datanode.TestDataNodeErasureCodingMetrics |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:9560f25 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12819165/HDFS-10660-00.patch |
| JIRA Issue | HDFS-10660 |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  findbugs  checkstyle  |
| uname | Linux 6df596558031 3.13.0-36-lowlatency #63-Ubuntu SMP PREEMPT Wed 
Sep 3 21:56:12 UTC 2014 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh 
|
| git revision | trunk / 38128ba |
| Default Java | 1.8.0_91 |
| findbugs | v3.0.0 |
| unit | 
https://builds.apache.org/job/PreCommit-HDFS-Build/16115/artifact/patchprocess/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt
 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-HDFS-Build/16115/testReport/ |
| modules | C: hadoop-hdfs-project/hadoop-hdfs U: 
hadoop-hdfs-project/hadoop-hdfs |
| Console output | 
https://builds.apache.org/job/PreCommit-HDFS-Build/16115/console |
| Powered by | Apache Yetus 0.4.0-SNAPSHOT   http://yetus.apache.org |


This message was automatically generated.



> Expose storage policy apis via HDFSAdmin interface
> --
>
> Key: HDFS-10660
> URL: https://issues.apache.org/jira/browse/HDFS-10660
> Project: Hadoop HDFS
>  Issue Type: Improvement
>Reporter: Rakesh R
>Assignee: Rakesh R
> Attachments: HDFS-10660

[jira] [Commented] (HDFS-10645) Make block report size as a metric and add this metric to datanode web ui

2016-07-20 Thread Akira Ajisaka (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-10645?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15386748#comment-15386748
 ] 

Akira Ajisaka commented on HDFS-10645:
--

Thank you for the update.
# blockReportSizes must be synchronized. I recommend using 
{{Collections.synchronizedSortedSet(new TreeSet<>())}}.
# Would you make blockReportSizes final? That way we can avoid the null check in 
getMaxBlockReportSize (see the sketch after this list).
# Would you use {{!blockReportSizes.isEmpty()}} instead of 
{{blockReportSizes.size() > 0}}?
# In the regression test, would you check the following?

* the max block report size is greater than zero
* the max data length is equal to 64 MB
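
A minimal sketch of the field shape suggested in items 1 and 2 (illustrative only, 
not the actual patch):
{code}
import java.util.Collections;
import java.util.SortedSet;
import java.util.TreeSet;

// Sketch: a final, synchronized sorted set of observed block report sizes,
// with the maximum read from the tail of the set.
class BlockReportSizeTracker {
  private final SortedSet<Long> blockReportSizes =
      Collections.synchronizedSortedSet(new TreeSet<Long>());

  void record(long sizeBytes) {
    blockReportSizes.add(sizeBytes);
  }

  long getMaxBlockReportSize() {
    // The field is final and always initialized, so no null check is needed.
    synchronized (blockReportSizes) {
      return blockReportSizes.isEmpty() ? 0 : blockReportSizes.last();
    }
  }
}
{code}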

> Make block report size as a metric and add this metric to datanode web ui
> -
>
> Key: HDFS-10645
> URL: https://issues.apache.org/jira/browse/HDFS-10645
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: datanode, ui
>Reporter: Yuanbo Liu
>Assignee: Yuanbo Liu
> Attachments: HDFS-10645.001.patch, HDFS-10645.002.patch, 
> HDFS-10645.003.patch, Selection_047.png, Selection_048.png
>
>
> Record block report size as a metric and show it on datanode UI. It's 
> important for administrators to know the bottleneck of  block report, and the 
> metric is also a good tuning metric.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-10660) Expose storage policy apis via HDFSAdmin interface

2016-07-20 Thread Rakesh R (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-10660?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15386756#comment-15386756
 ] 

Rakesh R commented on HDFS-10660:
-

The test case failure is unrelated to the patch; please ignore it.

> Expose storage policy apis via HDFSAdmin interface
> --
>
> Key: HDFS-10660
> URL: https://issues.apache.org/jira/browse/HDFS-10660
> Project: Hadoop HDFS
>  Issue Type: Improvement
>Reporter: Rakesh R
>Assignee: Rakesh R
> Attachments: HDFS-10660-00.patch
>
>
> Presently, {{org.apache.hadoop.hdfs.client.HdfsAdmin.java}} interface has 
> only {{#setStoragePolicy()}} API exposed. This jira is to add the following 
> set of apis into HdfsAdmin.
> {code}
> HdfsAdmin#unsetStoragePolicy
> HdfsAdmin#getStoragePolicy
> HdfsAdmin#getAllStoragePolicies
> {code}
> Thanks [~arpitagarwal] for the offline discussions.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-10645) Make block report size as a metric and add this metric to datanode web ui

2016-07-20 Thread Akira Ajisaka (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-10645?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15386763#comment-15386763
 ] 

Akira Ajisaka commented on HDFS-10645:
--

{code}
// need to keep maxDataLength up-to-date, this is a configurable property.
this.maxDataLength = dn.getConf().getInt(
CommonConfigurationKeys.IPC_MAXIMUM_DATA_LENGTH,
CommonConfigurationKeys.IPC_MAXIMUM_DATA_LENGTH_DEFAULT);
{code}
The parameter is configurable but not reconfigurable, so it's sufficient to set 
it only once, in the constructor.
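
A minimal sketch of reading the value once at construction time (the enclosing 
class is hypothetical; only the configuration keys come from the snippet above):
{code}
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.CommonConfigurationKeys;

// Sketch only: read ipc.maximum.data.length once at construction time, since the
// property is configurable but not reconfigurable while the process is running.
class MaxDataLengthHolder {
  private final int maxDataLength;

  MaxDataLengthHolder(Configuration conf) {
    this.maxDataLength = conf.getInt(
        CommonConfigurationKeys.IPC_MAXIMUM_DATA_LENGTH,
        CommonConfigurationKeys.IPC_MAXIMUM_DATA_LENGTH_DEFAULT);
  }

  int getMaxDataLength() {
    return maxDataLength;
  }
}
{code}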

> Make block report size as a metric and add this metric to datanode web ui
> -
>
> Key: HDFS-10645
> URL: https://issues.apache.org/jira/browse/HDFS-10645
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: datanode, ui
>Reporter: Yuanbo Liu
>Assignee: Yuanbo Liu
> Attachments: HDFS-10645.001.patch, HDFS-10645.002.patch, 
> HDFS-10645.003.patch, Selection_047.png, Selection_048.png
>
>
> Record block report size as a metric and show it on datanode UI. It's 
> important for administrators to know the bottleneck of  block report, and the 
> metric is also a good tuning metric.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-6962) ACLs inheritance conflict with umaskmode

2016-07-20 Thread Lei (Eddy) Xu (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-6962?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15386767#comment-15386767
 ] 

Lei (Eddy) Xu commented on HDFS-6962:
-

Hi [~jzhuge], thanks for providing the patch:

* {{FSDirectory.copyINodeDefaultACL}} should be private.
* Is it true that after creation, the createMode is not useful for the INode, as 
its mode has already been established? I feel that we do not need to store it 
as a feature on the INode.
* In {{copyINodeDefaultAcl()}}:
{code}

if (posixAclInheritanceEnabled && modes != null) {
  //
  // HDFS-6962: POSIX ACL inheritance
  //
  child.setPermission(modes.getUnmasked());
  if (!AclStorage.copyINodeDefaultAcl(child)) {
if (LOG.isDebugEnabled()) {
  LOG.debug("{}: no parent default ACL to inherit", child);
}
child.setPermission(modes.getMasked());
child.removeCreateModes();
  }
}
{code}

If the client sends the INode with {{CreateMode}} but 
{{posixAclInheritanceEnabled=false}}, the feature is not removed and thus 
consumes more space? The same applies when {{AclStorage.copyINodeDefaultAcl()}} 
returns true.

* Can we use {{FsPermission.getUnmasked() == null}} instead of the 
{{CreateModeFeature}}-related methods in INode? That would keep the {{INode}} 
interface unchanged.


> ACLs inheritance conflict with umaskmode
> 
>
> Key: HDFS-6962
> URL: https://issues.apache.org/jira/browse/HDFS-6962
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: security
>Affects Versions: 2.4.1
> Environment: CentOS release 6.5 (Final)
>Reporter: LINTE
>Assignee: John Zhuge
>Priority: Critical
>  Labels: hadoop, security
> Attachments: HDFS-6962.001.patch, HDFS-6962.002.patch, 
> HDFS-6962.003.patch, HDFS-6962.004.patch, HDFS-6962.005.patch, 
> HDFS-6962.006.patch, HDFS-6962.007.patch, HDFS-6962.1.patch, 
> disabled_new_client.log, disabled_old_client.log, enabled_new_client.log, 
> enabled_old_client.log, run
>
>
> In hdfs-site.xml 
> 
> dfs.umaskmode
> 027
> 
> 1/ Create a directory as superuser
> bash# hdfs dfs -mkdir  /tmp/ACLS
> 2/ set default ACLs on this directory rwx access for group readwrite and user 
> toto
> bash# hdfs dfs -setfacl -m default:group:readwrite:rwx /tmp/ACLS
> bash# hdfs dfs -setfacl -m default:user:toto:rwx /tmp/ACLS
> 3/ check ACLs /tmp/ACLS/
> bash# hdfs dfs -getfacl /tmp/ACLS/
> # file: /tmp/ACLS
> # owner: hdfs
> # group: hadoop
> user::rwx
> group::r-x
> other::---
> default:user::rwx
> default:user:toto:rwx
> default:group::r-x
> default:group:readwrite:rwx
> default:mask::rwx
> default:other::---
> user::rwx | group::r-x | other::--- matches the umaskmode defined in 
> hdfs-site.xml, everything is OK!
> default:group:readwrite:rwx allows the readwrite group rwx access for 
> inheritance.
> default:user:toto:rwx allows the toto user rwx access for inheritance.
> default:mask::rwx the inheritance mask is rwx, so no mask
> 4/ Create a subdir to test inheritance of ACL
> bash# hdfs dfs -mkdir  /tmp/ACLS/hdfs
> 5/ check ACLs /tmp/ACLS/hdfs
> bash# hdfs dfs -getfacl /tmp/ACLS/hdfs
> # file: /tmp/ACLS/hdfs
> # owner: hdfs
> # group: hadoop
> user::rwx
> user:toto:rwx   #effective:r-x
> group::r-x
> group:readwrite:rwx #effective:r-x
> mask::r-x
> other::---
> default:user::rwx
> default:user:toto:rwx
> default:group::r-x
> default:group:readwrite:rwx
> default:mask::rwx
> default:other::---
> Here we can see that the readwrite group has an rwx ACL but only r-x is effective 
> because the mask is r-x (mask::r-x), even though the default mask for inheritance 
> is set to default:mask::rwx on /tmp/ACLS/
> 6/ Modify hdfs-site.xml and restart the namenode
> 
> dfs.umaskmode
> 010
> 
> 7/ Create a subdir to test inheritance of ACL with new parameter umaskmode
> bash# hdfs dfs -mkdir  /tmp/ACLS/hdfs2
> 8/ Check ACL on /tmp/ACLS/hdfs2
> bash# hdfs dfs -getfacl /tmp/ACLS/hdfs2
> # file: /tmp/ACLS/hdfs2
> # owner: hdfs
> # group: hadoop
> user::rwx
> user:toto:rwx   #effective:rw-
> group::r-x  #effective:r--
> group:readwrite:rwx #effective:rw-
> mask::rw-
> other::---
> default:user::rwx
> default:user:toto:rwx
> default:group::r-x
> default:group:readwrite:rwx
> default:mask::rwx
> default:other::---
> So HDFS masks the ACL values (user, group and other -- except the POSIX 
> owner --) with the group mask of the dfs.umaskmode property when creating a 
> directory with an inherited ACL.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-10584) Allow long-running Mover tool to login with keytab

2016-07-20 Thread Zhe Zhang (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-10584?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15386773#comment-15386773
 ] 

Zhe Zhang commented on HDFS-10584:
--

Thanks Rakesh. The patch looks good overall (pretty much the same approach as 
in Balancer). I'll post a full review soon. I'm also thinking about whether it's 
possible to consolidate this logic with the Balancer keytab logic.
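
For context, a rough sketch of the Balancer-style keytab-login pattern being 
discussed (the configuration keys below are placeholders for illustration, not 
the keys the patch introduces):
{code}
import java.io.IOException;
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.security.SecurityUtil;
import org.apache.hadoop.security.UserGroupInformation;

// Sketch only: log in from a keytab at startup so a long-running tool can
// re-authenticate after ticket expiration.
class MoverKeytabLoginSketch {
  static void maybeLogin(Configuration conf, String hostname) throws IOException {
    if (conf.getBoolean("dfs.mover.keytab.enabled", false)) {        // placeholder key
      UserGroupInformation.setConfiguration(conf);
      SecurityUtil.login(conf, "dfs.mover.keytab.file",              // placeholder key
          "dfs.mover.kerberos.principal", hostname);                 // placeholder key
    }
  }
}
{code}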

> Allow long-running Mover tool to login with keytab
> --
>
> Key: HDFS-10584
> URL: https://issues.apache.org/jira/browse/HDFS-10584
> Project: Hadoop HDFS
>  Issue Type: New Feature
>  Components: balancer & mover
>Reporter: Rakesh R
>Assignee: Rakesh R
> Attachments: HDFS-10584-00.patch, HDFS-10584-01.patch
>
>
> The idea of this jira is to support {{mover}} tool the ability to login from 
> a keytab. That way, the RPC client would re-login from the keytab after 
> expiration, which means the process could remain authenticated indefinitely. 
> With some people wanting to run mover non-stop in "daemon mode", that might 
> be a reasonable feature to add. Recently balancer has been enhanced using 
> this feature.
> Thanks [~zhz] for the offline discussions.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-10301) BlockReport retransmissions may lead to storages falsely being declared zombie if storage report processing happens out of order

2016-07-20 Thread Andrew Wang (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-10301?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15386782#comment-15386782
 ] 

Andrew Wang commented on HDFS-10301:


My understanding of PB is that we have a fixed 4 bits for tags, so there isn't 
really overhead to adding more PB fields as long as they are optional or 
repeated. See: https://developers.google.com/protocol-buffers/docs/encoding

Given that, I'd err on the side of readability rather than trying to reuse 
existing fields. Since block reports are a pretty infrequent operation, I 
wouldn't stress over a few bytes if we end up filling a required field with a 
dummy value. I agree with Colin that the current overloading of 
BlockListAsLongs is confusing.
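
A small worked example of the tag arithmetic from the encoding doc linked above 
(the field number is hypothetical):
{code}
// tag = (field_number << 3) | wire_type, varint-encoded, so field numbers 1-15
// fit in a single byte, and an absent optional field costs nothing on the wire.
public class TagByteExample {
  public static void main(String[] args) {
    int fieldNumber = 7;     // hypothetical new optional field number
    int wireTypeVarint = 0;  // wire type 0 = varint
    int tag = (fieldNumber << 3) | wireTypeVarint;
    System.out.printf("tag byte = 0x%02x (%d)%n", tag, tag);  // prints 0x38 (56)
  }
}
{code}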

> BlockReport retransmissions may lead to storages falsely being declared 
> zombie if storage report processing happens out of order
> 
>
> Key: HDFS-10301
> URL: https://issues.apache.org/jira/browse/HDFS-10301
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: namenode
>Affects Versions: 2.6.1
>Reporter: Konstantin Shvachko
>Assignee: Vinitha Reddy Gankidi
>Priority: Critical
> Attachments: HDFS-10301.002.patch, HDFS-10301.003.patch, 
> HDFS-10301.004.patch, HDFS-10301.005.patch, HDFS-10301.006.patch, 
> HDFS-10301.007.patch, HDFS-10301.008.patch, HDFS-10301.009.patch, 
> HDFS-10301.01.patch, HDFS-10301.010.patch, HDFS-10301.011.patch, 
> HDFS-10301.sample.patch, zombieStorageLogs.rtf
>
>
> When NameNode is busy a DataNode can timeout sending a block report. Then it 
> sends the block report again. Then NameNode while process these two reports 
> at the same time can interleave processing storages from different reports. 
> This screws up the blockReportId field, which makes NameNode think that some 
> storages are zombie. Replicas from zombie storages are immediately removed, 
> causing missing blocks.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-10664) layoutVersion mismatch between Namenode VERSION file and Journalnode VERSION file after cluster upgrade

2016-07-20 Thread Arpit Agarwal (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-10664?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15386791#comment-15386791
 ] 

Arpit Agarwal commented on HDFS-10664:
--

Yeah, I have come across this too. It is a harmless condition, but it is 
confusing to administrators.

This would be a good fix to have.

> layoutVersion mismatch between Namenode VERSION file and Journalnode VERSION 
> file after cluster upgrade
> ---
>
> Key: HDFS-10664
> URL: https://issues.apache.org/jira/browse/HDFS-10664
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: ha, hdfs
>Affects Versions: 2.7.1
>Reporter: Amit Anand
>
> After a cluster is upgraded I see a mismatch in {{layoutVersion}} between NN 
> VERSION file and JN VERSION file.
> Here is what I see:
> Before cluster upgrade:
> ==
> {code}
> ## Version file from NN current directory
> namespaceID=109645726
> clusterID=CID-edcb62c5-bc1f-49f5-addb-37827340b5de
> cTime=0
> storageType=NAME_NODE
> blockpoolID=BP-786201894-10.0.100.11-1466026941507
> layoutVersion=-60
> {code}
> {code}
> ## Version file from JN current directory
> namespaceID=109645726
> clusterID=CID-edcb62c5-bc1f-49f5-addb-37827340b5de
> cTime=0
> storageType=JOURNAL_NODE
> layoutVersion=-60
> {code}
> After cluster upgrade:
> =
> {code}
> ## Version file from NN current directory
> namespaceID=109645726
> clusterID=CID-edcb62c5-bc1f-49f5-addb-37827340b5de
> cTime=0
> storageType=NAME_NODE
> blockpoolID=BP-786201894-10.0.100.11-1466026941507
> layoutVersion=-63
> {code}
> {code}
> ## Version file from JN current directory
> namespaceID=109645726
> clusterID=CID-edcb62c5-bc1f-49f5-addb-37827340b5de
> cTime=0
> storageType=JOURNAL_NODE
> layoutVersion=-60
> {code}
> Since {{Namenode}} is what creates {{Journalnode}} {{VERSION}} file during 
> {{initializeSharedEdits}}, it should also update the file with correct 
> information after the cluster is upgraded and {{hdfs dfsadmin 
> -finalizeUpgrade}} has been executed.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-10652) Add a unit test for HDFS-4660

2016-07-20 Thread Yongjun Zhang (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-10652?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15386792#comment-15386792
 ] 

Yongjun Zhang commented on HDFS-10652:
--

Thanks a lot for the updated rev [~vinayrpet]!

I'm uploading an updated rev (003) for ease of commenting:

1. In the patch, we inject an error when the condition (ackSize % 512 > 0 && 
ackSize < diskSize) is detected at the second DN; the third DN is then dropped 
and a new DN is added. In pipeline recovery, when we do the block transfer to 
the new DN, the copy source may be either the first or the second DN; it is not 
deterministic in this test. I ran quite a few rounds of the test and saw that 
using either one as the source works fine, demonstrating the same ackSize and 
diskSize satisfying (ackSize % 512 > 0 && ackSize < diskSize), so this is good. 
However, I'd like to point this out, since in the real case the copy source is 
the DN that satisfies the above condition.

2. I replaced the hardcoded numbers with named constants:
{code}
final int CHUNK_SIZE = 512;
final int ONE_WRITE_SIZE = 5000;
final int TOTAL_SIZE = 2 * 1024 * 1024;
final int ERROR_INJECTION_LOC = TOTAL_SIZE / 2;
{code}
I thought {{TOTAL_SIZE}} doesn't have to be 2MB and {{ERROR_INJECTION_LOC}} 
doesn't have to be half the total size, so I made the following change
{code}
final int CHUNK_SIZE = 512;
final int ONE_WRITE_SIZE = 5000;
final int TOTAL_SIZE = 1024 * 1024;
final int ERROR_INJECTION_LOC = 512;
{code}
and that does work too.

Would you please take a look? 

Thanks.
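
As a side note, here is a rough sketch (illustrative only, not the test patch 
itself) of the write pattern these constants describe, assuming a {{FileSystem}} 
obtained from a {{MiniDFSCluster}}; each flush produces packets whose acks move 
{{ackSize}} forward, which is what lets the injected (ackSize % 512 > 0 && 
ackSize < diskSize) condition fire:
{code}
import java.util.Random;
import org.apache.hadoop.fs.FSDataOutputStream;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

class WriteSketch {
  // Write TOTAL_SIZE bytes in ONE_WRITE_SIZE chunks over a 3-DN pipeline.
  static void writeTestFile(FileSystem fs, Path p) throws Exception {
    final int ONE_WRITE_SIZE = 5000;
    final int TOTAL_SIZE = 1024 * 1024;
    byte[] buf = new byte[ONE_WRITE_SIZE];
    new Random().nextBytes(buf);
    try (FSDataOutputStream out = fs.create(p, (short) 3)) {
      for (int written = 0; written < TOTAL_SIZE; written += ONE_WRITE_SIZE) {
        out.write(buf, 0, Math.min(ONE_WRITE_SIZE, TOTAL_SIZE - written));
        out.hflush(); // flush so acks advance past ERROR_INJECTION_LOC
      }
    }
  }
}
{code}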




> Add a unit test for HDFS-4660
> -
>
> Key: HDFS-10652
> URL: https://issues.apache.org/jira/browse/HDFS-10652
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: datanode, hdfs
>Reporter: Yongjun Zhang
>Assignee: Vinayakumar B
> Attachments: HDFS-10652-002.patch, HDFS-10652.001.patch, 
> HDFS-10652.003.patch
>
>




--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-10652) Add a unit test for HDFS-4660

2016-07-20 Thread Yongjun Zhang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-10652?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Yongjun Zhang updated HDFS-10652:
-
Attachment: HDFS-10652.003.patch

> Add a unit test for HDFS-4660
> -
>
> Key: HDFS-10652
> URL: https://issues.apache.org/jira/browse/HDFS-10652
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: datanode, hdfs
>Reporter: Yongjun Zhang
>Assignee: Vinayakumar B
> Attachments: HDFS-10652-002.patch, HDFS-10652.001.patch, 
> HDFS-10652.003.patch
>
>




--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-10652) Add a unit test for HDFS-4660

2016-07-20 Thread Yongjun Zhang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-10652?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Yongjun Zhang updated HDFS-10652:
-
Attachment: (was: HDFS-10652.003.patch)

> Add a unit test for HDFS-4660
> -
>
> Key: HDFS-10652
> URL: https://issues.apache.org/jira/browse/HDFS-10652
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: datanode, hdfs
>Reporter: Yongjun Zhang
>Assignee: Vinayakumar B
> Attachments: HDFS-10652-002.patch, HDFS-10652.001.patch
>
>




--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-10652) Add a unit test for HDFS-4660

2016-07-20 Thread Yongjun Zhang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-10652?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Yongjun Zhang updated HDFS-10652:
-
Attachment: HDFS-10652.003.patch

> Add a unit test for HDFS-4660
> -
>
> Key: HDFS-10652
> URL: https://issues.apache.org/jira/browse/HDFS-10652
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: datanode, hdfs
>Reporter: Yongjun Zhang
>Assignee: Vinayakumar B
> Attachments: HDFS-10652-002.patch, HDFS-10652.001.patch, 
> HDFS-10652.003.patch
>
>




--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Comment Edited] (HDFS-10652) Add a unit test for HDFS-4660

2016-07-20 Thread Yongjun Zhang (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-10652?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15386792#comment-15386792
 ] 

Yongjun Zhang edited comment on HDFS-10652 at 7/20/16 11:29 PM:


Thanks a lot for the updated rev [~vinayrpet]!

I'm uploading an updated rev (003) for ease of commenting:

1. In the patch, we inject an error when the condition (ackSize % 512 > 0 && 
ackSize < diskSize) is detected at the second DN; the third DN is then dropped 
and a new DN is added. In pipeline recovery, when we do the block transfer to 
the new DN, the copy source may be either the first or the second DN; it is not 
deterministic in this test. I ran quite a few rounds of the test and saw that 
using either one as the source works fine; both nodes have the same ackSize and 
diskSize satisfying (ackSize % 512 > 0 && ackSize < diskSize), so this is good. 

However, I'd like to point this out, since in the real case that we examined, 
the copy source is the DN that satisfies the above mentioned condition. 

2. I replaced the hardcoded numbers with named constants:
{code}
final int CHUNK_SIZE = 512;
final int ONE_WRITE_SIZE = 5000;
final int TOTAL_SIZE = 2 * 1024 * 1024;
final int ERROR_INJECTION_LOC = TOTAL_SIZE / 2;
{code}
I thought {{TOTAL_SIZE}} doesn't have to be 2MB and {{ERROR_INJECTION_LOC}} 
doesn't have to be half the total size, so I made the following change
{code}
final int CHUNK_SIZE = 512;
final int ONE_WRITE_SIZE = 5000;
final int TOTAL_SIZE = 1024 * 1024;
final int ERROR_INJECTION_LOC = 512;
{code}
and that does work too.

Would you please take a look? 

Thanks.





was (Author: yzhangal):
Thanks a lot for the updated rev [~vinayrpet]!

I'm uploading an updated rev (003) for ease of commenting:

1. In the patch, we inject an error when the condition (ackSize % 512 > 0 && 
ackSize < diskSize) is detected at the second DN; the third DN is then dropped 
and a new DN is added. In pipeline recovery, when we do the block transfer to 
the new DN, the copy source may be either the first or the second DN; it is not 
deterministic in this test. I ran quite a few rounds of the test and saw that 
using either one as the source works fine, demonstrating the same ackSize and 
diskSize satisfying (ackSize % 512 > 0 && ackSize < diskSize), so this is good. 
However, I'd like to point this out, since in the real case the copy source is 
the DN that satisfies the above condition.

2. I replaced the hardcoded numbers with named constants:
{code}
final int CHUNK_SIZE = 512;
final int ONE_WRITE_SIZE = 5000;
final int TOTAL_SIZE = 2 * 1024 * 1024;
final int ERROR_INJECTION_LOC = TOTAL_SIZE / 2;
{code}
I thought {{TOTAL_SIZE}} doesn't have to be 2MB and {{ERROR_INJECTION_LOC}} 
doesn't have to be half the total size, so I made the following change
{code}
final int CHUNK_SIZE = 512;
final int ONE_WRITE_SIZE = 5000;
final int TOTAL_SIZE = 1024 * 1024;
final int ERROR_INJECTION_LOC = 512;
{code}
and that does work too.

Would you please take a look? 

Thanks.




> Add a unit test for HDFS-4660
> -
>
> Key: HDFS-10652
> URL: https://issues.apache.org/jira/browse/HDFS-10652
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: datanode, hdfs
>Reporter: Yongjun Zhang
>Assignee: Vinayakumar B
> Attachments: HDFS-10652-002.patch, HDFS-10652.001.patch, 
> HDFS-10652.003.patch
>
>




--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-10519) Add a configuration option to enable in-progress edit log tailing

2016-07-20 Thread Andrew Wang (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-10519?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15386804#comment-15386804
 ] 

Andrew Wang commented on HDFS-10519:


Thanks for revving, Jiayi. Overall this looks good; mostly just nits now. I think 
the next rev should do it.

* in DFSConfigKeys, let's group the new config key with the other 
"dfs.ha.tail-edits" keys
* Regarding {{isBoundedByDurableTxId}}, maybe shorten it to {{onlyDurableTxns}}? 
We should also update the javadoc for JournalSet#selectInputStreams to describe 
the new boolean. It would be good to explain what a "durable TxId" means and how 
tight the bound is (it can be conservative).
* Nit: in QuorumJournalManager, we already assigned {{remoteLog.getEditTxId()}} 
to endTxId. So we can re-use {{endTxId}} when doing Math.min.
* In QuorumOutputStream, I recommend adding the new boolean at the end of the 
parameter list; we generally try to put flags/options at the end.
* In RemoteEditLogManifest, the manifest might not have any logs, in which case 
the {{logs.get(0)}} check will fail (see the sketch below).

I would like to think about those randomized tests more, but yea that can 
happen in another JIRA.
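
On the {{RemoteEditLogManifest}} point, here is a minimal sketch of the kind of 
guard I mean (illustrative only; {{manifest}} and the surrounding method are 
placeholders, not the actual patch):
{code}
// Guard against an empty manifest before touching logs.get(0).
List<RemoteEditLog> logs = manifest.getLogs();
if (logs == null || logs.isEmpty()) {
  return; // nothing to validate for an empty manifest
}
long firstStartTxId = logs.get(0).getStartTxId();
// ... existing validation over the remaining logs continues here ...
{code}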

> Add a configuration option to enable in-progress edit log tailing
> -
>
> Key: HDFS-10519
> URL: https://issues.apache.org/jira/browse/HDFS-10519
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: ha
>Reporter: Jiayi Zhou
>Assignee: Jiayi Zhou
>Priority: Minor
> Attachments: HDFS-10519.001.patch, HDFS-10519.002.patch, 
> HDFS-10519.003.patch, HDFS-10519.004.patch, HDFS-10519.005.patch, 
> HDFS-10519.006.patch, HDFS-10519.007.patch, HDFS-10519.008.patch
>
>
> Standby Namenode has the option to do in-progress edit log tailing to improve 
> the data freshness. In-progress tailing is already implemented, but it's not 
> enabled as default configuration. And there's no related configuration key to 
> turn it on.
> Adding a related configuration key to let Standby Namenode is reasonable and 
> would be a basis for further improvement on Standby Namenode.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Created] (HDFS-10665) Provide a way to add a new Journalnode to an existing quorum

2016-07-20 Thread Amit Anand (JIRA)
Amit Anand created HDFS-10665:
-

 Summary: Provide a way to add a new Journalnode to an existing 
quorum
 Key: HDFS-10665
 URL: https://issues.apache.org/jira/browse/HDFS-10665
 Project: Hadoop HDFS
  Issue Type: New Feature
  Components: ha, hdfs, journal-node
Reporter: Amit Anand


In the current implementation of {{HDFS}} {{HA}} using {{QJOURNAL}} there is no 
way to add a new {{Journalnode (JN)}} to an existing {{JN}} quorum or to 
reinstall a failed {{JN}} machine.

The current process to populate {{JN}} directories is:
* Start {{JN}} daemons on multiple machines (usually an odd number 3 or 5)
* Shutdown {{Namenode}}
* Issue {{hdfs namenode -initializeSharedEdits}} - this populates the {{JN}} directories

After the {{JN}} directories are populated, if a machine is reinstalled after a 
hardware failure, or a new set of machines is added to expand the {{JN}} quorum, 
the new {{JN}} machines will not be populated by the {{NameNode}} without 
repeating the process described above. 

This causes downtime on a cluster that operates 24x7 whenever a {{JN}} needs 
maintenance. 

One can, however, follow the steps below to work around the issue described 
above:
1. Install a new {{JN}} or reinstall an existing {{JN}} machine.
2. Create the required {{JN}} directory structure.
3. Copy the {{VERSION}} file from an existing {{JN}} to the new {{JN's}} 
{{current}} directory.
4. Manually create the {{paxos}} directory under the {{JN's}} {{current}} 
directory.
5. Start the {{JN}} daemon.
6. Add the new set of {{JNs}} to {{hdfs-site.xml}} and restart the {{NN}}.





--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-10659) Namenode crashes after Journalnode re-installation in an HA cluster due to missing paxos directory

2016-07-20 Thread Amit Anand (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-10659?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Amit Anand updated HDFS-10659:
--
Description: 
In my environment I am seeing {{Namenodes}} crash after a majority of the 
{{Journalnodes}} are re-installed. We manage multiple clusters and do rolling 
upgrades followed by a rolling re-install of each node, including the master 
(NN, JN, RM, ZK) nodes. When a journal node is re-installed or moved to a new 
disk/host, instead of running the {{"initializeSharedEdits"}} command, I copy 
the {{VERSION}} file from one of the other {{Journalnodes}}, which allows my 
{{NN}} to start writing data to the newly installed {{Journalnode}}.

To achieve quorum for the JNs and to recover unfinalized segments, the NN during 
startup creates .tmp files under the {{"/jn/current/paxos"}} directory. In the 
current implementation the "paxos" directory is only created by the 
{{"initializeSharedEdits"}} command, and if a JN is re-installed the "paxos" 
directory is not created on JN startup or by the NN while writing the .tmp 
files, which causes the NN to crash with the following error message:

{code}
192.168.100.16:8485: /disk/1/dfs/jn/Test-Laptop/current/paxos/64044.tmp (No 
such file or directory)
at java.io.FileOutputStream.open(Native Method)
at java.io.FileOutputStream.<init>(FileOutputStream.java:221)
at java.io.FileOutputStream.<init>(FileOutputStream.java:171)
at 
org.apache.hadoop.hdfs.util.AtomicFileOutputStream.<init>(AtomicFileOutputStream.java:58)
at 
org.apache.hadoop.hdfs.qjournal.server.Journal.persistPaxosData(Journal.java:971)
at 
org.apache.hadoop.hdfs.qjournal.server.Journal.acceptRecovery(Journal.java:846)
at 
org.apache.hadoop.hdfs.qjournal.server.JournalNodeRpcServer.acceptRecovery(JournalNodeRpcServer.java:205)
at 
org.apache.hadoop.hdfs.qjournal.protocolPB.QJournalProtocolServerSideTranslatorPB.acceptRecovery(QJournalProtocolServerSideTranslatorPB.java:249)
at 
org.apache.hadoop.hdfs.qjournal.protocol.QJournalProtocolProtos$QJournalProtocolService$2.callBlockingMethod(QJournalProtocolProtos.java:25435)
at 
org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:616)
at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:969)
at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2151)
at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2147)
at java.security.AccessController.doPrivileged(Native Method)
at javax.security.auth.Subject.doAs(Subject.java:415)
at 
org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1657)
at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2145)
{code}

The current 
[getPaxosFile|https://github.com/apache/hadoop/blob/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/qjournal/server/JNStorage.java#L128-L130]
 method simply returns a path to a file under the "paxos" directory without 
verifying that the directory exists. Since the "paxos" directory holds files 
that are required for NN recovery and for achieving JN quorum, my proposed 
solution is to add a check to the "getPaxosFile" method and create the 
{{"paxos"}} directory if it is missing.

  was:
In my environment I am seeing {{Namenodes}} crash after {{Journalnodes}} are 
re-installed. We manage multiple clusters and do rolling upgrades followed by a 
rolling re-install of each node, including the master (NN, JN, RM, ZK) nodes. 
When a journal node is re-installed or moved to a new disk/host, instead of 
running the {{"initializeSharedEdits"}} command, I copy the {{VERSION}} file 
from one of the other {{Journalnodes}}, which allows my {{NN}} to start writing 
data to the newly installed {{Journalnode}}.

To achieve quorum for the JNs and to recover unfinalized segments, the NN during 
startup creates .tmp files under the {{"/jn/current/paxos"}} directory. In the 
current implementation the "paxos" directory is only created by the 
{{"initializeSharedEdits"}} command, and if a JN is re-installed the "paxos" 
directory is not created on JN startup or by the NN while writing the .tmp 
files, which causes the NN to crash with the following error message:

{code}
192.168.100.16:8485: /disk/1/dfs/jn/Test-Laptop/current/paxos/64044.tmp (No 
such file or directory)
at java.io.FileOutputStream.open(Native Method)
at java.io.FileOutputStream.<init>(FileOutputStream.java:221)
at java.io.FileOutputStream.<init>(FileOutputStream.java:171)
at 
org.apache.hadoop.hdfs.util.AtomicFileOutputStream.<init>(AtomicFileOutputStream.java:58)
at 
org.apache.hadoop.hdfs.qjournal.server.Journal.persistPaxosData(Journal.java:971)
at 
org.apache.hadoop.hdfs.qjournal.server.Journal.acceptRecovery(Journal.java:846)
at 
org.apache.hadoop.hdfs.qjournal.server.JournalNodeRpcServer.acceptRecovery(JournalNodeRpcServer.java:205)
at 
org.apache.hadoop.hdfs.qjournal.protocolPB.QJournalProtocolServerSide

[jira] [Commented] (HDFS-10654) Move building of httpfs dependency analysis under "docs" profile

2016-07-20 Thread Aaron T. Myers (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-10654?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15386843#comment-15386843
 ] 

Aaron T. Myers commented on HDFS-10654:
---

Patch looks good to me. +1 pending Jenkins.

Thanks, Andrew.

> Move building of httpfs dependency analysis under "docs" profile
> 
>
> Key: HDFS-10654
> URL: https://issues.apache.org/jira/browse/HDFS-10654
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: build, httpfs
>Affects Versions: 2.6.4
>Reporter: Andrew Wang
>Assignee: Andrew Wang
>Priority: Minor
> Attachments: HDFS-10654.001.patch
>
>
> When built with "-Pdist" but not "-Pdocs", httpfs still generates a 
> share/docs directory since the dependency report is run unconditionally. 
> Let's move it under the "docs" profile like the rest of the site.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-10301) BlockReport retransmissions may lead to storages falsely being declared zombie if storage report processing happens out of order

2016-07-20 Thread Konstantin Shvachko (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-10301?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15386845#comment-15386845
 ] 

Konstantin Shvachko commented on HDFS-10301:


As I commented earlier, I am not in favor of adding redundant fields. The 
readability argument is also quite questionable, because you end up either 
filling in the storage information in two fields, or sending it in different 
fields for different types of block report messages.
In more detail:
- Suppose we introduce {{repeated String allStorageIds}}.
- In a full report (which is not split into multiple RPCs) we already have all 
storage IDs listed in the StorageBlockReports, so we don't need 
{{allStorageIds}} (see the sketch below). If we nevertheless fill 
{{allStorageIds}}, it will be confusing.
- In a report that is split into multiple RPCs we fill {{allStorageIds}}, 
because only one storage is reported. So in this case we would use a different 
field to pass the storage IDs.
- I think code is more _readable_ when the same information is passed via the 
same fields and is not duplicated.
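
To make the second bullet concrete, here is a hedged sketch of how the receiving 
side can already derive the reported storage IDs from a full (non-split) report 
without an extra field; the class and method names here are illustrative, while 
{{StorageBlockReport}} is the existing protocol class:
{code}
import java.util.HashSet;
import java.util.Set;
import org.apache.hadoop.hdfs.server.protocol.StorageBlockReport;

class ReportedStorages {
  // For a full block report sent in a single RPC, the storage IDs are already
  // implied by the per-storage reports themselves.
  static Set<String> reportedStorageIds(StorageBlockReport[] reports) {
    Set<String> ids = new HashSet<>();
    for (StorageBlockReport r : reports) {
      ids.add(r.getStorage().getStorageID());
    }
    // Storages of the DN that are absent from this set would be the candidates
    // for zombie handling.
    return ids;
  }
}
{code}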

> BlockReport retransmissions may lead to storages falsely being declared 
> zombie if storage report processing happens out of order
> 
>
> Key: HDFS-10301
> URL: https://issues.apache.org/jira/browse/HDFS-10301
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: namenode
>Affects Versions: 2.6.1
>Reporter: Konstantin Shvachko
>Assignee: Vinitha Reddy Gankidi
>Priority: Critical
> Attachments: HDFS-10301.002.patch, HDFS-10301.003.patch, 
> HDFS-10301.004.patch, HDFS-10301.005.patch, HDFS-10301.006.patch, 
> HDFS-10301.007.patch, HDFS-10301.008.patch, HDFS-10301.009.patch, 
> HDFS-10301.01.patch, HDFS-10301.010.patch, HDFS-10301.011.patch, 
> HDFS-10301.sample.patch, zombieStorageLogs.rtf
>
>
> When the NameNode is busy, a DataNode can time out sending a block report and 
> then sends the report again. While processing these two reports at the same 
> time, the NameNode can interleave processing of storages from different 
> reports. This corrupts the blockReportId field, which makes the NameNode think 
> that some storages are zombies. Replicas from zombie storages are immediately 
> removed, causing missing blocks.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-10659) Namenode crashes after Journalnode re-installation in an HA cluster due to missing paxos directory

2016-07-20 Thread Jing Zhao (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-10659?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15386863#comment-15386863
 ] 

Jing Zhao commented on HDFS-10659:
--

I think we do not need to manually recreate the "current" directory or copy the 
version file here. After restarting JN1, and before shutting down JN2, try 
rolling the editlog segment (dfsadmin -rollEdits). In this way, every JN will 
have a new segment and JN1 will work fine in the protocol. Then shutting down 
JN2 should be fine.

> Namenode crashes after Journalnode re-installation in an HA cluster due to 
> missing paxos directory
> --
>
> Key: HDFS-10659
> URL: https://issues.apache.org/jira/browse/HDFS-10659
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: ha, journal-node
>Affects Versions: 2.7.1
>Reporter: Amit Anand
>
> In my environment I am seeing {{Namenodes}} crash after a majority of the 
> {{Journalnodes}} are re-installed. We manage multiple clusters and do rolling 
> upgrades followed by a rolling re-install of each node, including the master 
> (NN, JN, RM, ZK) nodes. When a journal node is re-installed or moved to a new 
> disk/host, instead of running the {{"initializeSharedEdits"}} command, I copy 
> the {{VERSION}} file from one of the other {{Journalnodes}}, which allows my 
> {{NN}} to start writing data to the newly installed {{Journalnode}}.
> To achieve quorum for the JNs and to recover unfinalized segments, the NN 
> during startup creates .tmp files under the {{"/jn/current/paxos"}} directory. 
> In the current implementation the "paxos" directory is only created by the 
> {{"initializeSharedEdits"}} command, and if a JN is re-installed the "paxos" 
> directory is not created on JN startup or by the NN while writing the .tmp 
> files, which causes the NN to crash with the following error message:
> {code}
> 192.168.100.16:8485: /disk/1/dfs/jn/Test-Laptop/current/paxos/64044.tmp (No 
> such file or directory)
> at java.io.FileOutputStream.open(Native Method)
> at java.io.FileOutputStream.<init>(FileOutputStream.java:221)
> at java.io.FileOutputStream.<init>(FileOutputStream.java:171)
> at 
> org.apache.hadoop.hdfs.util.AtomicFileOutputStream.<init>(AtomicFileOutputStream.java:58)
> at 
> org.apache.hadoop.hdfs.qjournal.server.Journal.persistPaxosData(Journal.java:971)
> at 
> org.apache.hadoop.hdfs.qjournal.server.Journal.acceptRecovery(Journal.java:846)
> at 
> org.apache.hadoop.hdfs.qjournal.server.JournalNodeRpcServer.acceptRecovery(JournalNodeRpcServer.java:205)
> at 
> org.apache.hadoop.hdfs.qjournal.protocolPB.QJournalProtocolServerSideTranslatorPB.acceptRecovery(QJournalProtocolServerSideTranslatorPB.java:249)
> at 
> org.apache.hadoop.hdfs.qjournal.protocol.QJournalProtocolProtos$QJournalProtocolService$2.callBlockingMethod(QJournalProtocolProtos.java:25435)
> at 
> org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:616)
> at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:969)
> at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2151)
> at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2147)
> at java.security.AccessController.doPrivileged(Native Method)
> at javax.security.auth.Subject.doAs(Subject.java:415)
> at 
> org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1657)
> at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2145)
> {code}
> The current 
> [getPaxosFile|https://github.com/apache/hadoop/blob/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/qjournal/server/JNStorage.java#L128-L130]
>  method simply returns a path to a file under the "paxos" directory without 
> verifying that the directory exists. Since the "paxos" directory holds files 
> that are required for NN recovery and for achieving JN quorum, my proposed 
> solution is to add a check to the "getPaxosFile" method and create the 
> {{"paxos"}} directory if it is missing.
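
For reference, here is a rough sketch of the check proposed in the quoted 
description (illustrative only, not an actual patch; {{sd}} is assumed to be the 
{{StorageDirectory}} field that {{JNStorage}} already holds):
{code}
// Hypothetical variant of JNStorage#getPaxosFile that creates the paxos
// directory if it is missing instead of assuming initializeSharedEdits ran.
File getPaxosFile(long segmentTxId) {
  File paxosDir = new File(sd.getCurrentDir(), "paxos");
  if (!paxosDir.exists()) {
    // Best effort: if mkdirs() fails, the subsequent write of the
    // <segmentTxId>.tmp file will still surface an IOException.
    paxosDir.mkdirs();
  }
  return new File(paxosDir, String.valueOf(segmentTxId));
}
{code}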



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org


