[jira] [Commented] (HDFS-7661) Erasure coding: support hflush and hsync

2016-01-05 Thread GAO Rui (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-7661?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15085157#comment-15085157
 ] 

GAO Rui commented on HDFS-7661:
---

Based on the GS: when the 2nd flush is invoked and a datanode failure happens, 
if the count of new internal blocks (internal blocks with the new GS) is less 
than needed, the write should be judged as failed. I think an error message 
should then be reported to the user.
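
As a rough illustration, a minimal, self-contained sketch of that check; the 
method and parameter names are hypothetical, not the actual striped output 
stream code:
{code}
// Hypothetical: judge a flush as failed when fewer internal blocks than
// needed were bumped to the new generation stamp.
static void judgeFlush(long[] internalBlockGS, long newGS, int numNeeded)
    throws java.io.IOException {
  int updated = 0;
  for (long gs : internalBlockGS) {
    if (gs == newGS) {        // this internal block advanced to the new GS
      updated++;
    }
  }
  if (updated < numNeeded) {
    throw new java.io.IOException("flush failed: only " + updated
        + " internal blocks carry the new GS " + newGS);
  }
}
{code}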

> Erasure coding: support hflush and hsync
> 
>
> Key: HDFS-7661
> URL: https://issues.apache.org/jira/browse/HDFS-7661
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: Tsz Wo Nicholas Sze
>Assignee: GAO Rui
> Attachments: EC-file-flush-and-sync-steps-plan-2015-12-01.png, 
> HDFS-7661-unitTest-wip-trunk.patch, 
> HDFS-EC-file-flush-sync-design-version1.1.pdf
>
>
> We also need to support hflush/hsync and visible length. 



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-7661) Erasure coding: support hflush and hsync

2016-01-05 Thread GAO Rui (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-7661?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15085134#comment-15085134
 ] 

GAO Rui commented on HDFS-7661:
---

[~zhz], these make sense to me. Let's store the data length in the {{.meta}} 
file. For a replicated block's {{.meta}} file, the data length means the length 
of the related block, while for an EC internal block's {{.meta}} file, it means 
the data length of the related block group.

> Erasure coding: support hflush and hsync
> 
>
> Key: HDFS-7661
> URL: https://issues.apache.org/jira/browse/HDFS-7661
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: Tsz Wo Nicholas Sze
>Assignee: GAO Rui
> Attachments: EC-file-flush-and-sync-steps-plan-2015-12-01.png, 
> HDFS-7661-unitTest-wip-trunk.patch, 
> HDFS-EC-file-flush-sync-design-version1.1.pdf
>
>
> We also need to support hflush/hsync and visible length. 



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HDFS-9005) Provide support for upgrade domain script

2016-01-05 Thread Ming Ma (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-9005?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ming Ma updated HDFS-9005:
--
Attachment: HDFS-9005.patch

[~ctrezzo] will create a new jira for the design around the "passing datanode 
admin properties via ClientProtocol" approach. Meanwhile, we should go ahead 
and use the existing refreshNodes flow to provide upgrade domain support; some 
of the components, such as the JSON format and the datanode admin data 
structure, can also be reused later in the long-term design. Here is a draft 
patch that allows admins to specify all DN properties in one JSON file.  
Example:

{noformat}
 {"hostName": "host1"}
 {"hostName": "host2", "port": 50, "upgradeDomain": "ud0"}
 {"hostName": "host3", "port": 0, "adminState": "DECOMMISSIONED"}
{noformat}


* Abstract the host file manager into an interface implemented by both the new 
JSON approach and the old include/exclude approach (a hypothetical reader is 
sketched below).
* Implement the JSON file manager. The default is still the old include/exclude 
approach.
* Have {{DatanodeManager}} get the upgrade domain from the host file manager.
* Refactor test code w.r.t. host file writing.
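
For illustration, a minimal sketch of reading such a one-JSON-object-per-line 
file with Jackson; the class and field names are hypothetical, not the patch's 
actual API:
{code}
import com.fasterxml.jackson.databind.ObjectMapper;
import java.io.BufferedReader;
import java.io.FileReader;
import java.io.IOException;
import java.util.ArrayList;
import java.util.List;

// Hypothetical reader: one JSON object per line, each describing a datanode.
public class JsonHostsReader {
  public static class DnProperties {
    public String hostName;
    public int port;              // 0 means unspecified
    public String upgradeDomain;  // may be null
    public String adminState;     // e.g. "DECOMMISSIONED"; may be null
  }

  public static List<DnProperties> read(String path) throws IOException {
    ObjectMapper mapper = new ObjectMapper();
    List<DnProperties> result = new ArrayList<>();
    try (BufferedReader in = new BufferedReader(new FileReader(path))) {
      String line;
      while ((line = in.readLine()) != null) {
        if (!line.trim().isEmpty()) {
          result.add(mapper.readValue(line, DnProperties.class));
        }
      }
    }
    return result;
  }
}
{code}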

> Provide support for upgrade domain script
> -
>
> Key: HDFS-9005
> URL: https://issues.apache.org/jira/browse/HDFS-9005
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: Ming Ma
> Attachments: HDFS-9005.patch
>
>
> As part of the upgrade domain feature, we need to provide a mechanism to 
> specify the upgrade domain for each datanode. One way to accomplish that is to 
> allow admins to specify an upgrade domain script that takes a DN's IP or 
> hostname as input and returns the upgrade domain. Then the namenode will use 
> it at run time to set {{DatanodeInfo}}'s upgrade domain string. The 
> configuration can be something like:
> {noformat}
> <property>
>   <name>dfs.namenode.upgrade.domain.script.file.name</name>
>   <value>/etc/hadoop/conf/upgrade-domain.sh</value>
> </property>
> {noformat}
> just like the topology script.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HDFS-9005) Provide support for upgrade domain script

2016-01-05 Thread Ming Ma (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-9005?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ming Ma updated HDFS-9005:
--
Assignee: Ming Ma
  Status: Patch Available  (was: Open)

> Provide support for upgrade domain script
> -
>
> Key: HDFS-9005
> URL: https://issues.apache.org/jira/browse/HDFS-9005
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: Ming Ma
>Assignee: Ming Ma
> Attachments: HDFS-9005.patch
>
>
> As part of the upgrade domain feature, we need to provide a mechanism to 
> specify the upgrade domain for each datanode. One way to accomplish that is to 
> allow admins to specify an upgrade domain script that takes a DN's IP or 
> hostname as input and returns the upgrade domain. Then the namenode will use 
> it at run time to set {{DatanodeInfo}}'s upgrade domain string. The 
> configuration can be something like:
> {noformat}
> <property>
>   <name>dfs.namenode.upgrade.domain.script.file.name</name>
>   <value>/etc/hadoop/conf/upgrade-domain.sh</value>
> </property>
> {noformat}
> just like the topology script.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-9605) Add links to failed volumes to explorer.html in HDFS Web UI

2016-01-05 Thread Archana T (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-9605?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15084960#comment-15084960
 ] 

Archana T commented on HDFS-9605:
-

Hi [~wheat9]
Thanks for the commit.

> Add links to failed volumes to explorer.html in HDFS Web UI
> ---
>
> Key: HDFS-9605
> URL: https://issues.apache.org/jira/browse/HDFS-9605
> Project: Hadoop HDFS
>  Issue Type: Bug
>Reporter: Archana T
>Assignee: Archana T
>Priority: Minor
> Fix For: 2.8.0
>
> Attachments: HDFS-9605.patch
>
>
> In the NameNode UI,
> "tab-datanode-volume-failures" is missing from explorer.html



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-8356) Document missing properties in hdfs-default.xml

2016-01-05 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-8356?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15084560#comment-15084560
 ] 

Hadoop QA commented on HDFS-8356:
-

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 0s 
{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s 
{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green} 0m 
0s {color} | {color:green} The patch appears to include 1 new or modified test 
files. {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 7m 
39s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 38s 
{color} | {color:green} trunk passed with JDK v1.8.0_66 {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 41s 
{color} | {color:green} trunk passed with JDK v1.7.0_91 {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 
16s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 0m 51s 
{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
15s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 1m 
53s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 4s 
{color} | {color:green} trunk passed with JDK v1.8.0_66 {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 46s 
{color} | {color:green} trunk passed with JDK v1.7.0_91 {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 0m 
45s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 38s 
{color} | {color:green} the patch passed with JDK v1.8.0_66 {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 0m 38s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 38s 
{color} | {color:green} the patch passed with JDK v1.7.0_91 {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 0m 38s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 
15s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 0m 50s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
11s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 
0s {color} | {color:green} Patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} xml {color} | {color:green} 0m 1s 
{color} | {color:green} The patch has no ill-formed XML file. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 2m 0s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 3s 
{color} | {color:green} the patch passed with JDK v1.8.0_66 {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 43s 
{color} | {color:green} the patch passed with JDK v1.7.0_91 {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 52m 24s {color} 
| {color:red} hadoop-hdfs in the patch failed with JDK v1.8.0_66. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 50m 33s 
{color} | {color:green} hadoop-hdfs in the patch passed with JDK v1.7.0_91. 
{color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 
21s {color} | {color:green} Patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 139m 57s {color} 
| {color:black} {color} |
\\
\\
|| Reason || Tests ||
| JDK v1.8.0_66 Failed junit tests | 
hadoop.hdfs.server.datanode.TestBlockReplacement |
|   | hadoop.hdfs.server.namenode.TestNNThroughputBenchmark |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:0ca8df7 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12780643/HDFS-8356.004.patch |
| JIRA Issue | HDFS-8356 |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  xml  findbugs  checkstyle  |
| uname | Linux 1c9ea1d1183a 3.13.0-36

[jira] [Commented] (HDFS-8572) DN always uses HTTP/localhost@REALM principals in SPNEGO

2016-01-05 Thread Brahma Reddy Battula (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-8572?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15084557#comment-15084557
 ] 

Brahma Reddy Battula commented on HDFS-8572:


[~wheat9], can I raise a separate jira to track this?

> DN always uses HTTP/localhost@REALM principals in SPNEGO
> 
>
> Key: HDFS-8572
> URL: https://issues.apache.org/jira/browse/HDFS-8572
> Project: Hadoop HDFS
>  Issue Type: Bug
>Reporter: Haohui Mai
>Assignee: Haohui Mai
>Priority: Blocker
> Fix For: 2.7.1
>
> Attachments: HDFS-8572.000.patch, HDFS-8572.001.patch, 
> HDFS-8572.002.patch
>
>
> In HDFS-7279 the Netty server in DN proxies all servlet requests to the local 
> Jetty instance.
> The Jetty server is configured incorrectly so that it always uses 
> {{HTTP/locahost@REALM}} to authenticate spnego requests. As a result, 
> servlets like JMX are no longer accessible in secure deployments.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-7661) Erasure coding: support hflush and hsync

2016-01-05 Thread Walter Su (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-7661?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15084527#comment-15084527
 ] 

Walter Su commented on HDFS-7661:
-

According to the description:
1. The 3 parity blocks should be updated sequentially.
2. The 2nd flush temporarily decreases the safety of the data persisted by the 
1st flush. If there are already numParityBlks failures, the 2nd flush must 
succeed and cannot even be aborted by the user; otherwise it would damage the 
data from the 1st flush (see the sketch below).
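
A tiny, self-contained sketch of that guard, with hypothetical names:
{code}
// Hypothetical: once failures have reached numParityBlks, the data persisted
// by the previous flush has no redundancy left, so an in-flight second flush
// may not be aborted; it must be retried until it succeeds.
static boolean mayAbortFlush(int failedInternalBlocks, int numParityBlks) {
  return failedInternalBlocks < numParityBlks;
}
{code}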

> Erasure coding: support hflush and hsync
> 
>
> Key: HDFS-7661
> URL: https://issues.apache.org/jira/browse/HDFS-7661
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: Tsz Wo Nicholas Sze
>Assignee: GAO Rui
> Attachments: EC-file-flush-and-sync-steps-plan-2015-12-01.png, 
> HDFS-7661-unitTest-wip-trunk.patch, 
> HDFS-EC-file-flush-sync-design-version1.1.pdf
>
>
> We also need to support hflush/hsync and visible length. 



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-9498) Move code that tracks orphan blocks to BlockManagerSafeMode

2016-01-05 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-9498?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15084480#comment-15084480
 ] 

Hadoop QA commented on HDFS-9498:
-

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 0s 
{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s 
{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green} 0m 
0s {color} | {color:green} The patch appears to include 8 new or modified test 
files. {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 7m 
33s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 41s 
{color} | {color:green} trunk passed with JDK v1.8.0_66 {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 41s 
{color} | {color:green} trunk passed with JDK v1.7.0_91 {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 
19s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 0m 52s 
{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
13s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 1m 
51s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 4s 
{color} | {color:green} trunk passed with JDK v1.8.0_66 {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 48s 
{color} | {color:green} trunk passed with JDK v1.7.0_91 {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 0m 
44s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 37s 
{color} | {color:green} the patch passed with JDK v1.8.0_66 {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 0m 37s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 38s 
{color} | {color:green} the patch passed with JDK v1.7.0_91 {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 0m 38s 
{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} checkstyle {color} | {color:red} 0m 20s 
{color} | {color:red} Patch generated 2 new checkstyle issues in 
hadoop-hdfs-project/hadoop-hdfs (total was 354, now 354). {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 0m 48s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
11s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 
0s {color} | {color:green} Patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 2m 1s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 0s 
{color} | {color:green} the patch passed with JDK v1.8.0_66 {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 44s 
{color} | {color:green} the patch passed with JDK v1.7.0_91 {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 52m 56s {color} 
| {color:red} hadoop-hdfs in the patch failed with JDK v1.8.0_66. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 51m 44s 
{color} | {color:green} hadoop-hdfs in the patch passed with JDK v1.7.0_91. 
{color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 
22s {color} | {color:green} Patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 130m 40s {color} 
| {color:black} {color} |
\\
\\
|| Reason || Tests ||
| JDK v1.8.0_66 Failed junit tests | hadoop.hdfs.server.mover.TestStorageMover |
|   | hadoop.hdfs.server.namenode.TestNNThroughputBenchmark |
|   | hadoop.hdfs.TestDFSStripedOutputStreamWithFailure |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:0ca8df7 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12780636/HDFS-9498.004.patch |
| JIRA Issue | HDFS-9498 |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  findbugs  checkstyle  |
| uname | Linux 490a319bccb5 3.13.0-36-lowlatency #63-Ubuntu SMP

[jira] [Commented] (HDFS-9525) hadoop utilities need to support provided delegation tokens

2016-01-05 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-9525?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15084436#comment-15084436
 ] 

Hadoop QA commented on HDFS-9525:
-

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 0s 
{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s 
{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green} 0m 
0s {color} | {color:green} The patch appears to include 2 new or modified test 
files. {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 8m 
39s {color} | {color:green} branch-2 passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 7m 8s 
{color} | {color:green} branch-2 passed with JDK v1.8.0_66 {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 7m 25s 
{color} | {color:green} branch-2 passed with JDK v1.7.0_91 {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 1m 
8s {color} | {color:green} branch-2 passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 2m 26s 
{color} | {color:green} branch-2 passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
46s {color} | {color:green} branch-2 passed {color} |
| {color:red}-1{color} | {color:red} findbugs {color} | {color:red} 2m 13s 
{color} | {color:red} hadoop-common-project/hadoop-common in branch-2 has 5 
extant Findbugs warnings. {color} |
| {color:red}-1{color} | {color:red} findbugs {color} | {color:red} 1m 59s 
{color} | {color:red} hadoop-hdfs-project/hadoop-hdfs-client in branch-2 has 5 
extant Findbugs warnings. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 2m 31s 
{color} | {color:green} branch-2 passed with JDK v1.8.0_66 {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 3m 27s 
{color} | {color:green} branch-2 passed with JDK v1.7.0_91 {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 2m 
14s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 6m 34s 
{color} | {color:green} the patch passed with JDK v1.8.0_66 {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 6m 34s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 7m 5s 
{color} | {color:green} the patch passed with JDK v1.7.0_91 {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 7m 5s 
{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} checkstyle {color} | {color:red} 1m 3s 
{color} | {color:red} Patch generated 1 new checkstyle issues in root (total 
was 345, now 346). {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 2m 13s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
39s {color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} whitespace {color} | {color:red} 0m 0s 
{color} | {color:red} The patch has 61 line(s) that end in whitespace. Use git 
apply --whitespace=fix. {color} |
| {color:green}+1{color} | {color:green} xml {color} | {color:green} 0m 0s 
{color} | {color:green} The patch has no ill-formed XML file. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 6m 
16s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 2m 26s 
{color} | {color:green} the patch passed with JDK v1.8.0_66 {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 3m 21s 
{color} | {color:green} the patch passed with JDK v1.7.0_91 {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 7m 21s 
{color} | {color:green} hadoop-common in the patch passed with JDK v1.8.0_66. 
{color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 0m 56s 
{color} | {color:green} hadoop-hdfs-client in the patch passed with JDK 
v1.8.0_66. {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 46m 15s {color} 
| {color:red} hadoop-hdfs in the patch failed with JDK v1.8.0_66. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 7m 0s 
{color} | {color:green} hadoop-common in the patch passed with JDK v1.7.0_91. 
{color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 0m 57s 
{color} | {color:green} ha

[jira] [Commented] (HDFS-9613) Some improvement and clean up in distcp

2016-01-05 Thread Kai Zheng (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-9613?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15084435#comment-15084435
 ] 

Kai Zheng commented on HDFS-9613:
-

Thanks [~jingzhao] for the good questions!

bq. I'm not sure if this is correct if the source/target filesystems are not 
DistributedFileSystem
I looked at the discussion in HADOOP-3981 and checked the existing code; 
{{getFileChecksum}} seems to be implemented only in HDFS. For other kinds of 
source/target file systems, we may not preserve the checksum opt and block size 
settings because they're specific to HDFS; I think that's why the behaviour 
isn't on by default and an additional {{-pb}} option is provided. In that case, 
the {{preserve}} variable would be false, and skipping the checksum comparison 
would make sense.

I also found the following code in {{CopyMapper}}, which may be telling:
{code}
  private boolean canSkip(FileSystem sourceFS, FileStatus source,
      FileStatus target) throws IOException {
    if (!syncFolders) {
      return true;
    }
    boolean sameLength = target.getLen() == source.getLen();
    boolean sameBlockSize = source.getBlockSize() == target.getBlockSize()
        || !preserve.contains(FileAttribute.BLOCKSIZE);
    if (sameLength && sameBlockSize) {
      return skipCrc ||
          DistCpUtils.checksumsAreEqual(sourceFS, source.getPath(), null,
              targetFS, target.getPath());
    } else {
      return false;
    }
  }
{code}

bq. or if we use a new file checksum computation algorithm (e.g., HDFS-8430) 
which does not require the same block size.
Yeah, you're right. With some other block layout, like striping, we may achieve 
the effect that two files' checksums can be compared even when they use 
different block sizes. As discussed in HDFS-8430, we can revisit this once we 
have settled on the approach there.

Sounds good? Thanks.
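
For reference, a minimal hypothetical sketch of the "check the cheap block-size 
attribute first, compute the expensive checksum last" ordering this issue is 
about; the interface and names are illustrative, not the actual distcp API:
{code}
import java.io.IOException;

// Hypothetical abstraction over a file's verifiable attributes.
interface ChecksumSource {
  long blockSize();
  String fileChecksum() throws IOException; // e.g. MD5-of-CRCs as a hex string
}

class CopyVerifier {
  static boolean copyLooksIntact(ChecksumSource src, ChecksumSource dst)
      throws IOException {
    if (src.blockSize() != dst.blockSize()) {
      return false; // cheap attribute check failed; skip the checksum entirely
    }
    return src.fileChecksum().equals(dst.fileChecksum()); // expensive step last
  }
}
{code}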


> Some improvement and clean up in distcp
> ---
>
> Key: HDFS-9613
> URL: https://issues.apache.org/jira/browse/HDFS-9613
> Project: Hadoop HDFS
>  Issue Type: Improvement
>Reporter: Kai Zheng
>Assignee: Kai Zheng
>Priority: Minor
> Attachments: HDFS-9613-v1.patch, HDFS-9613-v2.patch
>
>
> While working on a related issue, it was noticed that there are some places 
> in {{distcp}} that could be improved and cleaned up. In particular, after a 
> file is copied to the target cluster, distcp checks whether the copied file is 
> intact. When checking, it is better to check the block size first and then the 
> checksum, because the latter is a little expensive.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-3599) Better expose when under-construction files are preventing DN decommission

2016-01-05 Thread Zhe Zhang (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-3599?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15084422#comment-15084422
 ] 

Zhe Zhang commented on HDFS-3599:
-

With the HDFS-5579 change, it's still possible for UC files to block decomm if 
{{minReplication}} is configured to be larger than 1. In that case it's still 
possible for the last block in a UC file to be under-replicated; the NN won't 
try to re-replicate it, and it will block decomm.

Another issue is that HDFS-7411 removed the below logic (introduced by 
HDFS-5579):
{code}
      if (block.equals(bc.getLastBlock()) && curReplicas > minReplication) {
        continue;
      }
{code}

Pinging [~andrew.wang] to confirm whether we should add it back.

> Better expose when under-construction files are preventing DN decommission
> --
>
> Key: HDFS-3599
> URL: https://issues.apache.org/jira/browse/HDFS-3599
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: datanode, namenode
>Affects Versions: 3.0.0
>Reporter: Todd Lipcon
>Assignee: Zhe Zhang
>
> Filing on behalf of Konstantin Olchanski:
> {quote}
> I have been trying to decommission a data node, but the process
> stalled. I followed the correct instructions, observed my node
> listed in "Decommissioning Nodes", etc, observed "Under Replicated Blocks"
> decrease, etc. But the count went down to "1" and the decommission process 
> stalled.
> There was no visible activity anywhere, nothing was happening (well,
> maybe in some hidden log file somewhere something complained,
> but I did not look).
> It turns out that I had some files stuck in "OPENFORWRITE" mode,
> as reported by "hdfs fsck / -openforwrite -files -blocks -locations -racks":
> {code}
> /users/trinat/data/.fuse_hidden177e0002 0 bytes, 0 block(s), 
> OPENFORWRITE:  OK
> /users/trinat/data/.fuse_hidden178d0003 0 bytes, 0 block(s), 
> OPENFORWRITE:  OK
> /users/trinat/data/.fuse_hidden1da30004 0 bytes, 1 block(s), 
> OPENFORWRITE:  OK
> 0. 
> BP-88378204-142.90.119.126-1340494203431:blk_6980480609696383665_20259{blockUCState=UNDER_CONSTRUCTION,
>  primaryNodeIndex=-1, 
> replicas=[ReplicaUnderConstruction[142.90.111.72:50010|RBW], 
> ReplicaUnderConstruction[142.90.119.162:50010|RBW], 
> ReplicaUnderConstruction[142.90.119.126:50010|RBW]]} len=0 repl=3 
> [/detfac/142.90.111.72:50010, /isac2/142.90.119.162:50010, 
> /isac2/142.90.119.126:50010]
> {code}
> After I deleted those files, the decommission process completed successfully.
> Perhaps one can add some visible indication somewhere on the HDFS status web 
> page
> that the decommission process is stalled and maybe report why it is stalled?
> Maybe the number of "OPENFORWRITE" files should be listed on the status page
> next to the "Number of Under-Replicated Blocks"? (Since I know that nobody is 
> writing
> to my HDFS, the non-zero count would give me a clue that something is wrong).
> {quote}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-8999) Namenode need not wait for {{blockReceived}} for the last block before completing a file.

2016-01-05 Thread Jing Zhao (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-8999?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15084414#comment-15084414
 ] 

Jing Zhao commented on HDFS-8999:
-

Another question is whether we should update the new logic from HDFS-1172 and 
HDFS-9535. In HDFS-1172, we add the replica to the pending queue only when 
there is already >= 1 live replica. Now, since the client can close the file 
without waiting for the IBR, we will add the block into the under-replicated 
queue with {{QUEUE_WITH_CORRUPT_BLOCKS}} priority. I think maybe we should 
still put the block into the pending queue in this scenario (see the sketch 
below).
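
A minimal, self-contained sketch of that idea; the class and field names are 
hypothetical, not the actual {{BlockManager}} structures:
{code}
import java.util.HashSet;
import java.util.Set;

// Hypothetical: if the client closed the file before the final IBR arrived
// and at least one live replica is known, park the block in a pending set
// (IBRs are expected shortly) instead of queueing it at corrupt priority.
public class PendingVsCorruptSketch {
  final Set<Long> pendingIbr = new HashSet<>();       // blocks awaiting IBRs
  final Set<Long> underReplicated = new HashSet<>();  // corrupt-priority queue

  void onCloseWithoutFinalIbr(long blockId, int liveReplicas) {
    if (liveReplicas >= 1) {
      pendingIbr.add(blockId);        // HDFS-1172 style: just wait for the IBR
    } else {
      underReplicated.add(blockId);   // no live replica known: escalate
    }
  }
}
{code}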

> Namenode need not wait for {{blockReceived}} for the last block before 
> completing a file.
> -
>
> Key: HDFS-8999
> URL: https://issues.apache.org/jira/browse/HDFS-8999
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: namenode
>Reporter: Jitendra Nath Pandey
>Assignee: Tsz Wo Nicholas Sze
> Attachments: h8999_20151228.patch, h8999_20160106.patch
>
>
> This comes out of a discussion in HDFS-8763. Pasting [~jingzhao]'s comment 
> from the jira:
> {quote}
> ...whether we need to let NameNode wait for all the block_received msgs to 
> announce the replica is safe. Looking into the code, now we have
># NameNode knows the DataNodes involved when initially setting up the 
> writing pipeline
># If any DataNode fails during the writing, client bumps the GS and 
> finally reports all the DataNodes included in the new pipeline to NameNode 
> through the updatePipeline RPC.
># When the client received the ack for the last packet of the block (and 
> before the client tries to close the file on NameNode), the replica has been 
> finalized in all the DataNodes.
> Then in this case, when NameNode receives the close request from the client, 
> the NameNode already knows the latest replicas for the block. Currently the 
> checkReplication call only counts in all the replicas that NN has already 
> received the block_received msg, but based on the above #2 and #3, it may be 
> safe to also count in all the replicas in the 
> BlockUnderConstructionFeature#replicas?
> {quote}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HDFS-8356) Document missing properties in hdfs-default.xml

2016-01-05 Thread Ray Chiang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-8356?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ray Chiang updated HDFS-8356:
-
Attachment: HDFS-8356.004.patch

Fix whitespace.

> Document missing properties in hdfs-default.xml
> ---
>
> Key: HDFS-8356
> URL: https://issues.apache.org/jira/browse/HDFS-8356
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: documentation
>Affects Versions: 2.7.0
>Reporter: Ray Chiang
>Assignee: Ray Chiang
>  Labels: supportability, test
> Attachments: HDFS-8356.001.patch, HDFS-8356.002.patch, 
> HDFS-8356.003.patch, HDFS-8356.004.patch
>
>
> The following properties are currently not defined in hdfs-default.xml. These 
> properties should either be
> A) documented in hdfs-default.xml OR
> B) listed as an exception (with comments, e.g. for internal use) in the 
> TestHdfsConfigFields unit test



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-8999) Namenode need not wait for {{blockReceived}} for the last block before completing a file.

2016-01-05 Thread Jing Zhao (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-8999?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15084058#comment-15084058
 ] 

Jing Zhao commented on HDFS-8999:
-

Thanks for the patch, [~szetszwo]! The patch looks good to me. Comments and a 
question:
# AccessControlException is for security; thus, instead of extending it, it's 
better to let LastBlockNotYetCompleteException extend IOException, or even 
RetriableException if we want to enable retries for the client (sketched 
below)?
# Should we be more aggressive and allow all the blocks to be in committed 
state? Otherwise we will still have issues when IBRs are sent periodically.
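
A minimal sketch of that first suggestion, assuming {{RetriableException}}'s 
string constructor; this is illustrative, not the committed code:
{code}
import org.apache.hadoop.ipc.RetriableException;

// Hypothetical: make the new exception retriable instead of a security
// exception, so clients can transparently retry the close() call.
public class LastBlockNotYetCompleteException extends RetriableException {
  public LastBlockNotYetCompleteException(String message) {
    super(message);
  }
}
{code}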

> Namenode need not wait for {{blockReceived}} for the last block before 
> completing a file.
> -
>
> Key: HDFS-8999
> URL: https://issues.apache.org/jira/browse/HDFS-8999
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: namenode
>Reporter: Jitendra Nath Pandey
>Assignee: Tsz Wo Nicholas Sze
> Attachments: h8999_20151228.patch, h8999_20160106.patch
>
>
> This comes out of a discussion in HDFS-8763. Pasting [~jingzhao]'s comment 
> from the jira:
> {quote}
> ...whether we need to let NameNode wait for all the block_received msgs to 
> announce the replica is safe. Looking into the code, now we have
># NameNode knows the DataNodes involved when initially setting up the 
> writing pipeline
># If any DataNode fails during the writing, client bumps the GS and 
> finally reports all the DataNodes included in the new pipeline to NameNode 
> through the updatePipeline RPC.
># When the client received the ack for the last packet of the block (and 
> before the client tries to close the file on NameNode), the replica has been 
> finalized in all the DataNodes.
> Then in this case, when NameNode receives the close request from the client, 
> the NameNode already knows the latest replicas for the block. Currently the 
> checkReplication call only counts in all the replicas that NN has already 
> received the block_received msg, but based on the above #2 and #3, it may be 
> safe to also count in all the replicas in the 
> BlockUnderConstructionFeature#replicas?
> {quote}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-8356) Document missing properties in hdfs-default.xml

2016-01-05 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-8356?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15084023#comment-15084023
 ] 

Hadoop QA commented on HDFS-8356:
-

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 0s 
{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s 
{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green} 0m 
0s {color} | {color:green} The patch appears to include 1 new or modified test 
files. {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 7m 
25s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 38s 
{color} | {color:green} trunk passed with JDK v1.8.0_66 {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 40s 
{color} | {color:green} trunk passed with JDK v1.7.0_91 {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 
15s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 0m 52s 
{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
14s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 1m 
56s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 5s 
{color} | {color:green} trunk passed with JDK v1.8.0_66 {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 45s 
{color} | {color:green} trunk passed with JDK v1.7.0_91 {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 0m 
44s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 35s 
{color} | {color:green} the patch passed with JDK v1.8.0_66 {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 0m 35s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 39s 
{color} | {color:green} the patch passed with JDK v1.7.0_91 {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 0m 39s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 
15s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 0m 49s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
11s {color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} whitespace {color} | {color:red} 0m 0s 
{color} | {color:red} The patch has 7 line(s) that end in whitespace. Use git 
apply --whitespace=fix. {color} |
| {color:green}+1{color} | {color:green} xml {color} | {color:green} 0m 0s 
{color} | {color:green} The patch has no ill-formed XML file. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 1m 
59s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 4s 
{color} | {color:green} the patch passed with JDK v1.8.0_66 {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 47s 
{color} | {color:green} the patch passed with JDK v1.7.0_91 {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 65m 1s {color} 
| {color:red} hadoop-hdfs in the patch failed with JDK v1.8.0_66. {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 64m 19s {color} 
| {color:red} hadoop-hdfs in the patch failed with JDK v1.7.0_91. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 
21s {color} | {color:green} Patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 155m 14s {color} 
| {color:black} {color} |
\\
\\
|| Reason || Tests ||
| JDK v1.8.0_66 Failed junit tests | 
hadoop.hdfs.TestDFSStripedOutputStreamWithFailure100 |
|   | hadoop.hdfs.TestRollingUpgrade |
| JDK v1.7.0_91 Failed junit tests | 
hadoop.hdfs.server.namenode.ha.TestBootstrapStandby |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:0ca8df7 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12780615/HDFS-8356.003.patch |
| JIRA Issue | HDFS-8356 |
| Optional Tests |  asflicense  compile  javac 

[jira] [Created] (HDFS-9616) libhdfs++ Add runtime hooks to allow a client application to add low level monitoring and tests.

2016-01-05 Thread James Clampffer (JIRA)
James Clampffer created HDFS-9616:
-

 Summary: libhdfs++ Add runtime hooks to allow a client application 
to add low level monitoring and tests.
 Key: HDFS-9616
 URL: https://issues.apache.org/jira/browse/HDFS-9616
 Project: Hadoop HDFS
  Issue Type: Sub-task
Reporter: James Clampffer
Assignee: James Clampffer


It would be nice to have a set of callable objects and corresponding event 
hooks in useful places that can be set by a client application at runtime.

This is intended to provide a scalable mechanism for implementing counters 
(#retries, #namenode requests) or application-specific testing, e.g. simulating 
a dropped connection when the test system running the client application 
requests it.

The current implementation plan is a struct full of callbacks (std::function 
objects) owned by the FileSystemImpl.  A callback could be set (or left as a 
no-op), and when the code hits the corresponding event it will be invoked with 
a reference to the object (for context) and each method argument by reference.  
The callback returns a bool: true to continue execution or false to bail out of 
the calling method.
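
For illustration only, a rough Java analogue of that hook pattern (the actual 
plan is C++ with std::function members; all names here are hypothetical):
{code}
import java.util.function.BiPredicate;

// Hypothetical analogue of the proposed callback struct: each hook receives
// the owning object (for context) plus the method arguments, and returns
// true to continue execution or false to bail out of the calling method.
class Hooks {
  BiPredicate<Object, Object[]> preConnect = (ctx, args) -> true; // no-op default
}

class FileSystemSketch {
  final Hooks hooks = new Hooks();

  boolean connect(String host, int port) {
    // fire the event hook; abort the call if it returns false
    if (!hooks.preConnect.test(this, new Object[] {host, port})) {
      return false;
    }
    // ... real connection logic would go here ...
    return true;
  }
}
{code}
A test harness could then count namenode requests, or simulate a dropped 
connection by installing a hook that returns false at the chosen event.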



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-9615) Fix variable name typo in DFSConfigKeys#DFS_NAMENODE_MISSING_CHECKPOINT_PERIODS_BEFORE_SHUTDONW_DEFAULT

2016-01-05 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-9615?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15084020#comment-15084020
 ] 

Hadoop QA commented on HDFS-9615:
-

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 0s 
{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s 
{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:red}-1{color} | {color:red} test4tests {color} | {color:red} 0m 0s 
{color} | {color:red} The patch doesn't appear to include any new or modified 
tests. Please justify why no new tests are needed for this patch. Also please 
list what manual steps were performed to verify this patch. {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 8m 
11s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 45s 
{color} | {color:green} trunk passed with JDK v1.8.0_66 {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 47s 
{color} | {color:green} trunk passed with JDK v1.7.0_91 {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 
24s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 0m 57s 
{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
14s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 2m 8s 
{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 12s 
{color} | {color:green} trunk passed with JDK v1.8.0_66 {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 2m 1s 
{color} | {color:green} trunk passed with JDK v1.7.0_91 {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 0m 
53s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 43s 
{color} | {color:green} the patch passed with JDK v1.8.0_66 {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 0m 43s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 49s 
{color} | {color:green} the patch passed with JDK v1.7.0_91 {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 0m 49s 
{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} checkstyle {color} | {color:red} 0m 26s 
{color} | {color:red} Patch generated 2 new checkstyle issues in 
hadoop-hdfs-project/hadoop-hdfs (total was 617, now 617). {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 0m 56s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
13s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 
0s {color} | {color:green} Patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 2m 
10s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 4s 
{color} | {color:green} the patch passed with JDK v1.8.0_66 {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 46s 
{color} | {color:green} the patch passed with JDK v1.7.0_91 {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 53m 2s {color} 
| {color:red} hadoop-hdfs in the patch failed with JDK v1.8.0_66. {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 52m 4s {color} 
| {color:red} hadoop-hdfs in the patch failed with JDK v1.7.0_91. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 
22s {color} | {color:green} Patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 133m 39s {color} 
| {color:black} {color} |
\\
\\
|| Reason || Tests ||
| JDK v1.8.0_66 Failed junit tests | hadoop.hdfs.qjournal.TestSecureNNWithQJM |
|   | hadoop.hdfs.TestBlockReaderLocal |
|   | hadoop.hdfs.server.datanode.TestBlockScanner |
|   | hadoop.hdfs.TestDFSStripedOutputStreamWithFailure |
|   | hadoop.hdfs.server.balancer.TestBalancerWithMultipleNameNodes |
|   | hadoop.hdfs.TestHDFSFileSystemContract |
| JDK v1.7.0_91 Failed junit tests | hadoop.hdfs.TestGetFileChecksum |
|   | hadoop.hdfs.TestHDFSFileSystemContrac

[jira] [Commented] (HDFS-7661) Erasure coding: support hflush and hsync

2016-01-05 Thread Zhe Zhang (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-7661?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15084008#comment-15084008
 ] 

Zhe Zhang commented on HDFS-7661:
-

bq. 2. flush at Flush point 2.1.1, PIB1 and PIB2 were updated to PIB1` and 
PIB2`, but *PIB3 failed in updating*.
After this failure, following the current error handling logic, we should have 
bumped the GS of PIB1' and PIB2'. In other words, PIB3 should now have a stale 
GS (and the NN should be able to remove it from the block's locations).

bq. 3. PIB1`, the first and third data internal block were down.
So in this example there are 4 failures: 2 internal data blocks, PIB1', and 
PIB3. Data is not recoverable. If only 1 internal data block failed, the reader 
should be able to use PIB2' for decoding.

Actually, even without the help of GS, we should be able to detect stale parity 
blocks from the length info in the {{.meta}} file. When decoding a partial 
stripe of 64KB+64KB+10KB, there are 2 scenarios (scenario 1 is sketched below):
# The last cell (which should be the only partial cell) is available. Then we 
should make sure to only use parity blocks with length 138KB in the {{.meta}} 
file.
# The last cell is unavailable. In this case, we should just make sure all 
parity blocks have the same length in the {{.meta}} file.
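
A minimal, self-contained sketch of scenario 1, with hypothetical names:
{code}
import java.util.ArrayList;
import java.util.List;
import java.util.Map;

// Hypothetical: with the last (partial) cell available, only parity blocks
// whose .meta file records the full partial-stripe length
// (64KB + 64KB + 10KB = 138KB) are safe to use for decoding.
class StaleParityFilter {
  static List<String> usableParityBlocks(Map<String, Long> metaLengths,
      long partialStripeLen) {
    List<String> usable = new ArrayList<>();
    for (Map.Entry<String, Long> e : metaLengths.entrySet()) {
      if (e.getValue() == partialStripeLen) { // stale blocks record less
        usable.add(e.getKey());
      }
    }
    return usable;
  }
}
{code}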

> Erasure coding: support hflush and hsync
> 
>
> Key: HDFS-7661
> URL: https://issues.apache.org/jira/browse/HDFS-7661
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: Tsz Wo Nicholas Sze
>Assignee: GAO Rui
> Attachments: EC-file-flush-and-sync-steps-plan-2015-12-01.png, 
> HDFS-7661-unitTest-wip-trunk.patch, 
> HDFS-EC-file-flush-sync-design-version1.1.pdf
>
>
> We also need to support hflush/hsync and visible length. 



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-9047) deprecate libwebhdfs in branch-2; remove from trunk

2016-01-05 Thread Haohui Mai (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-9047?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15084009#comment-15084009
 ] 

Haohui Mai commented on HDFS-9047:
--

[~cmccabe], does it look good to you?

I plan to commit it tomorrow if there are no more comments.



> deprecate libwebhdfs in branch-2; remove from trunk
> ---
>
> Key: HDFS-9047
> URL: https://issues.apache.org/jira/browse/HDFS-9047
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: webhdfs
>Reporter: Allen Wittenauer
>Assignee: Haohui Mai
> Attachments: HDFS-9047.000.patch
>
>
> This library is basically a mess:
> * It's not part of the mvn package
> * It's missing functionality and barely maintained
> * It's not in the precommit runs, so it doesn't get exercised regularly
> * It's not part of the unit tests (at least, that I can see)
> * It isn't documented in any official documentation
> But most importantly:  
> * It fails at its primary mission of being pure C (HDFS-3917 is STILL open)
> Let's cut our losses and just remove it.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-9498) Move code that tracks orphan blocks to BlockManagerSafeMode

2016-01-05 Thread Arpit Agarwal (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-9498?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15084004#comment-15084004
 ] 

Arpit Agarwal commented on HDFS-9498:
-

Thanks for the updated patch [~liuml07]. +1 from me. I will commit it by EOD 
tomorrow if [~anu] has no additional comments.

> Move code that tracks orphan blocks to BlockManagerSafeMode
> ---
>
> Key: HDFS-9498
> URL: https://issues.apache.org/jira/browse/HDFS-9498
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: namenode
>Reporter: Mingliang Liu
>Assignee: Mingliang Liu
> Attachments: HDFS-9498.000.patch, HDFS-9498.001.patch, 
> HDFS-9498.002.patch, HDFS-9498.003.patch, HDFS-9498.004.patch
>
>
> [HDFS-4015] counts and reports orphaned blocks 
> ({{numberOfBytesInFutureBlocks}}) in safe mode. It was implemented in 
> {{BlockManager}}. Per the discussion in [HDFS-9129], which introduces 
> {{BlockManagerSafeMode}}, we can move the code that maintains orphaned blocks 
> to this class.
> The leave-safe-mode path checks blocks with a future GS in {{FSNamesystem}}; 
> this code can also be moved to {{BlockManagerSafeMode}}.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HDFS-9498) Move code that tracks orphan blocks to BlockManagerSafeMode

2016-01-05 Thread Mingliang Liu (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-9498?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Mingliang Liu updated HDFS-9498:

Attachment: HDFS-9498.004.patch

The v4 patch addresses [~arpitagarwal]'s comments. Specifically, it:
  * replaces the confusing {{orphan blocks}} terminology with 
{{blocksWithFutureGenerationStamps}}
  * asserts {{hasWriteLock}} in {{startSecretManagerIfNecessary()}}, as 
{{running}} and {{startThreads}} should be synchronized correctly

There are no logic changes from the v3 patch.

> Move code that tracks orphan blocks to BlockManagerSafeMode
> ---
>
> Key: HDFS-9498
> URL: https://issues.apache.org/jira/browse/HDFS-9498
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: namenode
>Reporter: Mingliang Liu
>Assignee: Mingliang Liu
> Attachments: HDFS-9498.000.patch, HDFS-9498.001.patch, 
> HDFS-9498.002.patch, HDFS-9498.003.patch, HDFS-9498.004.patch
>
>
> [HDFS-4015] counts and reports orphaned blocks 
> ({{numberOfBytesInFutureBlocks}}) in safe mode. It was implemented in 
> {{BlockManager}}. Per the discussion in [HDFS-9129], which introduces 
> {{BlockManagerSafeMode}}, we can move the code that maintains orphaned blocks 
> to this class.
> The leave-safe-mode path checks blocks with a future GS in {{FSNamesystem}}; 
> this code can also be moved to {{BlockManagerSafeMode}}.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-9498) Move code that tracks orphan blocks to BlockManagerSafeMode

2016-01-05 Thread Arpit Agarwal (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-9498?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15083965#comment-15083965
 ] 

Arpit Agarwal commented on HDFS-9498:
-

Okay that makes sense. I missed the side effect of moving the 
setManualAndResourceLowSafeMode call.

I wish there was a more straightforward way to do it. The extra call to 
startSecretManagerIfNecessary looks out of place but your approach is the 
easiest fix for now. +1 with the {{orphanBlocks}} terminology fixed.

[~anu], does the v3 patch look okay to you?

> Move code that tracks orphan blocks to BlockManagerSafeMode
> ---
>
> Key: HDFS-9498
> URL: https://issues.apache.org/jira/browse/HDFS-9498
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: namenode
>Reporter: Mingliang Liu
>Assignee: Mingliang Liu
> Attachments: HDFS-9498.000.patch, HDFS-9498.001.patch, 
> HDFS-9498.002.patch, HDFS-9498.003.patch
>
>
> [HDFS-4015] counts and reports orphaned blocks 
> ({{numberOfBytesInFutureBlocks}}) in safe mode. It was implemented in 
> {{BlockManager}}. Per the discussion in [HDFS-9129], which introduces 
> {{BlockManagerSafeMode}}, we can move the code that maintains orphaned blocks 
> to this class.
> The leave-safe-mode path checks blocks with a future GS in {{FSNamesystem}}; 
> this code can also be moved to {{BlockManagerSafeMode}}.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-9613) Some improvement and clean up in distcp

2016-01-05 Thread Jing Zhao (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-9613?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15083958#comment-15083958
 ] 

Jing Zhao commented on HDFS-9613:
-

Thanks for the improvement, Kai. One question about the patch:
{code}
  /**
   * Only when checksum opt and block size are preserved while copying, do the
   * file checksums comparing, to avoid unnecessary checksum computing for
   * better performance.
   */
{code}

I'm not sure if this is correct if the source/target filesystems are not 
DistributedFileSystem, or if we use a new file checksum computation algorithm 
(e.g., HDFS-8430) which does not require the same block size.

> Some improvement and clean up in distcp
> ---
>
> Key: HDFS-9613
> URL: https://issues.apache.org/jira/browse/HDFS-9613
> Project: Hadoop HDFS
>  Issue Type: Improvement
>Reporter: Kai Zheng
>Assignee: Kai Zheng
>Priority: Minor
> Attachments: HDFS-9613-v1.patch, HDFS-9613-v2.patch
>
>
> While working on a related issue, it was noticed that there are some places 
> in {{distcp}} that could be improved and cleaned up. In particular, after a 
> file is copied to the target cluster, distcp checks whether the copied file is 
> intact. When checking, it is better to check the block size first and then the 
> checksum, because the latter is a little expensive.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-9605) Add links to failed volumes to explorer.html in HDFS Web UI

2016-01-05 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-9605?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15083956#comment-15083956
 ] 

Hudson commented on HDFS-9605:
--

FAILURE: Integrated in Hadoop-trunk-Commit #9051 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/9051/])
HDFS-9605. Add links to failed volumes to explorer.html in HDFS Web UI. 
(wheat9: rev dec8fedb65f6797c22af17ecc901b56a29836da3)
* hadoop-hdfs-project/hadoop-hdfs/src/main/webapps/hdfs/explorer.html
* hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt


> Add links to failed volumes to explorer.html in HDFS Web UI
> ---
>
> Key: HDFS-9605
> URL: https://issues.apache.org/jira/browse/HDFS-9605
> Project: Hadoop HDFS
>  Issue Type: Bug
>Reporter: Archana T
>Assignee: Archana T
>Priority: Minor
> Fix For: 2.8.0
>
> Attachments: HDFS-9605.patch
>
>
> In the NameNode UI,
> "tab-datanode-volume-failures" is missing from explorer.html



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-7779) Support changing ownership, group and replication in HDFS Web UI

2016-01-05 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-7779?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15083955#comment-15083955
 ] 

Hudson commented on HDFS-7779:
--

FAILURE: Integrated in Hadoop-trunk-Commit #9051 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/9051/])
HDFS-7779. Support changing ownership, group and replication in HDFS Web 
(wheat9: rev cea0972fa13c4c3f6d6a12179f7e65552d1ae873)
* hadoop-hdfs-project/hadoop-hdfs/src/main/webapps/hdfs/explorer.js
* hadoop-hdfs-project/hadoop-hdfs/src/main/webapps/hdfs/explorer.html
* hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt


> Support changing ownership, group and replication in HDFS Web UI
> 
>
> Key: HDFS-7779
> URL: https://issues.apache.org/jira/browse/HDFS-7779
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: Ravi Prakash
>Assignee: Ravi Prakash
> Fix For: 2.8.0
>
> Attachments: Chmod.png, Chown.png, HDFS-7779.01.patch, 
> HDFS-7779.02.patch, HDFS-7779.03.patch, HDFS-7779.04.patch
>
>
> This JIRA converts the owner, group and replication fields into 
> contenteditable fields which can be modified by the user from the browser 
> itself. It uses WebHDFS to effect these changes.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-9609) libhdfs++: Allow seek to EOF

2016-01-05 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-9609?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15083944#comment-15083944
 ] 

Hadoop QA commented on HDFS-9609:
-

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 0s 
{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s 
{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:red}-1{color} | {color:red} test4tests {color} | {color:red} 0m 0s 
{color} | {color:red} The patch doesn't appear to include any new or modified 
tests. Please justify why no new tests are needed for this patch. Also please 
list what manual steps were performed to verify this patch. {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 10m 
0s {color} | {color:green} HDFS-8707 passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 4m 8s 
{color} | {color:green} HDFS-8707 passed with JDK v1.8.0_66 {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 4m 1s 
{color} | {color:green} HDFS-8707 passed with JDK v1.7.0_91 {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 0m 17s 
{color} | {color:green} HDFS-8707 passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
15s {color} | {color:green} HDFS-8707 passed {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 0m 
10s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 3m 55s 
{color} | {color:green} the patch passed with JDK v1.8.0_66 {color} |
| {color:green}+1{color} | {color:green} cc {color} | {color:green} 3m 55s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 3m 55s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 3m 51s 
{color} | {color:green} the patch passed with JDK v1.7.0_91 {color} |
| {color:green}+1{color} | {color:green} cc {color} | {color:green} 3m 51s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 3m 51s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 0m 13s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
9s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 
0s {color} | {color:green} Patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 4m 31s 
{color} | {color:green} hadoop-hdfs-native-client in the patch passed with JDK 
v1.8.0_66. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 4m 38s 
{color} | {color:green} hadoop-hdfs-native-client in the patch passed with JDK 
v1.7.0_91. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 
23s {color} | {color:green} Patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 49m 16s {color} 
| {color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:0cf5e66 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12780326/HDFS-9609.HDFS-8707.000.patch
 |
| JIRA Issue | HDFS-9609 |
| Optional Tests |  asflicense  compile  cc  mvnsite  javac  unit  |
| uname | Linux 032e86e7cde9 3.13.0-36-lowlatency #63-Ubuntu SMP PREEMPT Wed 
Sep 3 21:56:12 UTC 2014 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh 
|
| git revision | HDFS-8707 / d9f0074 |
| Default Java | 1.7.0_91 |
| Multi-JDK versions |  /usr/lib/jvm/java-8-oracle:1.8.0_66 
/usr/lib/jvm/java-7-openjdk-amd64:1.7.0_91 |
| JDK v1.7.0_91  Test Results | 
https://builds.apache.org/job/PreCommit-HDFS-Build/14035/testReport/ |
| modules | C: hadoop-hdfs-project/hadoop-hdfs-native-client U: 
hadoop-hdfs-project/hadoop-hdfs-native-client |
| Max memory used | 79MB |
| Powered by | Apache Yetus 0.2.0-SNAPSHOT   http://yetus.apache.org |
| Console output | 
https://builds.apache.org/job/PreCommit-HDFS-Build/14035/console |


This message was automatically generated.



> libhdfs++: Allow seek to EOF
> 
>
> Key: HDFS-9609
> URL: https://issues.apache.org/jira/browse/HDFS-9609
> Project: Hadoop HDFS
>  Issue Type

[jira] [Commented] (HDFS-9498) Move code that tracks orphan blocks to BlockManagerSafeMode

2016-01-05 Thread Mingliang Liu (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-9498?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15083943#comment-15083943
 ] 

Mingliang Liu commented on HDFS-9498:
-

Thanks for the further discussion.

I should have written my last comment like this:
{quote}
Suppose the NN is in manual safe mode *and there are no blocks with future 
GS*; blockManager.leaveSafeMode(force) will not be able to start the secret 
manager *by calling {{startSecretManagerIfNecessary()}}*.
{quote}
The reason is that {{startSecretManagerIfNecessary()}} checks 
{{isInSafeMode()}} first. This check fails because the manual safe mode has 
not been reset yet. So after we clear the manual and resource-low safe mode, 
we may need to call startSecretManagerIfNecessary again.
{code}
+  if (blockManager.leaveSafeMode(force)) { // block manager will not start 
secret manager successfully in case of manual safe mode
+setManualAndResourceLowSafeMode(false, false);
+startSecretManagerIfNecessary(); // call it again after we clear 
manual and resource low safe mode.
+  }
{code}

For cases where the block manager fails to leave safe mode because of future 
blocks, the code is fine without calling {{startSecretManagerIfNecessary}}, as 
it would fail to start the secret manager anyway.

You're right that {{startSecretManagerIfNecessary}} is currently protected by 
the NS lock in all callers. Perhaps adding an assertion {{assert 
hasWriteLock()}} would help keep it from being misused? A sketch of what that 
could look like follows.
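
(A sketch only, not a patch: the body is the existing method, quoted elsewhere 
in this thread, with only the assertion added.)
{code}
  @Override
  public void startSecretManagerIfNecessary() {
    // All current callers hold the namesystem write lock; assert it so new
    // call sites cannot silently race on dtSecretManager state.
    assert hasWriteLock() : "must hold the namesystem write lock";
    boolean shouldRun = shouldUseDelegationTokens() &&
        !isInSafeMode() && getEditLog().isOpenForWrite();
    boolean running = dtSecretManager.isRunning();
    if (shouldRun && !running) {
      startSecretManager();
    }
  }
{code}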

> Move code that tracks orphan blocks to BlockManagerSafeMode
> ---
>
> Key: HDFS-9498
> URL: https://issues.apache.org/jira/browse/HDFS-9498
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: namenode
>Reporter: Mingliang Liu
>Assignee: Mingliang Liu
> Attachments: HDFS-9498.000.patch, HDFS-9498.001.patch, 
> HDFS-9498.002.patch, HDFS-9498.003.patch
>
>
> [HDFS-4015] counts and reports orphaned blocks 
> ({{numberOfBytesInFutureBlocks}}) in safe mode. It was implemented in 
> {{BlockManager}}. Per the discussion in [HDFS-9129], which introduces 
> {{BlockManagerSafeMode}}, we can move the code that maintains orphaned 
> blocks to this class.
> The safe mode exit path also checks for blocks with future GS in 
> {{FSNamesystem}}; this code can be moved to {{BlockManagerSafeMode}} as well.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-9612) DistCp worker threads are not terminated after jobs are done.

2016-01-05 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-9612?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15083934#comment-15083934
 ] 

Hadoop QA commented on HDFS-9612:
-

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 0s 
{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s 
{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green} 0m 
0s {color} | {color:green} The patch appears to include 1 new or modified test 
files. {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 8m 
20s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 15s 
{color} | {color:green} trunk passed with JDK v1.8.0_66 {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 17s 
{color} | {color:green} trunk passed with JDK v1.7.0_91 {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 
10s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 0m 23s 
{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
14s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 0m 
32s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 13s 
{color} | {color:green} trunk passed with JDK v1.8.0_66 {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 16s 
{color} | {color:green} trunk passed with JDK v1.7.0_91 {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 0m 
18s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 13s 
{color} | {color:green} the patch passed with JDK v1.8.0_66 {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 0m 13s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 15s 
{color} | {color:green} the patch passed with JDK v1.7.0_91 {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 0m 15s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 
9s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 0m 22s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
11s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 
0s {color} | {color:green} Patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 0m 
37s {color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} javadoc {color} | {color:red} 1m 7s 
{color} | {color:red} hadoop-tools_hadoop-distcp-jdk1.8.0_66 with JDK v1.8.0_66 
generated 1 new issues (was 51, now 51). {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 11s 
{color} | {color:green} the patch passed with JDK v1.8.0_66 {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 13s 
{color} | {color:green} the patch passed with JDK v1.7.0_91 {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 7m 52s 
{color} | {color:green} hadoop-distcp in the patch passed with JDK v1.8.0_66. 
{color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 7m 36s 
{color} | {color:green} hadoop-distcp in the patch passed with JDK v1.7.0_91. 
{color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 
18s {color} | {color:green} Patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 30m 3s {color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:0ca8df7 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12780631/HDFS-9612.004.patch |
| JIRA Issue | HDFS-9612 |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  findbugs  checkstyle  |
| uname | Linux 34d7e6ab87e3 3.13.0-36-lowlatency #63-Ubuntu SMP PREEMPT Wed 
Sep 3 21:56:12 UTC 2014 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | mav

[jira] [Commented] (HDFS-9047) deprecate libwebhdfs in branch-2; remove from trunk

2016-01-05 Thread Jing Zhao (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-9047?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15083926#comment-15083926
 ] 

Jing Zhao commented on HDFS-9047:
-

+1 on removing libwebhdfs from trunk and branch-2. The 000 patch looks good to 
me.

> deprecate libwebhdfs in branch-2; remove from trunk
> ---
>
> Key: HDFS-9047
> URL: https://issues.apache.org/jira/browse/HDFS-9047
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: webhdfs
>Reporter: Allen Wittenauer
>Assignee: Haohui Mai
> Attachments: HDFS-9047.000.patch
>
>
> This library is basically a mess:
> * It's not part of the mvn package
> * It's missing functionality and barely maintained
> * It's not in the precommit runs, so it doesn't get exercised regularly
> * It's not part of the unit tests (at least, that I can see)
> * It isn't documented in any official documentation
> But most importantly:  
> * It fails at its primary mission of being pure C (HDFS-3917 is STILL open)
> Let's cut our losses and just remove it.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-9525) hadoop utilities need to support provided delegation tokens

2016-01-05 Thread Allen Wittenauer (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-9525?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15083924#comment-15083924
 ] 

Allen Wittenauer commented on HDFS-9525:


Oozie was just an example.

If I'm firing off several jobs at once via threading, being able to set this 
as a config value instead of an env var is significantly easier because it 
means I don't have to lock around it. A sketch of the difference is below.
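
(Illustrative sketch only; the class is hypothetical, and the config key name 
{{hadoop.token.files}} is an assumption -- check the patch for the actual 
name.)
{code}
import java.util.List;
import java.util.concurrent.ExecutorService;
import org.apache.hadoop.conf.Configuration;

class PerJobTokens {
  // A Configuration is per-job state, so concurrent job launches each get
  // their own token file. An env var such as HADOOP_TOKEN_FILE_LOCATION is
  // process-global and would need external locking between threads.
  static void launchAll(ExecutorService pool, List<String> tokenFiles) {
    for (String tokenFile : tokenFiles) {
      pool.submit(() -> {
        Configuration conf = new Configuration();
        conf.set("hadoop.token.files", tokenFile); // no process-wide mutation
        // ... build and submit the job with this conf ...
      });
    }
  }
}
{code}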

> hadoop utilities need to support provided delegation tokens
> ---
>
> Key: HDFS-9525
> URL: https://issues.apache.org/jira/browse/HDFS-9525
> Project: Hadoop HDFS
>  Issue Type: New Feature
>  Components: security
>Affects Versions: 3.0.0
>Reporter: Allen Wittenauer
>Assignee: HeeSoo Kim
>Priority: Blocker
> Fix For: 3.0.0
>
> Attachments: HDFS-7984.001.patch, HDFS-7984.002.patch, 
> HDFS-7984.003.patch, HDFS-7984.004.patch, HDFS-7984.005.patch, 
> HDFS-7984.006.patch, HDFS-7984.007.patch, HDFS-7984.patch, 
> HDFS-9525.008.patch, HDFS-9525.branch-2.008.patch
>
>
> When using the webhdfs:// filesystem (especially from distcp), we need the 
> ability to inject a delegation token rather than have webhdfs initialize its 
> own. This would allow cross-authentication-zone file system accesses.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Comment Edited] (HDFS-7779) Support changing ownership, group and replication in HDFS Web UI

2016-01-05 Thread Haohui Mai (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-7779?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15083875#comment-15083875
 ] 

Haohui Mai edited comment on HDFS-7779 at 1/5/16 9:57 PM:
--

I've committed the patch to trunk, branch-2 and branch-2.8. Thanks [~raviprak] 
for the contribution.


was (Author: wheat9):
I've committed the patch to trunk and branch-2. Thanks [~raviprak] for the 
contribution.

> Support changing ownership, group and replication in HDFS Web UI
> 
>
> Key: HDFS-7779
> URL: https://issues.apache.org/jira/browse/HDFS-7779
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: Ravi Prakash
>Assignee: Ravi Prakash
> Fix For: 2.8.0
>
> Attachments: Chmod.png, Chown.png, HDFS-7779.01.patch, 
> HDFS-7779.02.patch, HDFS-7779.03.patch, HDFS-7779.04.patch
>
>
> This JIRA converts the owner, group and replication fields into 
> contenteditable fields which can be modified by the user from the browser 
> itself. It uses WebHDFS to effect these changes.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Comment Edited] (HDFS-9605) Add links to failed volumes to explorer.html in HDFS Web UI

2016-01-05 Thread Haohui Mai (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-9605?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15083914#comment-15083914
 ] 

Haohui Mai edited comment on HDFS-9605 at 1/5/16 9:56 PM:
--

I've committed the patch to trunk, branch-2 and branch-2.8. Thanks [~archanat] 
for the contribution.


was (Author: wheat9):
I've committed the patch to trunk and branch-2. Thanks [~archanat] for the 
contribution.

> Add links to failed volumes to explorer.html in HDFS Web UI
> ---
>
> Key: HDFS-9605
> URL: https://issues.apache.org/jira/browse/HDFS-9605
> Project: Hadoop HDFS
>  Issue Type: Bug
>Reporter: Archana T
>Assignee: Archana T
>Priority: Minor
> Fix For: 2.8.0
>
> Attachments: HDFS-9605.patch
>
>
> In the NameNode UI, "tab-datanode-volume-failures" is missing from 
> explorer.html.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HDFS-9605) Add links to failed volumes to explorer.html in HDFS Web UI

2016-01-05 Thread Haohui Mai (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-9605?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Haohui Mai updated HDFS-9605:
-
   Resolution: Fixed
 Hadoop Flags: Reviewed
Fix Version/s: 2.8.0
   Status: Resolved  (was: Patch Available)

I've committed the patch to trunk and branch-2. Thanks [~archanat] for the 
contribution.

> Add links to failed volumes to explorer.html in HDFS Web UI
> ---
>
> Key: HDFS-9605
> URL: https://issues.apache.org/jira/browse/HDFS-9605
> Project: Hadoop HDFS
>  Issue Type: Bug
>Reporter: Archana T
>Assignee: Archana T
>Priority: Minor
> Fix For: 2.8.0
>
> Attachments: HDFS-9605.patch
>
>
> In the NameNode UI, "tab-datanode-volume-failures" is missing from 
> explorer.html.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-9610) cmake tests don't fail when they should?

2016-01-05 Thread Allen Wittenauer (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-9610?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15083913#comment-15083913
 ] 

Allen Wittenauer commented on HDFS-9610:


When I hit it, yes, it was.  Let me see if it is still failing for me.  I think 
I was playing around with testing HDFS-9325 against Yetus.

> cmake tests don't fail when they should?
> 
>
> Key: HDFS-9610
> URL: https://issues.apache.org/jira/browse/HDFS-9610
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: hdfs-client
>Reporter: Allen Wittenauer
> Attachments: LastTest.log
>
>
> While playing around with adding ctest output support to Yetus, I stumbled 
> upon a case where the tests throw errors left and right but claim success.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HDFS-9605) Add links to failed volumes to explorer.html in HDFS Web UI

2016-01-05 Thread Haohui Mai (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-9605?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Haohui Mai updated HDFS-9605:
-
Summary: Add links to failed volumes to explorer.html in HDFS Web UI  (was: 
"tab-datanode-volume-failures" is missing from explorer.html)

> Add links to failed volumes to explorer.html in HDFS Web UI
> ---
>
> Key: HDFS-9605
> URL: https://issues.apache.org/jira/browse/HDFS-9605
> Project: Hadoop HDFS
>  Issue Type: Bug
>Reporter: Archana T
>Assignee: Archana T
>Priority: Minor
> Attachments: HDFS-9605.patch
>
>
> In the NameNode UI, "tab-datanode-volume-failures" is missing from 
> explorer.html.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-9605) "tab-datanode-volume-failures" is missing from explorer.html

2016-01-05 Thread Haohui Mai (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-9605?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15083896#comment-15083896
 ] 

Haohui Mai commented on HDFS-9605:
--

+1. Will commit shortly.

> "tab-datanode-volume-failures" is missing from explorer.html
> 
>
> Key: HDFS-9605
> URL: https://issues.apache.org/jira/browse/HDFS-9605
> Project: Hadoop HDFS
>  Issue Type: Bug
>Reporter: Archana T
>Assignee: Archana T
>Priority: Minor
> Attachments: HDFS-9605.patch
>
>
> In the NameNode UI, "tab-datanode-volume-failures" is missing from 
> explorer.html.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-9498) Move code that tracks orphan blocks to BlockManagerSafeMode

2016-01-05 Thread Arpit Agarwal (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-9498?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15083883#comment-15083883
 ] 

Arpit Agarwal commented on HDFS-9498:
-

Thanks for the clarification [~liuml07].

bq. Suppose the NN is in manual safe mode, blockManager.leaveSafeMode(force) 
will not be able to start the secret manager. 
If safe mode exit fails for any reason, then attempting to start the secret 
manager will also fail immediately due to the check you pointed out. Also, the 
only failure case I see in {{leaveSafeMode}} is when there are future blocks.

{{startSecretManagerIfNecessary}} looks like it was intended to be idempotent, 
but the synchronization is fishy. The object lock is dropped between sampling 
the {{running}} field and invoking {{startThreads}}, which sets it to true. 
All callers currently hold the namesystem write lock, so it works out fine in 
practice. We should file a separate bug to either fix the secret manager 
synchronization or remove it and document that the caller must synchronize 
invocations. The gap looks roughly like this:
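
(Sketch of the window only; {{isRunning()}} and {{startThreads()}} are the 
existing secret manager methods, each locking on its own.)
{code}
boolean running = dtSecretManager.isRunning(); // lock taken and released
// <-- without an outer lock, another thread could call startThreads() here
if (shouldRun && !running) {
  dtSecretManager.startThreads();              // lock taken again
}
{code}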

> Move code that tracks orphan blocks to BlockManagerSafeMode
> ---
>
> Key: HDFS-9498
> URL: https://issues.apache.org/jira/browse/HDFS-9498
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: namenode
>Reporter: Mingliang Liu
>Assignee: Mingliang Liu
> Attachments: HDFS-9498.000.patch, HDFS-9498.001.patch, 
> HDFS-9498.002.patch, HDFS-9498.003.patch
>
>
> [HDFS-4015] counts and reports orphaned blocks 
> ({{numberOfBytesInFutureBlocks}}) in safe mode. It was implemented in 
> {{BlockManager}}. Per the discussion in [HDFS-9129], which introduces 
> {{BlockManagerSafeMode}}, we can move the code that maintains orphaned 
> blocks to this class.
> The safe mode exit path also checks for blocks with future GS in 
> {{FSNamesystem}}; this code can be moved to {{BlockManagerSafeMode}} as well.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-9525) hadoop utilities need to support provided delegation tokens

2016-01-05 Thread Haohui Mai (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-9525?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15083881#comment-15083881
 ] 

Haohui Mai commented on HDFS-9525:
--

bq. No. It's extremely useful to be able to do this from a workflow engine 
e.g., Oozie.

I'm confused. Why is Oozie able to set the configuration but not the 
environment variable? From the mechanism point of view they are equivalent. It 
only makes a difference if Oozie can support just a single set of 
configurations for every workflow.

> hadoop utilities need to support provided delegation tokens
> ---
>
> Key: HDFS-9525
> URL: https://issues.apache.org/jira/browse/HDFS-9525
> Project: Hadoop HDFS
>  Issue Type: New Feature
>  Components: security
>Affects Versions: 3.0.0
>Reporter: Allen Wittenauer
>Assignee: HeeSoo Kim
>Priority: Blocker
> Fix For: 3.0.0
>
> Attachments: HDFS-7984.001.patch, HDFS-7984.002.patch, 
> HDFS-7984.003.patch, HDFS-7984.004.patch, HDFS-7984.005.patch, 
> HDFS-7984.006.patch, HDFS-7984.007.patch, HDFS-7984.patch, 
> HDFS-9525.008.patch, HDFS-9525.branch-2.008.patch
>
>
> When using the webhdfs:// filesystem (especially from distcp), we need the 
> ability to inject a delegation token rather than have webhdfs initialize its 
> own. This would allow cross-authentication-zone file system accesses.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HDFS-7779) Support changing ownership, group and replication in HDFS Web UI

2016-01-05 Thread Haohui Mai (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-7779?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Haohui Mai updated HDFS-7779:
-
   Resolution: Fixed
 Hadoop Flags: Reviewed
Fix Version/s: 2.8.0
   Status: Resolved  (was: Patch Available)

I've committed the patch to trunk and branch-2. Thanks [~raviprak] for the 
contribution.

> Support changing ownership, group and replication in HDFS Web UI
> 
>
> Key: HDFS-7779
> URL: https://issues.apache.org/jira/browse/HDFS-7779
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: Ravi Prakash
>Assignee: Ravi Prakash
> Fix For: 2.8.0
>
> Attachments: Chmod.png, Chown.png, HDFS-7779.01.patch, 
> HDFS-7779.02.patch, HDFS-7779.03.patch, HDFS-7779.04.patch
>
>
> This JIRA converts the owner, group and replication fields into 
> contenteditable fields which can be modified by the user from the browser 
> itself. It uses WebHDFS to effect these changes.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-9525) hadoop utilities need to support provided delegation tokens

2016-01-05 Thread Allen Wittenauer (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-9525?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15083877#comment-15083877
 ] 

Allen Wittenauer commented on HDFS-9525:


bq. Does it make more sense to extend `HADOOP_TOKEN_FILE_LOCATION` to support 
multiple token files instead of introducing a new configuration variable?

No.  It's extremely useful to be able to do this from a workflow engine e.g., 
Oozie.

> hadoop utilities need to support provided delegation tokens
> ---
>
> Key: HDFS-9525
> URL: https://issues.apache.org/jira/browse/HDFS-9525
> Project: Hadoop HDFS
>  Issue Type: New Feature
>  Components: security
>Affects Versions: 3.0.0
>Reporter: Allen Wittenauer
>Assignee: HeeSoo Kim
>Priority: Blocker
> Fix For: 3.0.0
>
> Attachments: HDFS-7984.001.patch, HDFS-7984.002.patch, 
> HDFS-7984.003.patch, HDFS-7984.004.patch, HDFS-7984.005.patch, 
> HDFS-7984.006.patch, HDFS-7984.007.patch, HDFS-7984.patch, 
> HDFS-9525.008.patch, HDFS-9525.branch-2.008.patch
>
>
> When using the webhdfs:// filesystem (especially from distcp), we need the 
> ability to inject a delegation token rather than have webhdfs initialize its 
> own. This would allow cross-authentication-zone file system accesses.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-9047) deprecate libwebhdfs in branch-2; remove from trunk

2016-01-05 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-9047?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15083866#comment-15083866
 ] 

Hadoop QA commented on HDFS-9047:
-

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 0s 
{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s 
{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:red}-1{color} | {color:red} test4tests {color} | {color:red} 0m 0s 
{color} | {color:red} The patch doesn't appear to include any new or modified 
tests. Please justify why no new tests are needed for this patch. Also please 
list what manual steps were performed to verify this patch. {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 8m 
33s {color} | {color:green} trunk passed {color} |
| {color:red}-1{color} | {color:red} compile {color} | {color:red} 0m 18s 
{color} | {color:red} hadoop-hdfs-native-client in trunk failed with JDK 
v1.8.0_66. {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 31s 
{color} | {color:green} trunk passed with JDK v1.7.0_91 {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 0m 16s 
{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
12s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 9s 
{color} | {color:green} trunk passed with JDK v1.8.0_66 {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 11s 
{color} | {color:green} trunk passed with JDK v1.7.0_91 {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 0m 
10s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 22s 
{color} | {color:green} the patch passed with JDK v1.8.0_66 {color} |
| {color:green}+1{color} | {color:green} cc {color} | {color:green} 0m 22s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 0m 22s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 23s 
{color} | {color:green} the patch passed with JDK v1.7.0_91 {color} |
| {color:green}+1{color} | {color:green} cc {color} | {color:green} 0m 23s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 0m 23s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 0m 13s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
9s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 
0s {color} | {color:green} Patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} xml {color} | {color:green} 0m 1s 
{color} | {color:green} The patch has no ill-formed XML file. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 7s 
{color} | {color:green} the patch passed with JDK v1.8.0_66 {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 8s 
{color} | {color:green} the patch passed with JDK v1.7.0_91 {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 0m 45s 
{color} | {color:green} hadoop-hdfs-native-client in the patch passed with JDK 
v1.8.0_66. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 0m 42s 
{color} | {color:green} hadoop-hdfs-native-client in the patch passed with JDK 
v1.7.0_91. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 
21s {color} | {color:green} Patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 16m 6s {color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:0ca8df7 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12780626/HDFS-9047.000.patch |
| JIRA Issue | HDFS-9047 |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  xml  cc  |
| uname | Linux b539dc327168 3.13.0-36-lowlatency #63-Ubuntu SMP PREEMPT Wed 
Sep 3 21:56:12 UTC 2014 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh 
|
| git revision | trunk / 28bd138 |
| Default Java | 1.7.0_91 |
| Multi-JDK versions |

[jira] [Updated] (HDFS-7779) Support changing ownership, group and replication in HDFS Web UI

2016-01-05 Thread Haohui Mai (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-7779?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Haohui Mai updated HDFS-7779:
-
Summary: Support changing ownership, group and replication in HDFS Web UI  
(was: Improve the HDFS Web UI browser to allow chowning / chgrp and setting 
replication)

> Support changing ownership, group and replication in HDFS Web UI
> 
>
> Key: HDFS-7779
> URL: https://issues.apache.org/jira/browse/HDFS-7779
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: Ravi Prakash
>Assignee: Ravi Prakash
> Attachments: Chmod.png, Chown.png, HDFS-7779.01.patch, 
> HDFS-7779.02.patch, HDFS-7779.03.patch, HDFS-7779.04.patch
>
>
> This JIRA converts the owner, group and replication fields into 
> contenteditable fields which can be modified by the user from the browser 
> itself. It uses WebHDFS to effect these changes.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-7779) Improve the HDFS Web UI browser to allow chowning / chgrp and setting replication

2016-01-05 Thread Haohui Mai (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-7779?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15083864#comment-15083864
 ] 

Haohui Mai commented on HDFS-7779:
--

+1. Committing it shortly.

> Improve the HDFS Web UI browser to allow chowning / chgrp and setting 
> replication
> -
>
> Key: HDFS-7779
> URL: https://issues.apache.org/jira/browse/HDFS-7779
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: Ravi Prakash
>Assignee: Ravi Prakash
> Attachments: Chmod.png, Chown.png, HDFS-7779.01.patch, 
> HDFS-7779.02.patch, HDFS-7779.03.patch, HDFS-7779.04.patch
>
>
> This JIRA converts the owner, group and replication fields into 
> contenteditable fields which can be modified by the user from the browser 
> itself. It uses WebHDFS to effect these changes.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HDFS-9612) DistCp worker threads are not terminated after jobs are done.

2016-01-05 Thread Wei-Chiu Chuang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-9612?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Wei-Chiu Chuang updated HDFS-9612:
--
Attachment: HDFS-9612.004.patch

Rev04: replace commons.logging with slf4j.

> DistCp worker threads are not terminated after jobs are done.
> -
>
> Key: HDFS-9612
> URL: https://issues.apache.org/jira/browse/HDFS-9612
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: distcp
>Affects Versions: 2.8.0
>Reporter: Wei-Chiu Chuang
>Assignee: Wei-Chiu Chuang
> Attachments: HDFS-9612.001.patch, HDFS-9612.002.patch, 
> HDFS-9612.003.patch, HDFS-9612.004.patch
>
>
> In HADOOP-11827, a producer-consumer style thread pool was introduced to 
> parallelize the task of listing files/directories.
> We have a use case where a distcp job is run during the commit phase of an 
> MR2 job. However, it was found that distcp does not terminate 
> ProducerConsumer thread pools properly. Because the threads are not 
> terminated, those MR2 jobs never finish.
> In the more typical use case where distcp runs as a standalone job, those 
> threads are killed when the Java process exits, so the leaked threads did 
> not become a problem. (A typical shutdown pattern is sketched below.)
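
(Illustrative JDK pattern only, not the actual DistCp patch; names are 
hypothetical.)
{code}
import java.util.concurrent.ExecutorService;
import java.util.concurrent.TimeUnit;

class PoolShutdown {
  // Worker pools must be shut down explicitly; otherwise non-daemon threads
  // keep the containing process (here, an MR2 commit phase) alive forever.
  static void shutdownPool(ExecutorService pool) throws InterruptedException {
    pool.shutdown();                                  // stop accepting work
    if (!pool.awaitTermination(30, TimeUnit.SECONDS)) {
      pool.shutdownNow();                             // interrupt stragglers
    }
  }
}
{code}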



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HDFS-9498) Move code that tracks orphan blocks to BlockManagerSafeMode

2016-01-05 Thread Mingliang Liu (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-9498?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Mingliang Liu updated HDFS-9498:

Description: 
[HDFS-4015] counts and reports orphaned blocks 
({{numberOfBytesInFutureBlocks}}) in safe mode. It was implemented in 
{{BlockManager}}. Per the discussion in [HDFS-9129], which introduces 
{{BlockManagerSafeMode}}, we can move the code that maintains orphaned blocks 
to this class.

The safe mode exit path also checks for blocks with future GS in 
{{FSNamesystem}}; this code can be moved to {{BlockManagerSafeMode}} as well.

  was:[HDFS-4015] counts and reports orphaned blocks 
({{numberOfBytesInFutureBlocks}}) in safe mode. It was implemented in 
{{BlockManager}}. Per the discussion in [HDFS-9129], which introduces 
{{BlockManagerSafeMode}}, we can move the code that maintains orphaned blocks 
to this class.


> Move code that tracks orphan blocks to BlockManagerSafeMode
> ---
>
> Key: HDFS-9498
> URL: https://issues.apache.org/jira/browse/HDFS-9498
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: namenode
>Reporter: Mingliang Liu
>Assignee: Mingliang Liu
> Attachments: HDFS-9498.000.patch, HDFS-9498.001.patch, 
> HDFS-9498.002.patch, HDFS-9498.003.patch
>
>
> [HDFS-4015] counts and reports orphaned blocks 
> ({{numberOfBytesInFutureBlocks}}) in safe mode. It was implemented in 
> {{BlockManager}}. Per the discussion in [HDFS-9129], which introduces 
> {{BlockManagerSafeMode}}, we can move the code that maintains orphaned 
> blocks to this class.
> The safe mode exit path also checks for blocks with future GS in 
> {{FSNamesystem}}; this code can be moved to {{BlockManagerSafeMode}} as well.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HDFS-9498) Move code that tracks orphan blocks to BlockManagerSafeMode

2016-01-05 Thread Mingliang Liu (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-9498?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Mingliang Liu updated HDFS-9498:

Target Version/s: 3.0.0  (was: 2.8.0)

> Move code that tracks orphan blocks to BlockManagerSafeMode
> ---
>
> Key: HDFS-9498
> URL: https://issues.apache.org/jira/browse/HDFS-9498
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: namenode
>Reporter: Mingliang Liu
>Assignee: Mingliang Liu
> Attachments: HDFS-9498.000.patch, HDFS-9498.001.patch, 
> HDFS-9498.002.patch, HDFS-9498.003.patch
>
>
> [HDFS-4015] counts and reports orphaned blocks 
> ({{numberOfBytesInFutureBlocks}}) in safe mode. It was implemented in 
> {{BlockManager}}. Per the discussion in [HDFS-9129], which introduces 
> {{BlockManagerSafeMode}}, we can move the code that maintains orphaned 
> blocks to this class.
> The safe mode exit path also checks for blocks with future GS in 
> {{FSNamesystem}}; this code can be moved to {{BlockManagerSafeMode}} as well.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-9610) cmake tests don't fail when they should?

2016-01-05 Thread James Clampffer (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-9610?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15083849#comment-15083849
 ] 

James Clampffer commented on HDFS-9610:
---

Wow, thanks for reporting this [~aw].  Was this failure deterministic?  I'll 
try and poke around.

> cmake tests don't fail when they should?
> 
>
> Key: HDFS-9610
> URL: https://issues.apache.org/jira/browse/HDFS-9610
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: hdfs-client
>Reporter: Allen Wittenauer
> Attachments: LastTest.log
>
>
> While playing around with adding ctest output support to Yetus, I stumbled 
> upon a case where the tests throw errors left and right but claim success.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-9525) hadoop utilities need to support provided delegation tokens

2016-01-05 Thread Haohui Mai (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-9525?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15083836#comment-15083836
 ] 

Haohui Mai commented on HDFS-9525:
--

Does it make more sense to extend `HADOOP_TOKEN_FILE_LOCATION` to support 
multiple token files instead of introducing a new configuration variable?

> hadoop utilities need to support provided delegation tokens
> ---
>
> Key: HDFS-9525
> URL: https://issues.apache.org/jira/browse/HDFS-9525
> Project: Hadoop HDFS
>  Issue Type: New Feature
>  Components: security
>Affects Versions: 3.0.0
>Reporter: Allen Wittenauer
>Assignee: HeeSoo Kim
>Priority: Blocker
> Fix For: 3.0.0
>
> Attachments: HDFS-7984.001.patch, HDFS-7984.002.patch, 
> HDFS-7984.003.patch, HDFS-7984.004.patch, HDFS-7984.005.patch, 
> HDFS-7984.006.patch, HDFS-7984.007.patch, HDFS-7984.patch, 
> HDFS-9525.008.patch, HDFS-9525.branch-2.008.patch
>
>
> When using the webhdfs:// filesystem (especially from distcp), we need the 
> ability to inject a delegation token rather than have webhdfs initialize its 
> own. This would allow cross-authentication-zone file system accesses.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-9609) libhdfs++: Allow seek to EOF

2016-01-05 Thread James Clampffer (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-9609?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15083820#comment-15083820
 ] 

James Clampffer commented on HDFS-9609:
---

Thanks for catching this [~bobthansen].  Looks like a simple fix.  I'll +1 and 
commit pending a good CI run. For reference, the boundary condition at issue 
is sketched below.
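
(Java sketch for illustration only; the actual code is C++, in 
FileHandleImpl::CheckSeekBounds.)
{code}
class SeekBounds {
  // Seeking to exactly fileLength (EOF) is a valid operation; the fencepost
  // bug rejects it by using a strict '<' where '<=' is needed.
  static boolean isValidSeekTarget(long offset, long fileLength) {
    return offset >= 0 && offset <= fileLength;
  }
}
{code}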

> libhdfs++: Allow seek to EOF
> 
>
> Key: HDFS-9609
> URL: https://issues.apache.org/jira/browse/HDFS-9609
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: hdfs-client
>Reporter: Bob Hansen
>Assignee: Bob Hansen
> Attachments: HDFS-9609.HDFS-8707.000.patch
>
>
> There is currently a fencepost error in FileHandleImpl::CheckSeekBounds that 
> rejects an attempt to seek to a value equal to file_length. This should be 
> an acceptable operation.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HDFS-9609) libhdfs++: Allow seek to EOF

2016-01-05 Thread James Clampffer (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-9609?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

James Clampffer updated HDFS-9609:
--
Status: Patch Available  (was: Open)

> libhdfs++: Allow seek to EOF
> 
>
> Key: HDFS-9609
> URL: https://issues.apache.org/jira/browse/HDFS-9609
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: hdfs-client
>Reporter: Bob Hansen
>Assignee: Bob Hansen
> Attachments: HDFS-9609.HDFS-8707.000.patch
>
>
> There is currently a fencepost error in FileHandleImpl::CheckSeekBounds that 
> rejects an attempt to seek to a value equal to file_length. This should be 
> an acceptable operation.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HDFS-9047) deprecate libwebhdfs in branch-2; remove from trunk

2016-01-05 Thread Haohui Mai (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-9047?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Haohui Mai updated HDFS-9047:
-
Status: Patch Available  (was: Open)

> deprecate libwebhdfs in branch-2; remove from trunk
> ---
>
> Key: HDFS-9047
> URL: https://issues.apache.org/jira/browse/HDFS-9047
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: webhdfs
>Reporter: Allen Wittenauer
>Assignee: Haohui Mai
> Attachments: HDFS-9047.000.patch
>
>
> This library is basically a mess:
> * It's not part of the mvn package
> * It's missing functionality and barely maintained
> * It's not in the precommit runs, so it doesn't get exercised regularly
> * It's not part of the unit tests (at least, that I can see)
> * It isn't documented in any official documentation
> But most importantly:  
> * It fails at its primary mission of being pure C (HDFS-3917 is STILL open)
> Let's cut our losses and just remove it.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HDFS-9047) deprecate libwebhdfs in branch-2; remove from trunk

2016-01-05 Thread Haohui Mai (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-9047?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Haohui Mai updated HDFS-9047:
-
Attachment: HDFS-9047.000.patch

> deprecate libwebhdfs in branch-2; remove from trunk
> ---
>
> Key: HDFS-9047
> URL: https://issues.apache.org/jira/browse/HDFS-9047
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: webhdfs
>Reporter: Allen Wittenauer
>Assignee: Haohui Mai
> Attachments: HDFS-9047.000.patch
>
>
> This library is basically a mess:
> * It's not part of the mvn package
> * It's missing functionality and barely maintained
> * It's not in the precommit runs, so it doesn't get exercised regularly
> * It's not part of the unit tests (at least, that I can see)
> * It isn't documented in any official documentation
> But most importantly:  
> * It fails at its primary mission of being pure C (HDFS-3917 is STILL open)
> Let's cut our losses and just remove it.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Assigned] (HDFS-9047) deprecate libwebhdfs in branch-2; remove from trunk

2016-01-05 Thread Haohui Mai (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-9047?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Haohui Mai reassigned HDFS-9047:


Assignee: Haohui Mai

> deprecate libwebhdfs in branch-2; remove from trunk
> ---
>
> Key: HDFS-9047
> URL: https://issues.apache.org/jira/browse/HDFS-9047
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: webhdfs
>Reporter: Allen Wittenauer
>Assignee: Haohui Mai
>
> This library is basically a mess:
> * It's not part of the mvn package
> * It's missing functionality and barely maintained
> * It's not in the precommit runs, so it doesn't get exercised regularly
> * It's not part of the unit tests (at least, that I can see)
> * It isn't documented in any official documentation
> But most importantly:  
> * It fails at its primary mission of being pure C (HDFS-3917 is STILL open)
> Let's cut our losses and just remove it.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HDFS-9525) hadoop utilities need to support provided delegation tokens

2016-01-05 Thread HeeSoo Kim (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-9525?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

HeeSoo Kim updated HDFS-9525:
-
Attachment: HDFS-9525.branch-2.008.patch

Patch for branch-2.

> hadoop utilities need to support provided delegation tokens
> ---
>
> Key: HDFS-9525
> URL: https://issues.apache.org/jira/browse/HDFS-9525
> Project: Hadoop HDFS
>  Issue Type: New Feature
>  Components: security
>Affects Versions: 3.0.0
>Reporter: Allen Wittenauer
>Assignee: HeeSoo Kim
>Priority: Blocker
> Fix For: 3.0.0
>
> Attachments: HDFS-7984.001.patch, HDFS-7984.002.patch, 
> HDFS-7984.003.patch, HDFS-7984.004.patch, HDFS-7984.005.patch, 
> HDFS-7984.006.patch, HDFS-7984.007.patch, HDFS-7984.patch, 
> HDFS-9525.008.patch, HDFS-9525.branch-2.008.patch
>
>
> When using the webhdfs:// filesystem (especially from distcp), we need the 
> ability to inject a delegation token rather than have webhdfs initialize its 
> own. This would allow cross-authentication-zone file system accesses.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-9047) deprecate libwebhdfs in branch-2; remove from trunk

2016-01-05 Thread Haohui Mai (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-9047?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15083801#comment-15083801
 ] 

Haohui Mai commented on HDFS-9047:
--

The fact that the code has not built out-of-the-box for several consecutive 
releases is worrisome and, more importantly, really shameful -- no release 
should contain something that does not build. And as a community we failed to 
figure it out for a year.

+1 on removing it on both branch-2 and trunk.

> deprecate libwebhdfs in branch-2; remove from trunk
> ---
>
> Key: HDFS-9047
> URL: https://issues.apache.org/jira/browse/HDFS-9047
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: webhdfs
>Reporter: Allen Wittenauer
>
> This library is basically a mess:
> * It's not part of the mvn package
> * It's missing functionality and barely maintained
> * It's not in the precommit runs, so it doesn't get exercised regularly
> * It's not part of the unit tests (at least, that I can see)
> * It isn't documented in any official documentation
> But most importantly:  
> * It fails at its primary mission of being pure C (HDFS-3917 is STILL open)
> Let's cut our losses and just remove it.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-9498) Move code that tracks orphan blocks to BlockManagerSafeMode

2016-01-05 Thread Mingliang Liu (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-9498?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15083800#comment-15083800
 ] 

Mingliang Liu commented on HDFS-9498:
-

Good suggestion. I'll update the patch.

> Move code that tracks orphan blocks to BlockManagerSafeMode
> ---
>
> Key: HDFS-9498
> URL: https://issues.apache.org/jira/browse/HDFS-9498
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: namenode
>Reporter: Mingliang Liu
>Assignee: Mingliang Liu
> Attachments: HDFS-9498.000.patch, HDFS-9498.001.patch, 
> HDFS-9498.002.patch, HDFS-9498.003.patch
>
>
> [HDFS-4015] counts and reports orphaned blocks 
> ({{numberOfBytesInFutureBlocks}}) in safe mode. It was implemented in 
> {{BlockManager}}. Per the discussion in [HDFS-9129], which introduces 
> {{BlockManagerSafeMode}}, we can move the code that maintains orphaned 
> blocks to this class.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-9498) Move code that tracks orphan blocks to BlockManagerSafeMode

2016-01-05 Thread Mingliang Liu (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-9498?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15083799#comment-15083799
 ] 

Mingliang Liu commented on HDFS-9498:
-

Thanks [~arpitagarwal] for your comment.

Suppose the NN is in manual safe mode; {{blockManager.leaveSafeMode(force)}} 
will not be able to start the secret manager. The reason is that starting it 
requires all safe modes to be off (manual, resource-low, and startup safe 
mode), while the manual safe mode is still on.
{code}
  @Override
  public void startSecretManagerIfNecessary() {
boolean shouldRun = shouldUseDelegationTokens() &&
  !isInSafeMode() && getEditLog().isOpenForWrite();
boolean running = dtSecretManager.isRunning();
if (shouldRun && !running) {
  startSecretManager();
}
  }
{code}
This was not a problem before, as we called 
{{setManualAndResourceLowSafeMode(false, false)}} before 
{{blockManager.leaveSafeMode(true)}}.

I thought {{startSecretManagerIfNecessary()}} is idempotent, so it's safe to 
call it again _IfNecessary_.

We may need a better implementation; ideas are very welcome. The goal is that 
the NN should not leave safe mode (either startup or manual safe mode) without 
the {{force}} option when there are orphan blocks.

> Move code that tracks orphan blocks to BlockManagerSafeMode
> ---
>
> Key: HDFS-9498
> URL: https://issues.apache.org/jira/browse/HDFS-9498
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: namenode
>Reporter: Mingliang Liu
>Assignee: Mingliang Liu
> Attachments: HDFS-9498.000.patch, HDFS-9498.001.patch, 
> HDFS-9498.002.patch, HDFS-9498.003.patch
>
>
> [HDFS-4015] counts and reports orphaned blocks 
> ({{numberOfBytesInFutureBlocks}}) in safe mode. It was implemented in 
> {{BlockManager}}. Per the discussion in [HDFS-9129], which introduces 
> {{BlockManagerSafeMode}}, we can move the code that maintains orphaned 
> blocks to this class.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-9498) Move code that tracks orphan blocks to BlockManagerSafeMode

2016-01-05 Thread Arpit Agarwal (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-9498?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15083787#comment-15083787
 ] 

Arpit Agarwal commented on HDFS-9498:
-

Also, a nitpick: the new orphan block terminology sounds confusing. Orphan 
blocks could be taken to mean blocks that don't belong to any file, not 
necessarily blocks with future generation stamps. Can we replace 
{{orphanBlocks}} with {{blocksWithFutureGenerationStamps}} or similar?

> Move code that tracks orphan blocks to BlockManagerSafeMode
> ---
>
> Key: HDFS-9498
> URL: https://issues.apache.org/jira/browse/HDFS-9498
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: namenode
>Reporter: Mingliang Liu
>Assignee: Mingliang Liu
> Attachments: HDFS-9498.000.patch, HDFS-9498.001.patch, 
> HDFS-9498.002.patch, HDFS-9498.003.patch
>
>
> [HDFS-4015] counts and reports orphaned blocks 
> ({{numberOfBytesInFutureBlocks}}) in safe mode. It was implemented in 
> {{BlockManager}}. Per the discussion in [HDFS-9129], which introduces 
> {{BlockManagerSafeMode}}, we can move the code that maintains orphaned 
> blocks to this class.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HDFS-9325) Allow the location of hadoop source tree resources to be passed to CMake during a build.

2016-01-05 Thread James Clampffer (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-9325?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

James Clampffer updated HDFS-9325:
--
Resolution: Fixed
Status: Resolved  (was: Patch Available)

Committed to HDFS-8707.  Thanks for the contribution [~bobthansen]!

> Allow the location of hadoop source tree resources to be passed to CMake 
> during a build.
> 
>
> Key: HDFS-9325
> URL: https://issues.apache.org/jira/browse/HDFS-9325
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: hdfs-client
>Reporter: James Clampffer
>Assignee: Bob Hansen
> Attachments: HDFS-9325.HDFS-8707.001.patch, 
> HDFS-9325.HDFS-8707.002.patch, HDFS-9325.HDFS-8707.003.patch, 
> HDFS-9325.HDFS-8707.004.patch, HDFS-9325.HDFS-8707.005.patch
>
>
> It would be nice if CMake could take an optional parameter with the location 
> of hdfs.h, which typically lives at libhdfs/includes/hdfs/hdfs.h; if the 
> parameter is not given, the build would default to that location. This would 
> be useful for projects using libhdfs++ that gather the headers defining 
> library APIs in a single location.
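
(Usage sketch; the cache variable name is hypothetical -- see the patch for 
the actual one.)
{code}
# Override where hdfs.h is found; when the variable is not set, the build
# falls back to the in-tree libhdfs/includes/hdfs/hdfs.h default.
cmake -DHDFS_HEADER_LOCATION=/opt/myproject/include ..
{code}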



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-9465) No header files in mvn package

2016-01-05 Thread Allen Wittenauer (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-9465?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15083756#comment-15083756
 ] 

Allen Wittenauer commented on HDFS-9465:


bq. How does the Apache Hadoop community deal with preventing regressions on 
supported platforms?

I estimate the vast majority of developers are working on OS X as their 
primary development platform, so such regressions tend to get caught quickly 
when they occur.

> No header files in mvn package
> --
>
> Key: HDFS-9465
> URL: https://issues.apache.org/jira/browse/HDFS-9465
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: hdfs-client
>Reporter: Allen Wittenauer
>Priority: Blocker
>
> The current build appears to only include the shared library and no header 
> files to actually use the library in the final maven binary build.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-8286) Scaling out the namespace using KV store

2016-01-05 Thread Haohui Mai (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-8286?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15083750#comment-15083750
 ] 

Haohui Mai commented on HDFS-8286:
--

I have pushed the prototype that corresponds to my [Hadoop summit 
talk|http://www.slideshare.net/HaohuiMai/partial-nshadoopsummit2015] to the 
[feature-HDFS-8286|https://github.com/apache/hadoop/tree/feature-HDFS-8286] 
branch.

> Scaling out the namespace using KV store
> 
>
> Key: HDFS-8286
> URL: https://issues.apache.org/jira/browse/HDFS-8286
> Project: Hadoop HDFS
>  Issue Type: Improvement
>Reporter: Haohui Mai
>Assignee: Haohui Mai
> Attachments: hdfs-kv-design.pdf
>
>
> Currently the NN keeps the namespace in the memory. To improve the 
> scalability of the namespace, users can scale up by using more RAM or scale 
> out using Federation (i.e., statically partitioning the namespace).
> We would like to remove the limitation of scaling the global namespace. Our 
> vision is that HDFS should adopt a scalable underlying architecture that 
> allows the global namespace to scale linearly.
> We propose to implement the HDFS namespace on top of a key-value (KV) store. 
> Adopting the KV store interfaces allows HDFS to leverage the capabilities of 
> modern KV stores and to become much easier to scale. Going forward, the 
> architecture allows distributing the namespace across multiple machines, or 
> storing only the working set in the memory (HDFS-5389), both of which allow 
> HDFS to manage billions of files using the commodity hardware available today.
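
For readers unfamiliar with the general idea, here is a toy Java illustration of 
one common way a hierarchical namespace can be mapped onto a KV store. This is 
only a hedged sketch, not the HDFS-8286 design; the {{KVStore}} interface and 
{{inodeKey}} helper are hypothetical:

{code:java}
import java.io.IOException;
import java.nio.ByteBuffer;
import java.nio.charset.StandardCharsets;

// Hypothetical minimal KV interface; real designs add scans, batching, etc.
interface KVStore {
  byte[] get(byte[] key) throws IOException;
  void put(byte[] key, byte[] value) throws IOException;
}

final class NamespaceKeys {
  // Key each inode by (parent inode id, child name), so resolving /a/b/c
  // walks the path with one get() per component.
  static byte[] inodeKey(long parentId, String childName) {
    byte[] name = childName.getBytes(StandardCharsets.UTF_8);
    return ByteBuffer.allocate(8 + name.length)
        .putLong(parentId).put(name).array();
  }
}
{code}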



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-9498) Move code that tracks orphan blocks to BlockManagerSafeMode

2016-01-05 Thread Arpit Agarwal (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-9498?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15083740#comment-15083740
 ] 

Arpit Agarwal commented on HDFS-9498:
-

Hi [~liuml07], is this new call meant to fix a pre-existing bug? The call looks 
redundant, as {{BlockManagerSafeMode#leaveSafeMode}} called the same function 
earlier.

{code}
-  setManualAndResourceLowSafeMode(false, false);
-  blockManager.leaveSafeMode(true);
+  if (blockManager.leaveSafeMode(force)) {
+setManualAndResourceLowSafeMode(false, false);
+startSecretManagerIfNecessary();
+  }
{code}



> Move code that tracks orphan blocks to BlockManagerSafeMode
> ---
>
> Key: HDFS-9498
> URL: https://issues.apache.org/jira/browse/HDFS-9498
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: namenode
>Reporter: Mingliang Liu
>Assignee: Mingliang Liu
> Attachments: HDFS-9498.000.patch, HDFS-9498.001.patch, 
> HDFS-9498.002.patch, HDFS-9498.003.patch
>
>
> [HDFS-4015] counts and reports orphaned blocks  
> {{numberOfBytesInFutureBlocks}} in safe mode. It was implemented in 
> {{BlockManager}}. Per discussion in [HDFS-9129] which introduces the 
> {{BlockManagerSafeMode}}, we can move the code that maintains orphaned blocks 
> to this class.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-9615) Fix variable name typo in DFSConfigKeys#DFS_NAMENODE_MISSING_CHECKPOINT_PERIODS_BEFORE_SHUTDONW_DEFAULT

2016-01-05 Thread Arpit Agarwal (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-9615?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15083709#comment-15083709
 ] 

Arpit Agarwal commented on HDFS-9615:
-

+1 pending Jenkins.

> Fix variable name typo in 
> DFSConfigKeys#DFS_NAMENODE_MISSING_CHECKPOINT_PERIODS_BEFORE_SHUTDONW_DEFAULT
> ---
>
> Key: HDFS-9615
> URL: https://issues.apache.org/jira/browse/HDFS-9615
> Project: Hadoop HDFS
>  Issue Type: Bug
>Affects Versions: 2.8.0
>Reporter: Ray Chiang
>Assignee: Ray Chiang
>Priority: Trivial
> Attachments: HDFS-9615.001.patch
>
>
> Ran across this typo in the variable name:
> DFS_NAMENODE_MISSING_CHECKPOINT_PERIODS_BEFORE_SHUTDONW_DEFAULT
> should clearly be
> DFS_NAMENODE_MISSING_CHECKPOINT_PERIODS_BEFORE_SHUTDOWN_DEFAULT
> i.e. the "N" and the "W" are swapped.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-9330) Reconfigure DN deleting duplicate replica on the fly

2016-01-05 Thread Arpit Agarwal (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-9330?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15083704#comment-15083704
 ] 

Arpit Agarwal commented on HDFS-9330:
-

Hi [~xiaobingo], looks like the patch needs to be rebased. It does not apply to 
trunk any more.

> Reconfigure DN deleting duplicate replica on the fly
> 
>
> Key: HDFS-9330
> URL: https://issues.apache.org/jira/browse/HDFS-9330
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: datanode
>Reporter: Xiaobing Zhou
>Assignee: Xiaobing Zhou
> Attachments: HDFS-9330.001.patch
>
>
> This is to reconfigure
> {code}
> dfs.datanode.duplicate.replica.deletion
> {code}
> without restarting DN.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-9576) HTrace: collect path/offset/length information on read and write operations

2016-01-05 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-9576?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15083681#comment-15083681
 ] 

Hadoop QA commented on HDFS-9576:
-

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 0s 
{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s 
{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:red}-1{color} | {color:red} test4tests {color} | {color:red} 0m 0s 
{color} | {color:red} The patch doesn't appear to include any new or modified 
tests. Please justify why no new tests are needed for this patch. Also please 
list what manual steps were performed to verify this patch. {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 7m 
27s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 26s 
{color} | {color:green} trunk passed with JDK v1.8.0_66 {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 28s 
{color} | {color:green} trunk passed with JDK v1.7.0_91 {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 
13s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 0m 35s 
{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
12s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 1m 
49s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 21s 
{color} | {color:green} trunk passed with JDK v1.8.0_66 {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 23s 
{color} | {color:green} trunk passed with JDK v1.7.0_91 {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 0m 
30s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 25s 
{color} | {color:green} the patch passed with JDK v1.8.0_66 {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 0m 25s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 26s 
{color} | {color:green} the patch passed with JDK v1.7.0_91 {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 0m 26s 
{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} checkstyle {color} | {color:red} 0m 12s 
{color} | {color:red} Patch generated 2 new checkstyle issues in 
hadoop-hdfs-project/hadoop-hdfs-client (total was 136, now 137). {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 0m 31s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
9s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 
0s {color} | {color:green} Patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 1m 
58s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 17s 
{color} | {color:green} the patch passed with JDK v1.8.0_66 {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 21s 
{color} | {color:green} the patch passed with JDK v1.7.0_91 {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 0m 49s 
{color} | {color:green} hadoop-hdfs-client in the patch passed with JDK 
v1.8.0_66. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 0m 54s 
{color} | {color:green} hadoop-hdfs-client in the patch passed with JDK 
v1.7.0_91. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 
18s {color} | {color:green} Patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 21m 15s {color} 
| {color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:0ca8df7 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12780610/HDFS-9576.02.patch |
| JIRA Issue | HDFS-9576 |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  findbugs  checkstyle  |
| uname | Linux 646b5b8cfacd 3.13.0-36-lowlatency #63-Ubuntu SMP PREEMPT Wed 
Sep 3 21:56:12 UTC 2014 x86_64 x86_64 x86_64 GNU/Linux |
| 

[jira] [Commented] (HDFS-9325) Allow the location of hadoop source tree resources to be passed to CMake during a build.

2016-01-05 Thread James Clampffer (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-9325?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15083677#comment-15083677
 ] 

James Clampffer commented on HDFS-9325:
---

Thanks for the clarification.  Everything looks good to me.

+1, I'll commit today

> Allow the location of hadoop source tree resources to be passed to CMake 
> during a build.
> 
>
> Key: HDFS-9325
> URL: https://issues.apache.org/jira/browse/HDFS-9325
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: hdfs-client
>Reporter: James Clampffer
>Assignee: Bob Hansen
> Attachments: HDFS-9325.HDFS-8707.001.patch, 
> HDFS-9325.HDFS-8707.002.patch, HDFS-9325.HDFS-8707.003.patch, 
> HDFS-9325.HDFS-8707.004.patch, HDFS-9325.HDFS-8707.005.patch
>
>
> It would be nice if CMake could take an optional parameter with the location 
> of hdfs.h that typically lives at libhdfs/includes/hdfs/hdfs.h, otherwise it 
> would default to this location.  This would be useful for projects using 
> libhdfs++ that gather headers defining library APIs in a single location.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HDFS-8356) Document missing properties in hdfs-default.xml

2016-01-05 Thread Ray Chiang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-8356?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ray Chiang updated HDFS-8356:
-
Status: Patch Available  (was: Open)

> Document missing properties in hdfs-default.xml
> ---
>
> Key: HDFS-8356
> URL: https://issues.apache.org/jira/browse/HDFS-8356
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: documentation
>Affects Versions: 2.7.0
>Reporter: Ray Chiang
>Assignee: Ray Chiang
>  Labels: supportability, test
> Attachments: HDFS-8356.001.patch, HDFS-8356.002.patch, 
> HDFS-8356.003.patch
>
>
> The following properties are currently not defined in hdfs-default.xml. These 
> properties should either be
> A) documented in hdfs-default.xml OR
> B) listed as an exception (with comments, e.g. for internal use) in the 
> TestHdfsConfigFields unit test



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HDFS-8356) Document missing properties in hdfs-default.xml

2016-01-05 Thread Ray Chiang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-8356?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ray Chiang updated HDFS-8356:
-
Attachment: HDFS-8356.003.patch

Submit for testing

> Document missing properties in hdfs-default.xml
> ---
>
> Key: HDFS-8356
> URL: https://issues.apache.org/jira/browse/HDFS-8356
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: documentation
>Affects Versions: 2.7.0
>Reporter: Ray Chiang
>Assignee: Ray Chiang
>  Labels: supportability, test
> Attachments: HDFS-8356.001.patch, HDFS-8356.002.patch, 
> HDFS-8356.003.patch
>
>
> The following properties are currently not defined in hdfs-default.xml. These 
> properties should either be
> A) documented in hdfs-default.xml OR
> B) listed as an exception (with comments, e.g. for internal use) in the 
> TestHdfsConfigFields unit test



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-9469) DiskBalancer : Add Planner

2016-01-05 Thread Arpit Agarwal (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-9469?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15083666#comment-15083666
 ] 

Arpit Agarwal commented on HDFS-9469:
-

Thanks [~anu]. The v5 patch lgtm. The checkstyle issues can be ignored as we 
follow this convention everywhere in the codebase.

[~szetszwo], do you have any further comments? I will hold off committing for a 
day or two.

> DiskBalancer : Add Planner 
> ---
>
> Key: HDFS-9469
> URL: https://issues.apache.org/jira/browse/HDFS-9469
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: balancer & mover
>Affects Versions: 2.8.0
>Reporter: Anu Engineer
>Assignee: Anu Engineer
> Attachments: HDFS-9469-HDFS-1312.001.patch, 
> HDFS-9469-HDFS-1312.002.patch, HDFS-9469-HDFS-1312.003.patch, 
> HDFS-9469-HDFS-1312.004.patch, HDFS-9469-HDFS-1312.005.patch
>
>
> Disk Balancer reads the cluster data and then creates a plan for the data 
> moves based on the snapshot of the data read from the nodes. This plan is 
> later submitted to data nodes for execution. 



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HDFS-9615) Fix variable name typo in DFSConfigKeys#DFS_NAMENODE_MISSING_CHECKPOINT_PERIODS_BEFORE_SHUTDONW_DEFAULT

2016-01-05 Thread Ray Chiang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-9615?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ray Chiang updated HDFS-9615:
-
Status: Patch Available  (was: Open)

Submit for testing

> Fix variable name typo in 
> DFSConfigKeys#DFS_NAMENODE_MISSING_CHECKPOINT_PERIODS_BEFORE_SHUTDONW_DEFAULT
> ---
>
> Key: HDFS-9615
> URL: https://issues.apache.org/jira/browse/HDFS-9615
> Project: Hadoop HDFS
>  Issue Type: Bug
>Affects Versions: 2.8.0
>Reporter: Ray Chiang
>Assignee: Ray Chiang
>Priority: Trivial
> Attachments: HDFS-9615.001.patch
>
>
> Ran across this typo in the variable name:
> DFS_NAMENODE_MISSING_CHECKPOINT_PERIODS_BEFORE_SHUTDONW_DEFAULT
> should clearly be
> DFS_NAMENODE_MISSING_CHECKPOINT_PERIODS_BEFORE_SHUTDOWN_DEFAULT
> i.e. the "N" and the "W" are swapped.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HDFS-9615) Fix variable name typo in DFSConfigKeys#DFS_NAMENODE_MISSING_CHECKPOINT_PERIODS_BEFORE_SHUTDONW_DEFAULT

2016-01-05 Thread Ray Chiang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-9615?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ray Chiang updated HDFS-9615:
-
Attachment: HDFS-9615.001.patch

> Fix variable name typo in 
> DFSConfigKeys#DFS_NAMENODE_MISSING_CHECKPOINT_PERIODS_BEFORE_SHUTDONW_DEFAULT
> ---
>
> Key: HDFS-9615
> URL: https://issues.apache.org/jira/browse/HDFS-9615
> Project: Hadoop HDFS
>  Issue Type: Bug
>Affects Versions: 2.8.0
>Reporter: Ray Chiang
>Assignee: Ray Chiang
>Priority: Trivial
> Attachments: HDFS-9615.001.patch
>
>
> Ran across this typo in the variable name:
> DFS_NAMENODE_MISSING_CHECKPOINT_PERIODS_BEFORE_SHUTDONW_DEFAULT
> should clearly be
> DFS_NAMENODE_MISSING_CHECKPOINT_PERIODS_BEFORE_SHUTDOWN_DEFAULT
> i.e. the "N" and the "W" are swapped.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Created] (HDFS-9615) Fix variable name typo in DFSConfigKeys#DFS_NAMENODE_MISSING_CHECKPOINT_PERIODS_BEFORE_SHUTDONW_DEFAULT

2016-01-05 Thread Ray Chiang (JIRA)
Ray Chiang created HDFS-9615:


 Summary: Fix variable name typo in 
DFSConfigKeys#DFS_NAMENODE_MISSING_CHECKPOINT_PERIODS_BEFORE_SHUTDONW_DEFAULT
 Key: HDFS-9615
 URL: https://issues.apache.org/jira/browse/HDFS-9615
 Project: Hadoop HDFS
  Issue Type: Bug
Affects Versions: 2.8.0
Reporter: Ray Chiang
Assignee: Ray Chiang
Priority: Trivial


Ran across this typo in the variable name:

DFS_NAMENODE_MISSING_CHECKPOINT_PERIODS_BEFORE_SHUTDONW_DEFAULT

should clearly be

DFS_NAMENODE_MISSING_CHECKPOINT_PERIODS_BEFORE_SHUTDOWN_DEFAULT

i.e. the "N" and the "W" are swapped.




--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HDFS-9576) HTrace: collect path/offset/length information on read and write operations

2016-01-05 Thread Zhe Zhang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-9576?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Zhe Zhang updated HDFS-9576:

Attachment: HDFS-9576.02.patch

Attaching new patch to include returned length in read trace scope.

It's a little tricky to update the write scope. We can either create a new 
scope at the public {{write}} level, or even move the current write scope 
there. I removed the write change in the new patch and will repurpose this JIRA 
to focus on read only. I'll open a separate JIRA for the write scope change.

> HTrace: collect path/offset/length information on read and write operations
> ---
>
> Key: HDFS-9576
> URL: https://issues.apache.org/jira/browse/HDFS-9576
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: hdfs-client, tracing
>Affects Versions: 2.7.1
>Reporter: Zhe Zhang
>Assignee: Zhe Zhang
> Attachments: HDFS-9576.00.patch, HDFS-9576.01.patch, 
> HDFS-9576.02.patch
>
>




--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-8999) Namenode need not wait for {{blockReceived}} for the last block before completing a file.

2016-01-05 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-8999?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15083624#comment-15083624
 ] 

Hadoop QA commented on HDFS-8999:
-

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 0s 
{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s 
{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green} 0m 
0s {color} | {color:green} The patch appears to include 3 new or modified test 
files. {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 7m 
48s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 2m 1s 
{color} | {color:green} trunk passed with JDK v1.8.0_66 {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 1m 36s 
{color} | {color:green} trunk passed with JDK v1.7.0_91 {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 
28s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 1m 25s 
{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
26s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 3m 
41s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 26s 
{color} | {color:green} trunk passed with JDK v1.8.0_66 {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 2m 11s 
{color} | {color:green} trunk passed with JDK v1.7.0_91 {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 1m 
16s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 1m 25s 
{color} | {color:green} the patch passed with JDK v1.8.0_66 {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 1m 25s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 1m 35s 
{color} | {color:green} the patch passed with JDK v1.7.0_91 {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 1m 35s 
{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} checkstyle {color} | {color:red} 0m 28s 
{color} | {color:red} Patch generated 4 new checkstyle issues in 
hadoop-hdfs-project (total was 633, now 632). {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 1m 22s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
21s {color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} whitespace {color} | {color:red} 0m 0s 
{color} | {color:red} The patch has 2 line(s) that end in whitespace. Use git 
apply --whitespace=fix. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 4m 
13s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 18s 
{color} | {color:green} the patch passed with JDK v1.8.0_66 {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 2m 12s 
{color} | {color:green} the patch passed with JDK v1.7.0_91 {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 1m 1s 
{color} | {color:green} hadoop-hdfs-client in the patch passed with JDK 
v1.8.0_66. {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 68m 46s {color} 
| {color:red} hadoop-hdfs in the patch failed with JDK v1.8.0_66. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 0m 56s 
{color} | {color:green} hadoop-hdfs-client in the patch passed with JDK 
v1.7.0_91. {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 66m 36s {color} 
| {color:red} hadoop-hdfs in the patch failed with JDK v1.7.0_91. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 
22s {color} | {color:green} Patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 175m 57s {color} 
| {color:black} {color} |
\\
\\
|| Reason || Tests ||
| JDK v1.8.0_66 Failed junit tests | hadoop.hdfs.TestBlockStoragePolicy |
|   | hadoop.hdfs.security.TestDelegationTokenForProxyUser |
|   | hadoop.hdfs.server.namenode.TestINodeAttributeProvider |
|   | hadoop.hdfs

[jira] [Commented] (HDFS-9576) HTrace: collect path/offset/length information on read and write operations

2016-01-05 Thread Zhe Zhang (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-9576?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15083539#comment-15083539
 ] 

Zhe Zhang commented on HDFS-9576:
-

Good point [~iwasakims]! I should have traced the write length one level up -- 
the {{len}} variable in the public {{write}} method. This would give insight 
into applications' I/O behavior; e.g. if most applications in a cluster send 
large write requests, certain optimizations can be made.

In the next rev I'll also try to trace the return value of read.
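
As a hedged illustration of the idea (not the attached patch; {{tracer}} and 
{{writeImpl}} are hypothetical placeholders), the length can be attached to the 
scope opened at the public {{write}} level as a key-value annotation:

{code:java}
import java.io.IOException;
import org.apache.htrace.core.TraceScope;

// Sketch only: open the trace scope in the public write() and record len.
public void write(byte[] b, int off, int len) throws IOException {
  try (TraceScope scope = tracer.newScope("DFSOutputStream#write")) {
    scope.addKVAnnotation("len", Integer.toString(len));
    writeImpl(b, off, len);  // hypothetical internal write path
  }
}
{code}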

> HTrace: collect path/offset/length information on read and write operations
> ---
>
> Key: HDFS-9576
> URL: https://issues.apache.org/jira/browse/HDFS-9576
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: hdfs-client, tracing
>Affects Versions: 2.7.1
>Reporter: Zhe Zhang
>Assignee: Zhe Zhang
> Attachments: HDFS-9576.00.patch, HDFS-9576.01.patch
>
>




--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-9574) Reduce client failures during datanode restart

2016-01-05 Thread Daryn Sharp (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-9574?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15083463#comment-15083463
 ] 

Daryn Sharp commented on HDFS-9574:
---

Might consider checking if the bp is registered in {{checkAccess}} to avoid 
every caller explicitly checking the bp before calling {{checkAccess}}.

Sleeping for 1s and incrementing a counter until it reaches the number of 
configured seconds is fragile - it assumes the sleep really was for 1s, which may 
not be true if there was a long GC, etc.  I'd suggest using a {{StopWatch}} for 
correctness.
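
A minimal sketch of that suggestion, assuming a hypothetical {{isBpRegistered()}} 
check around the DN-side wait (this is not the patch itself):

{code:java}
import java.io.IOException;
import java.util.concurrent.TimeUnit;
import org.apache.hadoop.util.StopWatch;

// Measure wall-clock elapsed time instead of counting 1s sleeps, so a long
// GC pause cannot make the wait appear shorter than it really was.
void waitForRegistration(long maxWaitSeconds)
    throws IOException, InterruptedException {
  StopWatch sw = new StopWatch().start();
  while (!isBpRegistered()) {  // hypothetical registration check
    if (sw.now(TimeUnit.SECONDS) >= maxWaitSeconds) {
      throw new IOException("Block pool not registered after "
          + maxWaitSeconds + "s");
    }
    Thread.sleep(1000);
  }
}
{code}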

I think something similar needs to be done for the RPC service.  Block tokens 
cannot be authenticated until after registration, when the DN gets the block 
secret.  The dfs client checks {{getReplicaVisibleLength}} for the last block if 
not complete, and the rpc client doesn't appear to have any retry proxy.  This is 
likely to affect users that frequently read while writing or appending to a 
file (ex. logging into hdfs, perhaps hbase?).

Blocking in the RPC layer, unlike the data xceiver threads, is not desirable.  
Once the readers jam due to one unregistered bp, admin calls or calls for other 
block pools will be stalled too.  Ideally the DN secret manager should throw a 
{{RetriableException}} if the bp has no secrets.  The client can handle the 
retries.  It appears this would be backwards compatible.
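
A hedged sketch of that idea ({{hasBlockKeys}} is a hypothetical helper; the 
real secret-manager entry point differs):

{code:java}
import java.io.IOException;
import org.apache.hadoop.ipc.RetriableException;

// Sketch only: fail fast with a retriable error instead of blocking the
// RPC handler thread while the block pool has no block keys yet.
void checkBlockToken(String blockPoolId) throws IOException {
  if (!hasBlockKeys(blockPoolId)) {  // hypothetical readiness check
    throw new RetriableException("Block pool " + blockPoolId
        + " is not registered yet; no block keys available");
  }
  // normal token verification would continue here
}
{code}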

> Reduce client failures during datanode restart
> --
>
> Key: HDFS-9574
> URL: https://issues.apache.org/jira/browse/HDFS-9574
> Project: Hadoop HDFS
>  Issue Type: Bug
>Reporter: Kihwal Lee
>Assignee: Kihwal Lee
> Attachments: HDFS-9574.patch, HDFS-9574.v2.patch
>
>
> Since DataXceiverServer is initialized before BP is fully up, client requests 
> will fail until the datanode registers.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-9047) deprecate libwebhdfs in branch-2; remove from trunk

2016-01-05 Thread Junping Du (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-9047?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15083445#comment-15083445
 ] 

Junping Du commented on HDFS-9047:
--

From the discussion above (and HDFS-8346), I don't think fixing/removing 
libwebhdfs is critical enough to stop any release trains. No release can be 
perfect; the only thing we can do is try to address all critical/blocker issues 
raised in the community. Thus, I don't see any problem with previous votes 
passing without anyone calling this out as a priority issue before.

> deprecate libwebhdfs in branch-2; remove from trunk
> ---
>
> Key: HDFS-9047
> URL: https://issues.apache.org/jira/browse/HDFS-9047
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: webhdfs
>Reporter: Allen Wittenauer
>
> This library is basically a mess:
> * It's not part of the mvn package
> * It's missing functionality and barely maintained
> * It's not in the precommit runs so doesn't get exercised regularly
> * It's not part of the unit tests (at least, that I can see)
> * It isn't documented in any official documentation
> But most importantly:  
> * It fails at its primary mission of being pure C (HDFS-3917 is STILL open)
> Let's cut our losses and just remove it.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-9455) In distcp, Invalid Argument Error thrown in case of filesystem operation failure

2016-01-05 Thread Yongjun Zhang (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-9455?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15083400#comment-15083400
 ] 

Yongjun Zhang commented on HDFS-9455:
-

Hi Guys,

Thanks for reporting the issue and the work here. [~daisuke.kobayashi], per 
Archana's description at
https://issues.apache.org/jira/browse/HDFS-9455?focusedCommentId=15082670&page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#comment-15082670
the command line is "hadoop distcp webhdfs://IP:25003/test/testfile 
webhdfs://IP:25003/myp" in a secured cluster with ssl enabled (both src and tgt 
are the same cluster).
Would you please try it out in your env?

The "Invalid argument" msg is not very clear, when ssl is enabled, if we are 
using webhdfs instead of swebhdfs, possibly we can improve the message to 
indicate that.

Thanks.


> In distcp, Invalid Argument Error thrown in case of filesystem operation 
> failure
> 
>
> Key: HDFS-9455
> URL: https://issues.apache.org/jira/browse/HDFS-9455
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: distcp, security
>Reporter: Archana T
>Assignee: Archana T
>Priority: Minor
>
> When a filesystem operation failure happens during distcp, 
> the wrong exception (Invalid arguments) is thrown along with the distcp command usage.
> {color:red} 
> hadoop distcp webhdfs://IP:25003/test/testfile webhdfs://IP:25003/myp
> Invalid arguments: Unexpected end of file from server
> usage: distcp OPTIONS [source_path...] 
>   OPTIONS
>  -append       Reuse existing data in target files and
>                append new data to them if possible
>  -async        Should distcp execution be blocking
>  -atomic       Commit all changes or none
>  -bandwidth    Specify bandwidth per map in MB
>  -delete       Delete from target, files missing in source
>  -diff         Use snapshot diff report to identify the
>                difference between source and target
>  -f            List of files that need to be copied
>  -filelimit    (Deprecated!) Limit number of files copied
>                to <= n
>  -i            Ignore failures during copy
> .
> {color} 
> Instead, a proper exception has to be thrown.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-9612) DistCp worker threads are not terminated after jobs are done.

2016-01-05 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-9612?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15083384#comment-15083384
 ] 

Hadoop QA commented on HDFS-9612:
-

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 0s 
{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s 
{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green} 0m 
0s {color} | {color:green} The patch appears to include 1 new or modified test 
files. {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 7m 
20s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 15s 
{color} | {color:green} trunk passed with JDK v1.8.0_66 {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 17s 
{color} | {color:green} trunk passed with JDK v1.7.0_91 {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 
9s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 0m 22s 
{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
13s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 0m 
29s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 13s 
{color} | {color:green} trunk passed with JDK v1.8.0_66 {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 15s 
{color} | {color:green} trunk passed with JDK v1.7.0_91 {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 0m 
17s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 12s 
{color} | {color:green} the patch passed with JDK v1.8.0_66 {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 0m 12s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 14s 
{color} | {color:green} the patch passed with JDK v1.7.0_91 {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 0m 14s 
{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} checkstyle {color} | {color:red} 0m 9s 
{color} | {color:red} Patch generated 2 new checkstyle issues in 
hadoop-tools/hadoop-distcp (total was 5, now 6). {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 0m 20s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
10s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 
0s {color} | {color:green} Patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 0m 
32s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 10s 
{color} | {color:green} the patch passed with JDK v1.8.0_66 {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 12s 
{color} | {color:green} the patch passed with JDK v1.7.0_91 {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 6m 43s {color} 
| {color:red} hadoop-distcp in the patch failed with JDK v1.8.0_66. {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 6m 43s {color} 
| {color:red} hadoop-distcp in the patch failed with JDK v1.7.0_91. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 
18s {color} | {color:green} Patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 26m 37s {color} 
| {color:black} {color} |
\\
\\
|| Reason || Tests ||
| JDK v1.8.0_66 Failed junit tests | hadoop.tools.util.TestProducerConsumer |
| JDK v1.7.0_91 Failed junit tests | hadoop.tools.util.TestProducerConsumer |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:0ca8df7 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12780574/HDFS-9612.003.patch |
| JIRA Issue | HDFS-9612 |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  findbugs  checkstyle  |
| uname | Linux 9b82bb1813fe 3.13.0-36-lowlatency #63-Ubuntu SMP PREEMPT Wed 
Sep 3 21:56:12 UTC 2014 x86_64 x86_64 x86_64 GN

[jira] [Updated] (HDFS-9611) DiskBalancer : Replace htrace json imports with jackson

2016-01-05 Thread Arpit Agarwal (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-9611?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Arpit Agarwal updated HDFS-9611:

  Resolution: Fixed
Hadoop Flags: Reviewed
Target Version/s:   (was: HDFS-1312)
  Status: Resolved  (was: Patch Available)

+1. I committed this to branch HDFS-1312. Thanks for the contribution [~anu].

> DiskBalancer : Replace htrace json imports with jackson
> ---
>
> Key: HDFS-9611
> URL: https://issues.apache.org/jira/browse/HDFS-9611
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: balancer & mover
>Affects Versions: HDFS-1312
>Reporter: Anu Engineer
>Assignee: Anu Engineer
>Priority: Minor
> Fix For: HDFS-1312
>
> Attachments: HDFS-9611-HDFS-1312.001.patch
>
>
> Replace imports with correct json imports.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-9047) deprecate libwebhdfs in branch-2; remove from trunk

2016-01-05 Thread Allen Wittenauer (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-9047?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15083336#comment-15083336
 ] 

Allen Wittenauer commented on HDFS-9047:


It's probably worthwhile pointing out that not only has libwebhdfs been broken 
in various branches for more than a year, but the *votes passed* for those 
branches that had releases.  So not only do users not care and/or use that 
functionality, neither do PMC members.

> deprecate libwebhdfs in branch-2; remove from trunk
> ---
>
> Key: HDFS-9047
> URL: https://issues.apache.org/jira/browse/HDFS-9047
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: webhdfs
>Reporter: Allen Wittenauer
>
> This library is basically a mess:
> * It's not part of the mvn package
> * It's missing functionality and barely maintained
> * It's not in the precommit runs so doesn't get exercised regularly
> * It's not part of the unit tests (at least, that I can see)
> * It isn't documented in any official documentation
> But most importantly:  
> * It fails at its primary mission of being pure C (HDFS-3917 is STILL open)
> Let's cut our losses and just remove it.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HDFS-9612) DistCp worker threads are not terminated after jobs are done.

2016-01-05 Thread Wei-Chiu Chuang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-9612?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Wei-Chiu Chuang updated HDFS-9612:
--
Status: Patch Available  (was: Open)

> DistCp worker threads are not terminated after jobs are done.
> -
>
> Key: HDFS-9612
> URL: https://issues.apache.org/jira/browse/HDFS-9612
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: distcp
>Affects Versions: 2.8.0
>Reporter: Wei-Chiu Chuang
>Assignee: Wei-Chiu Chuang
> Attachments: HDFS-9612.001.patch, HDFS-9612.002.patch, 
> HDFS-9612.003.patch
>
>
> In HADOOP-11827, a producer-consumer style thread pool was introduced to 
> parallelize the task of listing files/directories.
> We have a use case where a distcp job is run during the commit phase of a MR2 
> job. However, it was found distcp does not terminate ProducerConsumer thread 
> pools properly. Because threads are not terminated, those MR2 jobs never 
> finish.
> In a more typical use case where distcp is run as a standalone job, those 
> threads are terminated forcefully when the java process is terminated. So 
> these leaked threads did not become a problem.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HDFS-9612) DistCp worker threads are not terminated after jobs are done.

2016-01-05 Thread Wei-Chiu Chuang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-9612?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Wei-Chiu Chuang updated HDFS-9612:
--
Attachment: HDFS-9612.003.patch

Rev03:
# Added another (more complex) test case; also, make sure all ProducerConsumer 
tests call shutdown() to terminate threads.
# Simplified ProducerConsumer$Worker.run() logic. In SimpleCopyListing, 
ProducerConsumer.shutdown() is called after all work is consumed, so there is 
no need to consider the case where workers are interrupted in the middle of 
getting, putting, or processing a work item. Therefore, all workers are supposed 
to wait at 
{code:java}
work = inputQueue.take();
{code}
and if a worker gets an interrupt there, it simply returns.
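
A minimal sketch of the simplified worker loop described above (field and type 
names abbreviated from {{ProducerConsumer}}; not the exact patch):

{code:java}
// Workers block on take(); shutdown() interrupts them while they are idle,
// so the InterruptedException is the clean exit path.
public void run() {
  while (true) {
    WorkRequest<T> work;
    try {
      work = inputQueue.take();
    } catch (InterruptedException ie) {
      return;  // interrupted while idle: all work was already consumed
    }
    outputQueue.add(processor.processItem(work));
  }
}
{code}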

> DistCp worker threads are not terminated after jobs are done.
> -
>
> Key: HDFS-9612
> URL: https://issues.apache.org/jira/browse/HDFS-9612
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: distcp
>Affects Versions: 2.8.0
>Reporter: Wei-Chiu Chuang
>Assignee: Wei-Chiu Chuang
> Attachments: HDFS-9612.001.patch, HDFS-9612.002.patch, 
> HDFS-9612.003.patch
>
>
> In HADOOP-11827, a producer-consumer style thread pool was introduced to 
> parallelize the task of listing files/directories.
> We have a use case where a distcp job is run during the commit phase of a MR2 
> job. However, it was found distcp does not terminate ProducerConsumer thread 
> pools properly. Because threads are not terminated, those MR2 jobs never 
> finish.
> In a more typical use case where distcp is run as a standalone job, those 
> threads are terminated forcefully when the java process is terminated. So 
> these leaked threads did not become a problem.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HDFS-8999) Namenode need not wait for {{blockReceived}} for the last block before completing a file.

2016-01-05 Thread Tsz Wo Nicholas Sze (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-8999?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Tsz Wo Nicholas Sze updated HDFS-8999:
--
Attachment: h8999_20160106.patch

h8999_20160106.patch:
- When closing a file with the last block COMMITTED, make sure 
getNumExpectedLocations() > 1.
- When appending to a file with the last block COMMITTED, throw 
LastBlockNotYetCompleteException (a new exception).  Client will retry for a 
few seconds.

> Namenode need not wait for {{blockReceived}} for the last block before 
> completing a file.
> -
>
> Key: HDFS-8999
> URL: https://issues.apache.org/jira/browse/HDFS-8999
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: namenode
>Reporter: Jitendra Nath Pandey
>Assignee: Tsz Wo Nicholas Sze
> Attachments: h8999_20151228.patch, h8999_20160106.patch
>
>
> This comes out of a discussion in HDFS-8763. Pasting [~jingzhao]'s comment 
> from the jira:
> {quote}
> ...whether we need to let NameNode wait for all the block_received msgs to 
> announce the replica is safe. Looking into the code, now we have
># NameNode knows the DataNodes involved when initially setting up the 
> writing pipeline
># If any DataNode fails during the writing, client bumps the GS and 
> finally reports all the DataNodes included in the new pipeline to NameNode 
> through the updatePipeline RPC.
># When the client received the ack for the last packet of the block (and 
> before the client tries to close the file on NameNode), the replica has been 
> finalized in all the DataNodes.
> Then in this case, when NameNode receives the close request from the client, 
> the NameNode already knows the latest replicas for the block. Currently the 
> checkReplication call only counts in all the replicas that NN has already 
> received the block_received msg, but based on the above #2 and #3, it may be 
> safe to also count in all the replicas in the 
> BlockUnderConstructionFeature#replicas?
> {quote}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-9047) deprecate libwebhdfs in branch-2; remove from trunk

2016-01-05 Thread Chris Nauroth (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-9047?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15083313#comment-15083313
 ] 

Chris Nauroth commented on HDFS-9047:
-

I remain +1 for a full removal of libwebhdfs from trunk, branch-2, branch-2.7 
and branch-2.6.  However, there was not consensus on this plan last time we 
discussed it.  If someone still wants to keep it, and is making a commitment to 
maintenance of it, then our fallback plan would be to backport my HDFS-8346 
patch to branch-2.7 and branch-2.6.

> deprecate libwebhdfs in branch-2; remove from trunk
> ---
>
> Key: HDFS-9047
> URL: https://issues.apache.org/jira/browse/HDFS-9047
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: webhdfs
>Reporter: Allen Wittenauer
>
> This library is basically a mess:
> * It's not part of the mvn package
> * It's missing functionality and barely maintained
> * It's not in the precommit runs so doesn't get exercised regularly
> * It's not part of the unit tests (at least, that I can see)
> * It isn't documented in any official documentation
> But most importantly:  
> * It fails at its primary mission of being pure C (HDFS-3917 is STILL open)
> Let's cut our losses and just remove it.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-8346) libwebhdfs build fails during link due to unresolved external symbols.

2016-01-05 Thread Kihwal Lee (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-8346?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15083183#comment-15083183
 ] 

Kihwal Lee commented on HDFS-8346:
--

I felt it needs more discussion, so left a comment in HDFS-9047. 

> libwebhdfs build fails during link due to unresolved external symbols.
> --
>
> Key: HDFS-8346
> URL: https://issues.apache.org/jira/browse/HDFS-8346
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: native
>Affects Versions: 2.6.0
>Reporter: Chris Nauroth
>Assignee: Chris Nauroth
> Fix For: 2.8.0
>
> Attachments: HDFS-8346.001.patch
>
>
> The libwebhdfs build is currently broken due to various unresolved external 
> symbols during link.  Multiple patches have introduced a few different forms 
> of this breakage.  See comments for full details.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-8346) libwebhdfs build fails during link due to unresolved external symbols.

2016-01-05 Thread Junping Du (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-8346?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15083199#comment-15083199
 ] 

Junping Du commented on HDFS-8346:
--

Ok. Thanks for pushing it, [~kihwal]!

> libwebhdfs build fails during link due to unresolved external symbols.
> --
>
> Key: HDFS-8346
> URL: https://issues.apache.org/jira/browse/HDFS-8346
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: native
>Affects Versions: 2.6.0
>Reporter: Chris Nauroth
>Assignee: Chris Nauroth
> Fix For: 2.8.0
>
> Attachments: HDFS-8346.001.patch
>
>
> The libwebhdfs build is currently broken due to various unresolved external 
> symbols during link.  Multiple patches have introduced a few different forms 
> of this breakage.  See comments for full details.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-8578) On upgrade, Datanode should process all storage/data dirs in parallel

2016-01-05 Thread Tsz Wo Nicholas Sze (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-8578?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15083185#comment-15083185
 ] 

Tsz Wo Nicholas Sze commented on HDFS-8578:
---

[~vinayrpet] and [~ctrezzo], thanks for the comments.  Will update the patch.

> On upgrade, Datanode should process all storage/data dirs in parallel
> -
>
> Key: HDFS-8578
> URL: https://issues.apache.org/jira/browse/HDFS-8578
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: datanode
>Reporter: Raju Bairishetti
>Assignee: Vinayakumar B
>Priority: Critical
> Attachments: HDFS-8578-01.patch, HDFS-8578-02.patch, 
> HDFS-8578-03.patch, HDFS-8578-04.patch, HDFS-8578-05.patch, 
> HDFS-8578-06.patch, HDFS-8578-07.patch, HDFS-8578-08.patch, 
> HDFS-8578-09.patch, HDFS-8578-10.patch, HDFS-8578-11.patch, 
> HDFS-8578-12.patch, HDFS-8578-13.patch, HDFS-8578-14.patch, 
> HDFS-8578-15.patch, HDFS-8578-16.patch, HDFS-8578-17.patch, 
> HDFS-8578-branch-2.6.0.patch, HDFS-8578-branch-2.7-001.patch, 
> HDFS-8578-branch-2.7-002.patch, HDFS-8578-branch-2.7-003.patch, 
> h8578_20151210.patch, h8578_20151211.patch, h8578_20151211b.patch, 
> h8578_20151212.patch, h8578_20151213.patch
>
>
> Right now, during upgrades the datanode processes all the storage dirs 
> sequentially. Assuming it takes ~20 mins to process a single storage dir, a 
> datanode which has ~10 disks will take around 3 hours to come up.
> *BlockPoolSliceStorage.java*
> {code}
>for (int idx = 0; idx < getNumStorageDirs(); idx++) {
>   doTransition(datanode, getStorageDir(idx), nsInfo, startOpt);
>   assert getCTime() == nsInfo.getCTime() 
>   : "Data-node and name-node CTimes must be the same.";
> }
> {code}
> It would save lots of time during major upgrades if the datanode processed all 
> storage dirs/disks in parallel.
> Can we make the datanode process all storage dirs in parallel?
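
For illustration, a hedged sketch of one way the loop quoted above could be 
parallelized with a standard thread pool (this is not the attached patch; error 
handling is simplified, and it uses java.util.concurrent plus the 
{{doTransition}}/{{getStorageDir}} methods shown in the description):

{code:java}
// Submit one doTransition() task per storage dir and wait for all of them.
ExecutorService pool = Executors.newFixedThreadPool(getNumStorageDirs());
List<Future<Object>> futures = new ArrayList<>();
for (int idx = 0; idx < getNumStorageDirs(); idx++) {
  final StorageDirectory dir = getStorageDir(idx);
  futures.add(pool.submit(() -> {
    doTransition(datanode, dir, nsInfo, startOpt);
    return null;
  }));
}
for (Future<Object> f : futures) {
  f.get();  // rethrows the first failure as an ExecutionException
}
pool.shutdown();
{code}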



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-8999) Namenode need not wait for {{blockReceived}} for the last block before completing a file.

2016-01-05 Thread Tsz Wo Nicholas Sze (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-8999?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15083176#comment-15083176
 ] 

Tsz Wo Nicholas Sze commented on HDFS-8999:
---

> Is it strange? If we gonna do this, does it mean addBlock(..) can apply the 
> same change?

[~walter.k.su], your example is interesting.  As you mentioned, addBlock(..) 
waits for the second-last block.  close() still waits for the second-last 
block.  Two methods are the same in this sense.

Indeed, we may change addBlock(..) to wait for the third-last block.  However, 
we don't see a need for the moment.

> If block size is small or client writes lots of small files, we have lots of 
> committed blocks. ...

Within a short period of time, it is correct that we have a lot of committed 
blocks.  This is the problem we try to solve here -- a datanode sends an 
accumulated block receipt instead of a block receipt for each block within a 
short period of time, in order to reduce the number of RPCs to the NN.

> ... And, what's the meaning of "minRepl"? Why we need "committed" and 
> "completed"? ...

Historically, the notion of minRepl existed before we introduced the notions of 
COMMITTED and COMPLETE blocks for append.  These two states are still useful 
for append after the change.

> Namenode need not wait for {{blockReceived}} for the last block before 
> completing a file.
> -
>
> Key: HDFS-8999
> URL: https://issues.apache.org/jira/browse/HDFS-8999
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: namenode
>Reporter: Jitendra Nath Pandey
>Assignee: Tsz Wo Nicholas Sze
> Attachments: h8999_20151228.patch
>
>
> This comes out of a discussion in HDFS-8763. Pasting [~jingzhao]'s comment 
> from the jira:
> {quote}
> ...whether we need to let NameNode wait for all the block_received msgs to 
> announce the replica is safe. Looking into the code, now we have
># NameNode knows the DataNodes involved when initially setting up the 
> writing pipeline
># If any DataNode fails during the writing, client bumps the GS and 
> finally reports all the DataNodes included in the new pipeline to NameNode 
> through the updatePipeline RPC.
># When the client received the ack for the last packet of the block (and 
> before the client tries to close the file on NameNode), the replica has been 
> finalized in all the DataNodes.
> Then in this case, when NameNode receives the close request from the client, 
> the NameNode already knows the latest replicas for the block. Currently the 
> checkReplication call only counts in all the replicas that NN has already 
> received the block_received msg, but based on the above #2 and #3, it may be 
> safe to also count in all the replicas in the 
> BlockUnderConstructionFeature#replicas?
> {quote}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-9047) deprecate libwebhdfs in branch-2; remove from trunk

2016-01-05 Thread Kihwal Lee (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-9047?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15083175#comment-15083175
 ] 

Kihwal Lee commented on HDFS-9047:
--

libwebhdfs build has been broken in 2.6 and 2.7 for a long time (a year?).  The 
branch-specific precommit fails because of this. If we remove it, we will be 
removing code that has been *unbuildable* in many past releases.  Apparently no 
2.6/2.7 user cares about it. This is different from the typical deprecation 
scenario, where functioning code is involved.

Long term design and vision aside, what do we do to make the branch precommit 
work for 2.6 and 2.7? Remove it or fix it?

> deprecate libwebhdfs in branch-2; remove from trunk
> ---
>
> Key: HDFS-9047
> URL: https://issues.apache.org/jira/browse/HDFS-9047
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: webhdfs
>Reporter: Allen Wittenauer
>
> This library is basically a mess:
> * It's not part of the mvn package
> * It's missing functionality and barely maintained
> * It's not in the precommit runs so doesn't get exercised regularly
> * It's not part of the unit tests (at least, that I can see)
> * It isn't documented in any official documentation
> But most importantly:  
> * It fails at its primary mission of being pure C (HDFS-3917 is STILL open)
> Let's cut our losses and just remove it.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-8346) libwebhdfs build fails during link due to unresolved external symbols.

2016-01-05 Thread Kihwal Lee (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-8346?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15083155#comment-15083155
 ] 

Kihwal Lee commented on HDFS-8346:
--

The immediate goal will be to make the precommit work. We can go either way.  
We can fix it, but the fact that nobody complained about the breakage in 2.6 
and 2.7 so far probably means no one is using it. If we remove it, it is 
different from deprecating functioning code. Rather, we will be removing code 
that has been unbuildable for a long time. 

> libwebhdfs build fails during link due to unresolved external symbols.
> --
>
> Key: HDFS-8346
> URL: https://issues.apache.org/jira/browse/HDFS-8346
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: native
>Affects Versions: 2.6.0
>Reporter: Chris Nauroth
>Assignee: Chris Nauroth
> Fix For: 2.8.0
>
> Attachments: HDFS-8346.001.patch
>
>
> The libwebhdfs build is currently broken due to various unresolved external 
> symbols during link.  Multiple patches have introduced a few different forms 
> of this breakage.  See comments for full details.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-9607) Advance Hadoop Architecture (AHA) - HDFS

2016-01-05 Thread Steve Loughran (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-9607?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15083066#comment-15083066
 ] 

Steve Loughran commented on HDFS-9607:
--

> I believe currently HDFS is not POSIX compliant. 

neither was NFSv1, due to its relaxed consistency semantics when local caching was 
enabled; that doesn't mean there's a need to abandon the API calls. In particular, 
if the team were ever to implement full seek-past-end-of-file + write, that 
write() call would still be the one to use. Having a new method purely for some 
operations would just become a maintenance and test problem. 

so: +1 to not trying to implement POSIX; -1 for proposing to implement a subset 
of it with new operations. Just use the existing API and have it fail when 
appropriate, because that failure point can be changed over time.

Note also that new additions to filesystem APIs should go into 
AbstractFileSystem and its delegation-based APIs; the question of when/how to 
backport stuff to class {{FileSystem}} subclasses is always a point of 
contention.

> Advance Hadoop Architecture (AHA) - HDFS
> 
>
> Key: HDFS-9607
> URL: https://issues.apache.org/jira/browse/HDFS-9607
> Project: Hadoop HDFS
>  Issue Type: New Feature
>Reporter: Dinesh S. Atreya
>
> Link to Umbrella JIRA
> https://issues.apache.org/jira/browse/HADOOP-12620 
> Provide capability to carry out in-place writes/updates. Only writes in-place 
> are supported where the existing length does not change.
> For example, "Hello World" can be replaced by "Hello HDFS!"
> See 
> https://issues.apache.org/jira/browse/HADOOP-12620?focusedCommentId=15046300&page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#comment-15046300
>  for more details.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-9614) If the path contains '\r' character, can not be deleted from the command line

2016-01-05 Thread Daniel Templeton (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-9614?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15083056#comment-15083056
 ] 

Daniel Templeton commented on HDFS-9614:


This was actually one of the puzzlers in my Hadoop puzzlers talk.  The wildcard 
will not bind to special characters.

The way to delete the file is:

{{python -c 'print "/user/hadoop/tangshangwen\r"' | xargs -n 1 hdfs dfs -rm -r}}
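
Since the Java API does not go through shell glob expansion, the path can also 
be deleted directly from a small client program; a minimal sketch, assuming the 
usual HDFS classpath and configuration on the client:

{code}
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

public class DeleteCarriageReturnPath {
  public static void main(String[] args) throws Exception {
    FileSystem fs = FileSystem.get(new Configuration());
    // The '\r' is a literal character in the Java string, so no shell
    // quoting or wildcard matching is involved.
    fs.delete(new Path("/user/hadoop/tangshangwen\r"), true); // recursive
    fs.close();
  }
}
{code}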

> If the path contains '\r' character, can not be deleted from the command line
> -
>
> Key: HDFS-9614
> URL: https://issues.apache.org/jira/browse/HDFS-9614
> Project: Hadoop HDFS
>  Issue Type: Bug
>Affects Versions: 2.2.0, 2.7.1
>Reporter: tangshangwen
>Assignee: tangshangwen
>
> In our cluster, I found that some users created directories whose names 
> contain the '\r' character, and as a result they cannot be deleted from the 
> command line. For example:
> {code:title=Test.java|borderStyle=solid}
> try {
>     FileSystem fs = FileSystem.get(new Configuration());
>     fs.mkdirs(new Path("/user/hadoop/tangshangwen\r"));
>     IOUtils.closeQuietly(fs);
> } catch (IOException e) {
>     e.printStackTrace();
> }
> {code}
> Then we try to delete it:
> {noformat}
> $ hdfs dfs -ls /user/hadoop/
> Found 4 items
> drwx--   - hadoop supergroup  0 2016-01-05 11:49 
> /user/hadoop/.Trash
> drwx--   - hadoop supergroup  0 2016-01-05 12:04 
> /user/hadoop/.staging
> drwxr-xr-x   - hadoop supergroup  0 2016-01-05 12:42 
> /user/hadoop/DistributedShell
> drwxr-xr-x   - hadoop supergroup  0 2016-01-05 15:46 
> /user/hadoop/tangshangwen
> $ hdfs dfs -rm -R /user/hadoop/tangshang*
> rm: `/user/hadoop/tangshang*': No such file or directory
> $ hdfs dfs -ls /user/hadoop/tangshangwen
> ls: `/user/hadoop/tangshangwen': No such file or directory
> {noformat}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-9603) Erasure Coding: Use ErasureCoder to encode/decode a block group

2016-01-05 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-9603?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15082996#comment-15082996
 ] 

Hadoop QA commented on HDFS-9603:
-

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 0s 
{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s 
{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green} 0m 
0s {color} | {color:green} The patch appears to include 1 new or modified test 
files. {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 7m 
31s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 7m 38s 
{color} | {color:green} trunk passed with JDK v1.8.0_66 {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 8m 33s 
{color} | {color:green} trunk passed with JDK v1.7.0_91 {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 
58s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 1m 36s 
{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
27s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 3m 
36s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 13s 
{color} | {color:green} trunk passed with JDK v1.8.0_66 {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 27s 
{color} | {color:green} trunk passed with JDK v1.7.0_91 {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 2m 
12s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 7m 34s 
{color} | {color:green} the patch passed with JDK v1.8.0_66 {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 7m 34s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 8m 32s 
{color} | {color:green} the patch passed with JDK v1.7.0_91 {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 8m 32s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 
56s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 1m 34s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
27s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 
0s {color} | {color:green} Patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 3m 
52s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 11s 
{color} | {color:green} the patch passed with JDK v1.8.0_66 {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 28s 
{color} | {color:green} the patch passed with JDK v1.7.0_91 {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 6m 31s {color} 
| {color:red} hadoop-common in the patch failed with JDK v1.8.0_66. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 0m 50s 
{color} | {color:green} hadoop-hdfs-client in the patch passed with JDK 
v1.8.0_66. {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 6m 51s {color} 
| {color:red} hadoop-common in the patch failed with JDK v1.7.0_91. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 0m 55s 
{color} | {color:green} hadoop-hdfs-client in the patch passed with JDK 
v1.7.0_91. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 
22s {color} | {color:green} Patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 77m 43s {color} 
| {color:black} {color} |
\\
\\
|| Reason || Tests ||
| JDK v1.8.0_66 Failed junit tests | hadoop.fs.shell.TestCopyPreserveFlag |
|   | hadoop.security.TestShellBasedIdMapping |
| JDK v1.7.0_91 Failed junit tests | hadoop.fs.TestFsShellReturnCode |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:0ca8df7 |
| JIRA Patch URL | 
https://issues.apache.or

[jira] [Commented] (HDFS-9614) If the path contains '\r' character, can not be deleted from the command line

2016-01-05 Thread tangshangwen (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-9614?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15082946#comment-15082946
 ] 

tangshangwen commented on HDFS-9614:


Here is my test:
{noformat}
$hdfs dfs -ls /user/hadoop/
Found 6 items
drwx--   - hadoop supergroup  0 2016-01-05 11:49 /user/hadoop/.Trash
drwx--   - hadoop supergroup  0 2016-01-05 12:04 
/user/hadoop/.staging
drwxr-xr-x   - hadoop supergroup  0 2016-01-05 12:42 
/user/hadoop/DistributedShell
drwxr-xr-x   - hadoop supergroup  0 2016-01-05 15:46 
/user/hadoop/tangshangwen
drwxr-xr-x   - hadoop supergroup  0 2016-01-05 20:07 /user/hadoop/test1
drwxr-xr-x   - hadoop supergroup  0 2016-01-05 20:07 /user/hadoop/test2
$ hdfs dfs -rm -R /user/hadoop/tes*
16/01/05 20:08:10 INFO fs.TrashPolicyDefault: Namenode trash configuration: 
Deletion interval = 1440 minutes, Emptier interval = 0 minutes.
Moved: 'hdfs://auxo1/user/hadoop/test1' to trash at: 
hdfs://ns1/user/hadoop/.Trash/Current
16/01/05 20:08:10 INFO fs.TrashPolicyDefault: Namenode trash configuration: 
Deletion interval = 1440 minutes, Emptier interval = 0 minutes.
Moved: 'hdfs://auxo1/user/hadoop/test2' to trash at: 
hdfs://ns1/user/hadoop/.Trash/Current
$ hdfs dfs -rm -R /user/hadoop/tang*
rm: `/user/hadoop/tang*': No such file or directory
{noformat}

> If the path contains '\r' character, can not be deleted from the command line
> -
>
> Key: HDFS-9614
> URL: https://issues.apache.org/jira/browse/HDFS-9614
> Project: Hadoop HDFS
>  Issue Type: Bug
>Affects Versions: 2.2.0, 2.7.1
>Reporter: tangshangwen
>Assignee: tangshangwen
>
> In our cluster, I found that some users created directories whose names 
> contain the '\r' character, and as a result they cannot be deleted from the 
> command line. For example:
> {code:title=Test.java|borderStyle=solid}
> try {
>     FileSystem fs = FileSystem.get(new Configuration());
>     fs.mkdirs(new Path("/user/hadoop/tangshangwen\r"));
>     IOUtils.closeQuietly(fs);
> } catch (IOException e) {
>     e.printStackTrace();
> }
> {code}
> Then we try to delete it:
> {noformat}
> $ hdfs dfs -ls /user/hadoop/
> Found 4 items
> drwx--   - hadoop supergroup  0 2016-01-05 11:49 
> /user/hadoop/.Trash
> drwx--   - hadoop supergroup  0 2016-01-05 12:04 
> /user/hadoop/.staging
> drwxr-xr-x   - hadoop supergroup  0 2016-01-05 12:42 
> /user/hadoop/DistributedShell
> drwxr-xr-x   - hadoop supergroup  0 2016-01-05 15:46 
> /user/hadoop/tangshangwen
> $ hdfs dfs -rm -R /user/hadoop/tangshang*
> rm: `/user/hadoop/tangshang*': No such file or directory
> $ hdfs dfs -ls /user/hadoop/tangshangwen
> ls: `/user/hadoop/tangshangwen': No such file or directory
> {noformat}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-9522) Cleanup o.a.h.hdfs.protocol.SnapshotDiffReport

2016-01-05 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-9522?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15082945#comment-15082945
 ] 

Hadoop QA commented on HDFS-9522:
-

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 1s 
{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s 
{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green} 0m 
1s {color} | {color:green} The patch appears to include 2 new or modified test 
files. {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 7m 
44s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 7m 58s 
{color} | {color:green} trunk passed with JDK v1.8.0_66 {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 8m 47s 
{color} | {color:green} trunk passed with JDK v1.7.0_91 {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 
59s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 1m 49s 
{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 2m 
13s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 4m 
15s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 38s 
{color} | {color:green} trunk passed with JDK v1.8.0_66 {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 2m 32s 
{color} | {color:green} trunk passed with JDK v1.7.0_91 {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 1m 
33s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 7m 52s 
{color} | {color:green} the patch passed with JDK v1.8.0_66 {color} |
| {color:green}+1{color} | {color:green} cc {color} | {color:green} 7m 52s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 7m 52s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 8m 41s 
{color} | {color:green} the patch passed with JDK v1.7.0_91 {color} |
| {color:green}+1{color} | {color:green} cc {color} | {color:green} 8m 41s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 8m 41s 
{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} checkstyle {color} | {color:red} 1m 0s 
{color} | {color:red} Patch generated 38 new checkstyle issues in root (total 
was 89, now 123). {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 1m 47s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
39s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 
0s {color} | {color:green} Patch has no whitespace issues. {color} |
| {color:red}-1{color} | {color:red} findbugs {color} | {color:red} 2m 1s 
{color} | {color:red} hadoop-hdfs-project/hadoop-hdfs-client introduced 2 new 
FindBugs issues. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 36s 
{color} | {color:green} the patch passed with JDK v1.8.0_66 {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 2m 24s 
{color} | {color:green} the patch passed with JDK v1.7.0_91 {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 0m 49s 
{color} | {color:green} hadoop-hdfs-client in the patch passed with JDK 
v1.8.0_66. {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 51m 56s {color} 
| {color:red} hadoop-hdfs in the patch failed with JDK v1.8.0_66. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 6m 50s 
{color} | {color:green} hadoop-distcp in the patch passed with JDK v1.8.0_66. 
{color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 0m 58s 
{color} | {color:green} hadoop-hdfs-client in the patch passed with JDK 
v1.7.0_91. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 50m 25s 
{color} | {color:green} hadoop-hdfs in the patch passed with JDK v1.7.0_91. 
{color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 6m 16s 
{color} | {col

[jira] [Commented] (HDFS-9614) If the path contains '\r' character, can not be deleted from the command line

2016-01-05 Thread Steve Loughran (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-9614?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15082895#comment-15082895
 ] 

Steve Loughran commented on HDFS-9614:
--

Have you tried escaping the wildcard?
{code}
hdfs dfs -rm -R /user/hadoop/tangshang\*
{code}

> If the path contains '\r' character, can not be deleted from the command line
> -
>
> Key: HDFS-9614
> URL: https://issues.apache.org/jira/browse/HDFS-9614
> Project: Hadoop HDFS
>  Issue Type: Bug
>Affects Versions: 2.2.0, 2.7.1
>Reporter: tangshangwen
>Assignee: tangshangwen
>
> In our cluster, I found that some users created directories whose names 
> contain the '\r' character, and as a result they cannot be deleted from the 
> command line. For example:
> {code:title=Test.java|borderStyle=solid}
> try {
>     FileSystem fs = FileSystem.get(new Configuration());
>     fs.mkdirs(new Path("/user/hadoop/tangshangwen\r"));
>     IOUtils.closeQuietly(fs);
> } catch (IOException e) {
>     e.printStackTrace();
> }
> {code}
> Then we try to delete it:
> {noformat}
> $ hdfs dfs -ls /user/hadoop/
> Found 4 items
> drwx--   - hadoop supergroup  0 2016-01-05 11:49 
> /user/hadoop/.Trash
> drwx--   - hadoop supergroup  0 2016-01-05 12:04 
> /user/hadoop/.staging
> drwxr-xr-x   - hadoop supergroup  0 2016-01-05 12:42 
> /user/hadoop/DistributedShell
> drwxr-xr-x   - hadoop supergroup  0 2016-01-05 15:46 
> /user/hadoop/tangshangwen
> $ hdfs dfs -rm -R /user/hadoop/tangshang*
> rm: `/user/hadoop/tangshang*': No such file or directory
> $ hdfs dfs -ls /user/hadoop/tangshangwen
> ls: `/user/hadoop/tangshangwen': No such file or directory
> {noformat}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HDFS-9603) Erasure Coding: Use ErasureCoder to encode/decode a block group

2016-01-05 Thread Rui Li (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-9603?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Rui Li updated HDFS-9603:
-
Attachment: HDFS-9603.3.patch

Fix checkstyle.

> Erasure Coding: Use ErasureCoder to encode/decode a block group
> ---
>
> Key: HDFS-9603
> URL: https://issues.apache.org/jira/browse/HDFS-9603
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: Rui Li
>Assignee: Rui Li
> Attachments: HDFS-9603.1.patch, HDFS-9603.2.patch, HDFS-9603.3.patch
>
>
> According to design, {{ErasureCoder}} is responsible to encode/decode a block 
> group. Currently however, we directly use {{RawErasureCoder}} to do the work, 
> e.g. in {{DFSStripedOutputStream}}. This task attempts to encapsulate 
> {{RawErasureCoder}} to comply with the design.
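
For illustration, a minimal sketch of the intended encapsulation, with 
hypothetical class and method names (the actual {{ErasureCoder}} API in the 
patch may differ), and assuming the raw encoder exposes an 
encode(ByteBuffer[], ByteBuffer[]) overload:

{code}
import java.io.IOException;
import java.nio.ByteBuffer;
import org.apache.hadoop.io.erasurecode.rawcoder.RawErasureEncoder;

// Hypothetical sketch only; not the API proposed in the attached patches.
public class BlockGroupEncoder {
  private final RawErasureEncoder rawEncoder;

  public BlockGroupEncoder(RawErasureEncoder rawEncoder) {
    this.rawEncoder = rawEncoder;
  }

  /** Encode one stripe of a block group: data cells in, parity cells out. */
  public void encodeStripe(ByteBuffer[] dataCells, ByteBuffer[] parityCells)
      throws IOException {
    // Callers deal only in block groups; the raw coder stays encapsulated
    // behind the block-group-level abstraction.
    rawEncoder.encode(dataCells, parityCells);
  }
}
{code}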



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-9607) Advance Hadoop Architecture (AHA) - HDFS

2016-01-05 Thread Dinesh S. Atreya (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-9607?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15082865#comment-15082865
 ] 

Dinesh S. Atreya commented on HDFS-9607:


A configuration parameter {{dfs.support.writeInPlace}}, which is false by 
default, may need to be introduced as well.
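
If introduced, it would presumably be read like any other boolean knob; a 
minimal sketch (the property name is only proposed in this JIRA and does not 
exist in any released Hadoop):

{code}
import org.apache.hadoop.conf.Configuration;

public class WriteInPlaceFlag {
  public static void main(String[] args) {
    Configuration conf = new Configuration();
    // Hypothetical flag proposed in this JIRA; defaults to false.
    boolean writeInPlace = conf.getBoolean("dfs.support.writeInPlace", false);
    System.out.println("write-in-place enabled: " + writeInPlace);
  }
}
{code}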

> Advance Hadoop Architecture (AHA) - HDFS
> 
>
> Key: HDFS-9607
> URL: https://issues.apache.org/jira/browse/HDFS-9607
> Project: Hadoop HDFS
>  Issue Type: New Feature
>Reporter: Dinesh S. Atreya
>
> Link to Umbrella JIRA
> https://issues.apache.org/jira/browse/HADOOP-12620 
> Provide capability to carry out in-place writes/updates. Only writes in-place 
> are supported where the existing length does not change.
> For example, "Hello World" can be replaced by "Hello HDFS!"
> See 
> https://issues.apache.org/jira/browse/HADOOP-12620?focusedCommentId=15046300&page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#comment-15046300
>  for more details.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

