Auto-Re: [jira] [Commented] (HDFS-8652) Track BlockInfo instead of Block in CorruptReplicasMap

2015-06-23 Thread wsb
Your email has been received! Thank you!

[jira] [Commented] (HDFS-8652) Track BlockInfo instead of Block in CorruptReplicasMap

2015-06-23 Thread Tsz Wo Nicholas Sze (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-8652?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14598959#comment-14598959
 ] 

Tsz Wo Nicholas Sze commented on HDFS-8652:
---

In BlockToMarkCorrupt, b.corrupted and b.stored may have different generation 
stamps.  Is it okay to replace b.corrupted with b.stored?

> Track BlockInfo instead of Block in CorruptReplicasMap
> --
>
> Key: HDFS-8652
> URL: https://issues.apache.org/jira/browse/HDFS-8652
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: namenode
>Reporter: Jing Zhao
>Assignee: Jing Zhao
> Attachments: HDFS-8652.000.patch
>
>
> Currently {{CorruptReplicasMap}} uses {{Block}} as its key and records the 
> list of DataNodes with corrupted replicas. For Erasure Coding, since a striped 
> block group contains multiple internal blocks with different block IDs, we 
> should use {{BlockInfo}} as the key.
> HDFS-8619 is the jira to fix this for EC. To ease merging, we will use this 
> jira to first make the changes in trunk/branch-2.
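As a rough illustration of why the key matters, here is a hypothetical, simplified sketch (not Hadoop's actual CorruptReplicasMap, Block, or BlockInfo classes; the "low 4 bits hold the index" encoding is an assumption for illustration): reported internal blocks of one striped group carry distinct IDs, so keying by the reported replica would scatter one entry per internal block, while keying by the stored group collapses them into one logical entry.

```java
import java.util.HashMap;
import java.util.Map;

// Hypothetical sketch: internal blocks of a striped group share the group ID in
// their high bits; the low 4 bits hold the index within the group (assumed here).
public class CorruptKeySketch {
    static long groupIdOf(long internalBlockId) {
        return internalBlockId & ~0xFL;
    }

    public static void main(String[] args) {
        // DataNodes report two different internal blocks of the same group as corrupt.
        long groupId = 0x100;
        long[] reportedIds = {groupId + 1, groupId + 3};

        Map<Long, Integer> corruptCountByStoredBlock = new HashMap<>();
        for (long id : reportedIds) {
            // Key by the stored block group ("BlockInfo"), not the reported replica ID.
            corruptCountByStoredBlock.merge(groupIdOf(id), 1, Integer::sum);
        }
        // One entry for the whole group instead of two per-replica entries.
        System.out.println(corruptCountByStoredBlock); // {256=2}
    }
}
```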



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)



[jira] [Commented] (HDFS-8639) Option for HTTP port of NameNode by MiniDFSClusterManager

2015-06-23 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-8639?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14598901#comment-14598901
 ] 

Hudson commented on HDFS-8639:
--

FAILURE: Integrated in Hadoop-trunk-Commit #8055 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/8055/])
HDFS-8639. Add Option for NameNode HTTP port in MiniDFSClusterManager. 
Contributed by Kai Sasaki. (jing9: rev 2ba646572185b91d6db1b09837abdcbadbfbeb49)
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/test/MiniDFSClusterManager.java
* hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt


> Option for HTTP port of NameNode by MiniDFSClusterManager
> -
>
> Key: HDFS-8639
> URL: https://issues.apache.org/jira/browse/HDFS-8639
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: test
>Affects Versions: 2.7.0
>Reporter: Kai Sasaki
>Assignee: Kai Sasaki
>Priority: Minor
> Fix For: 2.8.0
>
> Attachments: HDFS-8639.00.patch
>
>
> Current {{MiniDFSClusterManager}} uses 0 as the default RPC port and HTTP 
> port. In a system test with {{MiniDFSCluster}}, the randomly assigned HTTP 
> port makes debugging difficult. 
> We can add an option to configure the HTTP port for the NN web UI.
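The pain point above comes from how port 0 works: the OS hands back an arbitrary free ephemeral port, different on each run. A minimal, Hadoop-free sketch of that behavior (plain sockets only; MiniDFSClusterManager's actual option names are not shown here):

```java
import java.io.IOException;
import java.net.ServerSocket;

public class EphemeralPortDemo {
    // Binding to port 0 asks the OS to pick any currently free port.
    static int bindPortZero() throws IOException {
        try (ServerSocket s = new ServerSocket(0)) {
            return s.getLocalPort(); // the OS-assigned port; unpredictable across runs
        }
    }

    public static void main(String[] args) throws IOException {
        System.out.println("NN web UI would land on port " + bindPortZero());
    }
}
```

A fixed, configurable port removes this nondeterminism, which is all the proposed option does.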





[jira] [Commented] (HDFS-7285) Erasure Coding Support inside HDFS

2015-06-23 Thread Zhe Zhang (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-7285?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14598902#comment-14598902
 ] 

Zhe Zhang commented on HDFS-7285:
-

Thanks Vinay for reviewing the {{BlockInfo}} code! Those are good catches. I 
will address them in the 2nd pass that I'm working on.

> Erasure Coding Support inside HDFS
> --
>
> Key: HDFS-7285
> URL: https://issues.apache.org/jira/browse/HDFS-7285
> Project: Hadoop HDFS
>  Issue Type: New Feature
>Reporter: Weihua Jiang
>Assignee: Zhe Zhang
> Attachments: ECAnalyzer.py, ECParser.py, HDFS-7285-initial-PoC.patch, 
> HDFS-bistriped.patch, HDFSErasureCodingDesign-20141028.pdf, 
> HDFSErasureCodingDesign-20141217.pdf, HDFSErasureCodingDesign-20150204.pdf, 
> HDFSErasureCodingDesign-20150206.pdf, HDFSErasureCodingPhaseITestPlan.pdf, 
> fsimage-analysis-20150105.pdf
>
>
> Erasure Coding (EC) can greatly reduce storage overhead without sacrificing 
> data reliability, compared to the existing HDFS 3-replica approach. For 
> example, if we use a 10+4 Reed-Solomon coding, we can tolerate the loss of 4 
> blocks, with a storage overhead of only 40%. This makes EC a quite attractive 
> alternative for big data storage, particularly for cold data. 
> Facebook had a related open source project called HDFS-RAID. It used to be 
> one of the contrib packages in HDFS but was removed in Hadoop 2.0 for 
> maintenance reasons. Its drawbacks are: 1) it sits on top of HDFS and depends 
> on MapReduce to do encoding and decoding tasks; 2) it can only be used for 
> cold files that are not intended to be appended anymore; 3) the pure Java EC 
> coding implementation is extremely slow in practical use. For these reasons, 
> it might not be a good idea to just bring HDFS-RAID back.
> We (Intel and Cloudera) are working on a design to build EC into HDFS that 
> gets rid of any external dependencies, making it self-contained and 
> independently maintained. This design layers the EC feature on the 
> storage-type support and is designed to be compatible with existing HDFS 
> features such as caching, snapshots, encryption, and high availability. It 
> will also support different EC coding schemes, implementations, and policies 
> for different deployment scenarios. By utilizing advanced libraries (e.g. the 
> Intel ISA-L library), an implementation can greatly improve the performance of 
> EC encoding/decoding and make the EC solution even more attractive. We will 
> post the design document soon. 
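The overhead figures quoted above check out arithmetically; a throwaway sketch (not Hadoop code) of the back-of-envelope calculation:

```java
public class EcOverhead {
    // k data blocks + m parity blocks: overhead is m/k, and any m lost blocks
    // out of the k+m are tolerable.
    static double overhead(int dataBlocks, int parityBlocks) {
        return (double) parityBlocks / dataBlocks;
    }

    public static void main(String[] args) {
        System.out.println("RS(10,4): " + overhead(10, 4)); // 0.4 -> the 40% above
        System.out.println("3-replica: " + overhead(1, 2)); // 2.0 -> 200%
    }
}
```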





[jira] [Updated] (HDFS-8567) Erasure Coding: SafeMode handles file smaller than a full stripe

2015-06-23 Thread Jing Zhao (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-8567?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jing Zhao updated HDFS-8567:

   Resolution: Fixed
Fix Version/s: HDFS-7285
 Hadoop Flags: Reviewed
   Status: Resolved  (was: Patch Available)

+1 for the 02 patch. I've committed it to the feature branch. Thanks 
[~walter.k.su] for the contribution!

> Erasure Coding: SafeMode handles file smaller than a full stripe
> 
>
> Key: HDFS-8567
> URL: https://issues.apache.org/jira/browse/HDFS-8567
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: Walter Su
>Assignee: Walter Su
> Fix For: HDFS-7285
>
> Attachments: HDFS-8567-HDFS-7285.00.patch, 
> HDFS-8567-HDFS-7285.01.patch, HDFS-8567-HDFS-7285.02.patch, HDFS-8567.00.patch
>
>
> Upload 3 small files and restart the NN; it cannot leave safe mode.






[jira] [Commented] (HDFS-8639) Option for HTTP port of NameNode by MiniDFSClusterManager

2015-06-23 Thread Kai Sasaki (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-8639?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14598885#comment-14598885
 ] 

Kai Sasaki commented on HDFS-8639:
--

[~jingzhao] Thank you so much!

> Option for HTTP port of NameNode by MiniDFSClusterManager
> -
>
> Key: HDFS-8639
> URL: https://issues.apache.org/jira/browse/HDFS-8639
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: test
>Affects Versions: 2.7.0
>Reporter: Kai Sasaki
>Assignee: Kai Sasaki
>Priority: Minor
> Fix For: 2.8.0
>
> Attachments: HDFS-8639.00.patch
>
>
> Current {{MiniDFSClusterManager}} uses 0 as the default RPC port and HTTP 
> port. In a system test with {{MiniDFSCluster}}, the randomly assigned HTTP 
> port makes debugging difficult. 
> We can add an option to configure the HTTP port for the NN web UI.






[jira] [Updated] (HDFS-8639) Option for HTTP port of NameNode by MiniDFSClusterManager

2015-06-23 Thread Jing Zhao (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-8639?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jing Zhao updated HDFS-8639:

Priority: Minor  (was: Trivial)

> Option for HTTP port of NameNode by MiniDFSClusterManager
> -
>
> Key: HDFS-8639
> URL: https://issues.apache.org/jira/browse/HDFS-8639
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: test
>Affects Versions: 2.7.0
>Reporter: Kai Sasaki
>Assignee: Kai Sasaki
>Priority: Minor
> Fix For: 2.8.0
>
> Attachments: HDFS-8639.00.patch
>
>
> Current {{MiniDFSClusterManager}} uses 0 as the default RPC port and HTTP 
> port. In a system test with {{MiniDFSCluster}}, the randomly assigned HTTP 
> port makes debugging difficult. 
> We can add an option to configure the HTTP port for the NN web UI.






[jira] [Updated] (HDFS-8639) Option for HTTP port of NameNode by MiniDFSClusterManager

2015-06-23 Thread Jing Zhao (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-8639?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jing Zhao updated HDFS-8639:

   Resolution: Fixed
Fix Version/s: 2.8.0
 Hadoop Flags: Reviewed
   Status: Resolved  (was: Patch Available)

I've committed this to trunk and branch-2. Thanks [~kaisasak] for the 
contribution!

> Option for HTTP port of NameNode by MiniDFSClusterManager
> -
>
> Key: HDFS-8639
> URL: https://issues.apache.org/jira/browse/HDFS-8639
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: test
>Affects Versions: 2.7.0
>Reporter: Kai Sasaki
>Assignee: Kai Sasaki
>Priority: Trivial
> Fix For: 2.8.0
>
> Attachments: HDFS-8639.00.patch
>
>
> Current {{MiniDFSClusterManager}} uses 0 as the default RPC port and HTTP 
> port. In a system test with {{MiniDFSCluster}}, the randomly assigned HTTP 
> port makes debugging difficult. 
> We can add an option to configure the HTTP port for the NN web UI.






[jira] [Commented] (HDFS-8639) Option for HTTP port of NameNode by MiniDFSClusterManager

2015-06-23 Thread Jing Zhao (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-8639?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14598881#comment-14598881
 ] 

Jing Zhao commented on HDFS-8639:
-

Thanks for working on this, [~kaisasak]! The patch looks good to me. +1. I will 
commit it shortly.

> Option for HTTP port of NameNode by MiniDFSClusterManager
> -
>
> Key: HDFS-8639
> URL: https://issues.apache.org/jira/browse/HDFS-8639
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: test
>Affects Versions: 2.7.0
>Reporter: Kai Sasaki
>Assignee: Kai Sasaki
>Priority: Trivial
> Attachments: HDFS-8639.00.patch
>
>
> Current {{MiniDFSClusterManager}} uses 0 as the default RPC port and HTTP 
> port. In a system test with {{MiniDFSCluster}}, the randomly assigned HTTP 
> port makes debugging difficult. 
> We can add an option to configure the HTTP port for the NN web UI.






[jira] [Assigned] (HDFS-5277) hadoop fs -expunge does not work for federated namespace

2015-06-23 Thread J.Andreina (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-5277?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

J.Andreina reassigned HDFS-5277:


Assignee: J.Andreina

> hadoop fs -expunge does not work for federated namespace 
> -
>
> Key: HDFS-5277
> URL: https://issues.apache.org/jira/browse/HDFS-5277
> Project: Hadoop HDFS
>  Issue Type: Bug
>Affects Versions: 2.0.5-alpha
>Reporter: Vrushali C
>Assignee: J.Andreina
>
> We noticed that the hadoop fs -expunge command does not work across federated 
> namespaces. It seems to look only at /user//.Trash instead of 
> traversing all available namespaces and expunging from each individual 
> namespace.






[jira] [Commented] (HDFS-5277) hadoop fs -expunge does not work for federated namespace

2015-06-23 Thread J.Andreina (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-5277?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14598858#comment-14598858
 ] 

J.Andreina commented on HDFS-5277:
--

I would like to work on this issue.
[~vrushalic], if you are working on this, please reassign it to yourself.

> hadoop fs -expunge does not work for federated namespace 
> -
>
> Key: HDFS-5277
> URL: https://issues.apache.org/jira/browse/HDFS-5277
> Project: Hadoop HDFS
>  Issue Type: Bug
>Affects Versions: 2.0.5-alpha
>Reporter: Vrushali C
>
> We noticed that the hadoop fs -expunge command does not work across federated 
> namespaces. It seems to look only at /user//.Trash instead of 
> traversing all available namespaces and expunging from each individual 
> namespace.






[jira] [Updated] (HDFS-8623) Refactor NameNode handling of invalid, corrupt, and under-recovery blocks

2015-06-23 Thread Zhe Zhang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-8623?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Zhe Zhang updated HDFS-8623:

Attachment: HDFS-8623.03.patch

Sorry, I attached a stale patch (with 1 wrong line of code). Updating the patch 
and triggering Jenkins again.

> Refactor NameNode handling of invalid, corrupt, and under-recovery blocks
> -
>
> Key: HDFS-8623
> URL: https://issues.apache.org/jira/browse/HDFS-8623
> Project: Hadoop HDFS
>  Issue Type: New Feature
>Affects Versions: 2.7.0
>Reporter: Zhe Zhang
>Assignee: Zhe Zhang
> Attachments: HDFS-8623.00.patch, HDFS-8623.01.patch, 
> HDFS-8623.02.patch, HDFS-8623.03.patch
>
>
> In order to support striped blocks in invalid, corrupt, and under-recovery 
> block handling, HDFS-7907 introduced some refactors. This JIRA aims to merge 
> these changes to trunk first, to minimize and clean up the HDFS-7285 merge 
> patch so that it only contains striping/EC logic.






[jira] [Updated] (HDFS-8656) Preserve compatibility of ClientProtocol#rollingUpgrade after finalization

2015-06-23 Thread Andrew Wang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-8656?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Andrew Wang updated HDFS-8656:
--
Attachment: hdfs-8656.002.patch

Rebase on the new multi-SbNN support in trunk.

> Preserve compatibility of ClientProtocol#rollingUpgrade after finalization
> --
>
> Key: HDFS-8656
> URL: https://issues.apache.org/jira/browse/HDFS-8656
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: rolling upgrades
>Affects Versions: 2.8.0
>Reporter: Andrew Wang
>Assignee: Andrew Wang
>Priority: Critical
> Attachments: hdfs-8656.001.patch, hdfs-8656.002.patch
>
>
> HDFS-7645 changed rollingUpgradeInfo to still return an RUInfo after 
> finalization, so the DNs can differentiate between rollback and 
> finalization. However, this breaks compatibility for the user-facing APIs, 
> which always expect a null after finalization. Let's fix this and codify it in 
> unit tests.
> As an additional improvement, isFinalized and isStarted are part of the Java 
> API but not in the JMX output of RollingUpgradeInfo. It'd be nice to expose 
> these booleans so JMX users don't need to do the != 0 check that possibly 
> exposes our implementation details.
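The JMX point in the last paragraph can be pictured without Hadoop at all. In this sketch the field names mirror the description but are assumptions, not the real RollingUpgradeInfo class:

```java
public class RollingUpgradeInfoSketch {
    private final long startTime;     // 0 means "not started" (assumed encoding)
    private final long finalizeTime;  // 0 means "not finalized" (assumed encoding)

    public RollingUpgradeInfoSketch(long startTime, long finalizeTime) {
        this.startTime = startTime;
        this.finalizeTime = finalizeTime;
    }

    // Exposing these as attributes spares JMX consumers the "!= 0" check
    // that leaks the timestamp encoding.
    public boolean isStarted()   { return startTime != 0; }
    public boolean isFinalized() { return finalizeTime != 0; }
}
```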






[jira] [Commented] (HDFS-8655) Refactor accesses to INodeFile#blocks

2015-06-23 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-8655?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14598818#comment-14598818
 ] 

Hadoop QA commented on HDFS-8655:
-

\\
\\
| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | pre-patch |  17m 50s | Pre-patch trunk compilation is 
healthy. |
| {color:green}+1{color} | @author |   0m  0s | The patch does not contain any 
@author tags. |
| {color:red}-1{color} | tests included |   0m  0s | The patch doesn't appear 
to include any new or modified tests.  Please justify why no new tests are 
needed for this patch. Also please list what manual steps were performed to 
verify this patch. |
| {color:green}+1{color} | javac |   7m 36s | There were no new javac warning 
messages. |
| {color:green}+1{color} | javadoc |   9m 30s | There were no new javadoc 
warning messages. |
| {color:green}+1{color} | release audit |   0m 23s | The applied patch does 
not increase the total number of release audit warnings. |
| {color:red}-1{color} | checkstyle |   2m 14s | The applied patch generated  1 
new checkstyle issues (total was 41, now 37). |
| {color:red}-1{color} | whitespace |   0m  0s | The patch has 1  line(s) that 
end in whitespace. Use git apply --whitespace=fix. |
| {color:green}+1{color} | install |   1m 35s | mvn install still works. |
| {color:green}+1{color} | eclipse:eclipse |   0m 33s | The patch built with 
eclipse:eclipse. |
| {color:green}+1{color} | findbugs |   3m 14s | The patch does not introduce 
any new Findbugs (version 3.0.0) warnings. |
| {color:green}+1{color} | native |   3m 16s | Pre-build of native portion |
| {color:red}-1{color} | hdfs tests | 156m 54s | Tests failed in hadoop-hdfs. |
| | | 203m 10s | |
\\
\\
|| Reason || Tests ||
| Failed unit tests | hadoop.hdfs.web.TestWebHDFS |
\\
\\
|| Subsystem || Report/Notes ||
| Patch URL | 
http://issues.apache.org/jira/secure/attachment/12741398/HDFS-8655.00.patch |
| Optional Tests | javadoc javac unit findbugs checkstyle |
| git revision | trunk / 122cad6 |
| checkstyle |  
https://builds.apache.org/job/PreCommit-HDFS-Build/11456/artifact/patchprocess/diffcheckstylehadoop-hdfs.txt
 |
| whitespace | 
https://builds.apache.org/job/PreCommit-HDFS-Build/11456/artifact/patchprocess/whitespace.txt
 |
| hadoop-hdfs test log | 
https://builds.apache.org/job/PreCommit-HDFS-Build/11456/artifact/patchprocess/testrun_hadoop-hdfs.txt
 |
| Test Results | 
https://builds.apache.org/job/PreCommit-HDFS-Build/11456/testReport/ |
| Java | 1.7.0_55 |
| uname | Linux asf901.gq1.ygridcore.net 3.13.0-36-lowlatency #63-Ubuntu SMP 
PREEMPT Wed Sep 3 21:56:12 UTC 2014 x86_64 x86_64 x86_64 GNU/Linux |
| Console output | 
https://builds.apache.org/job/PreCommit-HDFS-Build/11456/console |


This message was automatically generated.

> Refactor accesses to INodeFile#blocks
> -
>
> Key: HDFS-8655
> URL: https://issues.apache.org/jira/browse/HDFS-8655
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: namenode
>Affects Versions: 2.7.0
>Reporter: Zhe Zhang
>Assignee: Zhe Zhang
> Attachments: HDFS-8655.00.patch
>
>
> When enabling INodeFile support for striped blocks (mainly in HDFS-7749), 
> the HDFS-7285 branch generalized the concept of blocks under an inode. Now 
> {{INodeFile#blocks}} only contains the contiguous blocks of an inode. This 
> JIRA separates out the code refactors for this purpose. Two main changes:
> # Rename {{setBlocks}} to {{setContiguousBlocks}}
> # Replace direct accesses to {{INodeFile#blocks}} with {{getBlocks}}
> It also contains some code cleanups introduced in the branch.
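The two changes can be pictured with a toy class (a simplified stand-in, not the real INodeFile; the field type is faked as `long[]` to stay self-contained):

```java
import java.util.Arrays;

public class INodeFileSketch {
    // Stand-in for BlockInfo[]: after the refactor this field holds contiguous blocks only.
    private long[] blocks = new long[0];

    // Renamed from setBlocks: the name now says it does not cover striped blocks.
    public void setContiguousBlocks(long[] contiguous) {
        this.blocks = contiguous;
    }

    // All reads funnel through the getter, so striped-block support can later
    // change the backing logic in one place rather than at every field access.
    public long[] getBlocks() {
        return blocks;
    }

    public static void main(String[] args) {
        INodeFileSketch f = new INodeFileSketch();
        f.setContiguousBlocks(new long[]{101, 102});
        System.out.println(Arrays.toString(f.getBlocks())); // [101, 102]
    }
}
```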






[jira] [Updated] (HDFS-8567) Erasure Coding: SafeMode handles file smaller than a full stripe

2015-06-23 Thread Walter Su (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-8567?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Walter Su updated HDFS-8567:

Attachment: HDFS-8567-HDFS-7285.02.patch

You are right: triggerHeartBeats() doesn't wait for the first block report to 
finish; I should have used triggerBlockReports().
The uploaded 02 patch fixes the failed test.

> Erasure Coding: SafeMode handles file smaller than a full stripe
> 
>
> Key: HDFS-8567
> URL: https://issues.apache.org/jira/browse/HDFS-8567
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: Walter Su
>Assignee: Walter Su
> Attachments: HDFS-8567-HDFS-7285.00.patch, 
> HDFS-8567-HDFS-7285.01.patch, HDFS-8567-HDFS-7285.02.patch, HDFS-8567.00.patch
>
>
> Upload 3 small files and restart the NN; it cannot leave safe mode.





[jira] [Commented] (HDFS-8646) Prune cached replicas from DatanodeDescriptor state on replica invalidation

2015-06-23 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-8646?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14598811#comment-14598811
 ] 

Hadoop QA commented on HDFS-8646:
-

\\
\\
| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | pre-patch |  17m 48s | Pre-patch trunk compilation is 
healthy. |
| {color:green}+1{color} | @author |   0m  0s | The patch does not contain any 
@author tags. |
| {color:green}+1{color} | tests included |   0m  0s | The patch appears to 
include 3 new or modified test files. |
| {color:green}+1{color} | javac |   7m 30s | There were no new javac warning 
messages. |
| {color:green}+1{color} | javadoc |   9m 35s | There were no new javadoc 
warning messages. |
| {color:green}+1{color} | release audit |   0m 22s | The applied patch does 
not increase the total number of release audit warnings. |
| {color:red}-1{color} | checkstyle |   2m 14s | The applied patch generated  3 
new checkstyle issues (total was 655, now 647). |
| {color:green}+1{color} | whitespace |   0m  0s | The patch has no lines that 
end in whitespace. |
| {color:green}+1{color} | install |   1m 34s | mvn install still works. |
| {color:green}+1{color} | eclipse:eclipse |   0m 33s | The patch built with 
eclipse:eclipse. |
| {color:green}+1{color} | findbugs |   3m 15s | The patch does not introduce 
any new Findbugs (version 3.0.0) warnings. |
| {color:green}+1{color} | native |   3m 13s | Pre-build of native portion |
| {color:green}+1{color} | hdfs tests | 160m  2s | Tests passed in hadoop-hdfs. 
|
| | | 206m  9s | |
\\
\\
|| Subsystem || Report/Notes ||
| Patch URL | 
http://issues.apache.org/jira/secure/attachment/12741401/hdfs-8646.002.patch |
| Optional Tests | javadoc javac unit findbugs checkstyle |
| git revision | trunk / 122cad6 |
| checkstyle |  
https://builds.apache.org/job/PreCommit-HDFS-Build/11455/artifact/patchprocess/diffcheckstylehadoop-hdfs.txt
 |
| hadoop-hdfs test log | 
https://builds.apache.org/job/PreCommit-HDFS-Build/11455/artifact/patchprocess/testrun_hadoop-hdfs.txt
 |
| Test Results | 
https://builds.apache.org/job/PreCommit-HDFS-Build/11455/testReport/ |
| Java | 1.7.0_55 |
| uname | Linux asf905.gq1.ygridcore.net 3.13.0-36-lowlatency #63-Ubuntu SMP 
PREEMPT Wed Sep 3 21:56:12 UTC 2014 x86_64 x86_64 x86_64 GNU/Linux |
| Console output | 
https://builds.apache.org/job/PreCommit-HDFS-Build/11455/console |


This message was automatically generated.

> Prune cached replicas from DatanodeDescriptor state on replica invalidation
> ---
>
> Key: HDFS-8646
> URL: https://issues.apache.org/jira/browse/HDFS-8646
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: caching
>Affects Versions: 2.3.0
>Reporter: Andrew Wang
>Assignee: Andrew Wang
> Attachments: hdfs-8646.001.patch, hdfs-8646.002.patch
>
>
> Currently we remove blocks from the DD's CachedBlockLists on node failure and 
> on cache report, but not on replica invalidation. This can lead to an invalid 
> situation where we return a LocatedBlock with cached locations that are not 
> backed by an on-disk replica.
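The invariant the description wants — never report a cached location that has no on-disk replica behind it — can be sketched generically. Names here are illustrative, not the DatanodeDescriptor or CachedBlockList API:

```java
import java.util.HashMap;
import java.util.HashSet;
import java.util.Map;
import java.util.Set;

public class CachedReplicaPruneSketch {
    private final Map<Long, Set<String>> diskLocs = new HashMap<>();   // blockId -> DNs with on-disk replica
    private final Map<Long, Set<String>> cachedLocs = new HashMap<>(); // blockId -> DNs caching the block

    public void addReplica(long blockId, String dn, boolean cached) {
        diskLocs.computeIfAbsent(blockId, k -> new HashSet<>()).add(dn);
        if (cached) {
            cachedLocs.computeIfAbsent(blockId, k -> new HashSet<>()).add(dn);
        }
    }

    public void invalidateReplica(long blockId, String dn) {
        Set<String> disk = diskLocs.get(blockId);
        if (disk != null) disk.remove(dn);
        // The step the JIRA adds: prune the cached location on invalidation too,
        // so a lookup never returns a cache-only location.
        Set<String> cached = cachedLocs.get(blockId);
        if (cached != null) cached.remove(dn);
    }

    public Set<String> cachedLocations(long blockId) {
        return cachedLocs.getOrDefault(blockId, Set.of());
    }
}
```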






[jira] [Commented] (HDFS-8656) Preserve compatibility of ClientProtocol#rollingUpgrade after finalization

2015-06-23 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-8656?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14598802#comment-14598802
 ] 

Hadoop QA commented on HDFS-8656:
-

\\
\\
| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:red}-1{color} | patch |   0m  0s | The patch command could not apply 
the patch during dryrun. |
\\
\\
|| Subsystem || Report/Notes ||
| Patch URL | 
http://issues.apache.org/jira/secure/attachment/12741418/hdfs-8656.001.patch |
| Optional Tests | javadoc javac unit findbugs checkstyle |
| git revision | trunk / 49dfad9 |
| Console output | 
https://builds.apache.org/job/PreCommit-HDFS-Build/11457/console |


This message was automatically generated.

> Preserve compatibility of ClientProtocol#rollingUpgrade after finalization
> --
>
> Key: HDFS-8656
> URL: https://issues.apache.org/jira/browse/HDFS-8656
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: rolling upgrades
>Affects Versions: 2.8.0
>Reporter: Andrew Wang
>Assignee: Andrew Wang
>Priority: Critical
> Attachments: hdfs-8656.001.patch
>
>
> HDFS-7645 changed rollingUpgradeInfo to still return an RUInfo after 
> finalization, so the DNs can differentiate between rollback and 
> finalization. However, this breaks compatibility for the user-facing APIs, 
> which always expect a null after finalization. Let's fix this and codify it in 
> unit tests.
> As an additional improvement, isFinalized and isStarted are part of the Java 
> API but not in the JMX output of RollingUpgradeInfo. It'd be nice to expose 
> these booleans so JMX users don't need to do the != 0 check that possibly 
> exposes our implementation details.






[jira] [Commented] (HDFS-8644) OzoneHandler : Add volume handler

2015-06-23 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-8644?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14598772#comment-14598772
 ] 

Hadoop QA commented on HDFS-8644:
-

\\
\\
| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | pre-patch |  17m 53s | Pre-patch HDFS-7240 compilation 
is healthy. |
| {color:green}+1{color} | @author |   0m  0s | The patch does not contain any 
@author tags. |
| {color:green}+1{color} | tests included |   0m  0s | The patch appears to 
include 2 new or modified test files. |
| {color:green}+1{color} | javac |   7m 44s | There were no new javac warning 
messages. |
| {color:green}+1{color} | javadoc |   9m 55s | There were no new javadoc 
warning messages. |
| {color:red}-1{color} | release audit |   0m 18s | The applied patch generated 
1 release audit warnings. |
| {color:red}-1{color} | checkstyle |   2m 16s | The applied patch generated  3 
new checkstyle issues (total was 7, now 10). |
| {color:green}+1{color} | whitespace |   0m  1s | The patch has no lines that 
end in whitespace. |
| {color:green}+1{color} | install |   1m 36s | mvn install still works. |
| {color:green}+1{color} | eclipse:eclipse |   0m 34s | The patch built with 
eclipse:eclipse. |
| {color:red}-1{color} | findbugs |   3m 27s | The patch appears to introduce 1 
new Findbugs (version 3.0.0) warnings. |
| {color:green}+1{color} | native |   3m 14s | Pre-build of native portion |
| {color:red}-1{color} | hdfs tests | 170m 43s | Tests failed in hadoop-hdfs. |
| | | 217m 46s | |
\\
\\
|| Reason || Tests ||
| FindBugs | module:hadoop-hdfs |
| Failed unit tests | hadoop.hdfs.server.namenode.TestFileTruncate |
|   | hadoop.hdfs.web.TestWebHDFSXAttr |
|   | hadoop.hdfs.server.blockmanagement.TestBlockReportRateLimiting |
| Timed out tests | org.apache.hadoop.hdfs.server.mover.TestStorageMover |
\\
\\
|| Subsystem || Report/Notes ||
| Patch URL | 
http://issues.apache.org/jira/secure/attachment/12741362/hdfs-8644-HDFS-7240.001.patch
 |
| Optional Tests | javadoc javac unit findbugs checkstyle |
| git revision | HDFS-7240 / f08bf36 |
| Release Audit | 
https://builds.apache.org/job/PreCommit-HDFS-Build/11454/artifact/patchprocess/patchReleaseAuditProblems.txt
 |
| checkstyle |  
https://builds.apache.org/job/PreCommit-HDFS-Build/11454/artifact/patchprocess/diffcheckstylehadoop-hdfs.txt
 |
| Findbugs warnings | 
https://builds.apache.org/job/PreCommit-HDFS-Build/11454/artifact/patchprocess/newPatchFindbugsWarningshadoop-hdfs.html
 |
| hadoop-hdfs test log | 
https://builds.apache.org/job/PreCommit-HDFS-Build/11454/artifact/patchprocess/testrun_hadoop-hdfs.txt
 |
| Test Results | 
https://builds.apache.org/job/PreCommit-HDFS-Build/11454/testReport/ |
| Java | 1.7.0_55 |
| uname | Linux asf904.gq1.ygridcore.net 3.13.0-36-lowlatency #63-Ubuntu SMP 
PREEMPT Wed Sep 3 21:56:12 UTC 2014 x86_64 x86_64 x86_64 GNU/Linux |
| Console output | 
https://builds.apache.org/job/PreCommit-HDFS-Build/11454/console |


This message was automatically generated.

> OzoneHandler : Add volume handler
> -
>
> Key: HDFS-8644
> URL: https://issues.apache.org/jira/browse/HDFS-8644
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: Anu Engineer
>Assignee: Anu Engineer
> Attachments: hdfs-8644-HDFS-7240.001.patch
>
>
> Add volume handler logic that dispatches volume-related calls to the right 
> interface.






[jira] [Commented] (HDFS-6564) Use slf4j instead of common-logging in hdfs-client

2015-06-23 Thread Rakesh R (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-6564?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14598770#comment-14598770
 ] 

Rakesh R commented on HDFS-6564:


Thanks [~wheat9] for the review and commit. Thanks [~busbey] for the review.

> Use slf4j instead of common-logging in hdfs-client
> --
>
> Key: HDFS-6564
> URL: https://issues.apache.org/jira/browse/HDFS-6564
> Project: Hadoop HDFS
>  Issue Type: Bug
>Reporter: Haohui Mai
>Assignee: Rakesh R
> Fix For: 2.8.0
>
> Attachments: HDFS-6564-01.patch, HDFS-6564-02.patch, 
> HDFS-6564-03.patch
>
>
> hdfs-client should depend on slf4j instead of common-logging.






[jira] [Updated] (HDFS-8493) Consolidate truncate() related implementation in a single class

2015-06-23 Thread Rakesh R (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-8493?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Rakesh R updated HDFS-8493:
---
Attachment: HDFS-8493-008.patch

> Consolidate truncate() related implementation in a single class
> ---
>
> Key: HDFS-8493
> URL: https://issues.apache.org/jira/browse/HDFS-8493
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: Haohui Mai
>Assignee: Rakesh R
> Attachments: HDFS-8493-001.patch, HDFS-8493-002.patch, 
> HDFS-8493-003.patch, HDFS-8493-004.patch, HDFS-8493-005.patch, 
> HDFS-8493-006.patch, HDFS-8493-007.patch, HDFS-8493-007.patch
>
>
> This jira proposes to consolidate truncate() related methods into a single 
> class.





[jira] [Updated] (HDFS-8493) Consolidate truncate() related implementation in a single class

2015-06-23 Thread Rakesh R (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-8493?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Rakesh R updated HDFS-8493:
---
Attachment: (was: HDFS-8493-008.patch)

> Consolidate truncate() related implementation in a single class
> ---
>
> Key: HDFS-8493
> URL: https://issues.apache.org/jira/browse/HDFS-8493
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: Haohui Mai
>Assignee: Rakesh R
> Attachments: HDFS-8493-001.patch, HDFS-8493-002.patch, 
> HDFS-8493-003.patch, HDFS-8493-004.patch, HDFS-8493-005.patch, 
> HDFS-8493-006.patch, HDFS-8493-007.patch, HDFS-8493-007.patch
>
>
> This jira proposes to consolidate truncate() related methods into a single 
> class.






[jira] [Commented] (HDFS-8496) Calling stopWriter() with FSDatasetImpl lock held may block other threads

2015-06-23 Thread zhouyingchao (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-8496?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14598757#comment-14598757
 ] 

zhouyingchao commented on HDFS-8496:


[~cmccabe], Any comments?

> Calling stopWriter() with FSDatasetImpl lock held may block other threads
> --
>
> Key: HDFS-8496
> URL: https://issues.apache.org/jira/browse/HDFS-8496
> Project: Hadoop HDFS
>  Issue Type: Bug
>Affects Versions: 2.6.0
>Reporter: zhouyingchao
>Assignee: zhouyingchao
> Attachments: HDFS-8496-001.patch
>
>
> On a DN of an HDFS 2.6 cluster, we noticed some DataXceiver threads and 
> heartbeat threads were blocked for quite a while on the FSDatasetImpl lock. By 
> looking at the stack, we found that calling stopWriter() with the FSDatasetImpl 
> lock held blocked everything.
> Following is the heartbeat stack, as an example, to show how threads are 
> blocked by FSDatasetImpl lock:
> {code}
>java.lang.Thread.State: BLOCKED (on object monitor)
> at 
> org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsVolumeImpl.getDfsUsed(FsVolumeImpl.java:152)
> - waiting to lock <0x0007701badc0> (a 
> org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsDatasetImpl)
> at 
> org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsVolumeImpl.getAvailable(FsVolumeImpl.java:191)
> at 
> org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsDatasetImpl.getStorageReports(FsDatasetImpl.java:144)
> - locked <0x000770465dc0> (a java.lang.Object)
> at 
> org.apache.hadoop.hdfs.server.datanode.BPServiceActor.sendHeartBeat(BPServiceActor.java:575)
> at 
> org.apache.hadoop.hdfs.server.datanode.BPServiceActor.offerService(BPServiceActor.java:680)
> at 
> org.apache.hadoop.hdfs.server.datanode.BPServiceActor.run(BPServiceActor.java:850)
> at java.lang.Thread.run(Thread.java:662)
> {code}
> The thread holding the FSDatasetImpl lock is just sleeping in stopWriter(), 
> waiting for another thread to exit. The stack is:
> {code}
>java.lang.Thread.State: TIMED_WAITING (on object monitor)
> at java.lang.Object.wait(Native Method)
> at java.lang.Thread.join(Thread.java:1194)
> - locked <0x0007636953b8> (a org.apache.hadoop.util.Daemon)
> at 
> org.apache.hadoop.hdfs.server.datanode.ReplicaInPipeline.stopWriter(ReplicaInPipeline.java:183)
> at 
> org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsDatasetImpl.recoverCheck(FsDatasetImpl.java:982)
> at 
> org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsDatasetImpl.recoverClose(FsDatasetImpl.java:1026)
> - locked <0x0007701badc0> (a 
> org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsDatasetImpl)
> at 
> org.apache.hadoop.hdfs.server.datanode.DataXceiver.writeBlock(DataXceiver.java:624)
> at 
> org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.opWriteBlock(Receiver.java:137)
> at 
> org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.processOp(Receiver.java:74)
> at 
> org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:235)
> at java.lang.Thread.run(Thread.java:662)
> {code}
> In this case, we deployed quite a lot of other workloads on the DN, so the local 
> file system and disks were quite busy. We guess this is why stopWriter() took 
> quite a long time.
> Anyway, it is not reasonable to call stopWriter() with the FSDatasetImpl 
> lock held. In HDFS-7999, createTemporary() was changed to call 
> stopWriter() without the FSDatasetImpl lock. We guess we should do the same in 
> the other three methods: recoverClose()/recoverAppend()/recoverRbw().
> I'll try to finish a patch for this today.
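The locking pattern described above can be sketched with plain Java threads. This is a simplified stand-in for the FsDatasetImpl/ReplicaInPipeline interaction (the class and method names below are hypothetical, not the actual Hadoop code): the fix is to take a reference to the writer thread under the monitor, join it outside the monitor, then re-enter the monitor to finish recovery.

```java
// Simplified stand-in for the FsDatasetImpl/ReplicaInPipeline pattern
// discussed above (hypothetical names, not the actual Hadoop classes).
class Dataset {
    private Thread writer;

    synchronized void startWriter(Runnable work) {
        writer = new Thread(work);
        writer.start();
    }

    // Anti-pattern: joining the writer while holding the dataset monitor
    // blocks every other synchronized caller until the writer exits.
    synchronized void recoverCloseHoldingLock() {
        joinQuietly(writer);
        // ... finish recovery under the lock ...
    }

    // Proposed shape: take a reference under the lock, join outside it,
    // then re-enter the lock to finish recovery.
    void recoverCloseWithoutLock() {
        Thread w;
        synchronized (this) {
            w = writer;
        }
        joinQuietly(w); // other threads can use the dataset meanwhile
        synchronized (this) {
            // ... re-check replica state and finish recovery under the lock ...
            writer = null;
        }
    }

    private static void joinQuietly(Thread t) {
        if (t == null) return;
        try {
            t.join();
        } catch (InterruptedException e) {
            Thread.currentThread().interrupt();
        }
    }
}
```

Because the join happens outside the monitor, heartbeat-style callers can still enter other synchronized methods while a slow writer is shutting down.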






[jira] [Commented] (HDFS-8649) Default ACL is not inherited if directory is generated by FileSystem.create interface

2015-06-23 Thread zhouyingchao (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-8649?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14598752#comment-14598752
 ] 

zhouyingchao commented on HDFS-8649:


[~cnauroth] Any comments ?

> Default ACL is not inherited if directory is generated by FileSystem.create 
> interface
> -
>
> Key: HDFS-8649
> URL: https://issues.apache.org/jira/browse/HDFS-8649
> Project: Hadoop HDFS
>  Issue Type: Bug
>Reporter: zhouyingchao
>Assignee: zhouyingchao
>
> I have a directory /acltest/t, whose ACL is as follows:
> {code}
> # file: /acltest/t
> # owner: hdfs_tst_admin
> # group: supergroup
> user::rwx
> group::rwx
> mask::rwx
> other::---
> default:user::rwx
> default:group::rwx
> default:mask::rwx
> default:other::rwx
> {code}
> My program creates a file /acltest/t/a/b using the FileSystem.create 
> interface. The ACL of directory /acltest/t/a is as follows:
> {code}
> # file: /acltest/t/a
> # owner: hdfs_tst_admin
> # group: supergroup
> user::rwx
> group::rwx
> mask::rwx
> other::---
> default:user::rwx
> default:group::rwx
> default:mask::rwx
> default:other::rwx
> {code}
> As you can see, the automatically created directory "a" did not inherit its 
> parent's default ACL for other.
> Looking into the implementation, the FileSystem.create interface 
> automatically creates non-existing directories in the path by calling 
> FSNamesystem.mkdirsRecursively with the third param 
> (inheritPermission) hard-coded to true. In FSNamesystem.mkdirsRecursively, when 
> inheritPermission is true, the parent's real permission (rather than one 
> calculated from the default ACL) is used as the new directory's permission.
> Is this behavior correct? The default ACL does not work as people expect, 
> which causes many access issues in our setup.
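The discrepancy reported above can be modeled with plain permission bits. This is an illustrative sketch only (not the FSNamesystem code): with inheritPermission=true the child copies the parent's real mode, whereas default-ACL inheritance would derive the child's mode from the parent's default entries, so the "other" bits come out differently.

```java
// Simplified bit-level model of the two behaviors discussed above
// (illustrative only; not the FSNamesystem implementation).
final class AclInheritDemo {
    // /acltest/t from the report: user::rwx, group::rwx, other::--- -> 0770,
    // while its default entries (default:other::rwx etc.) would give 0777.
    static final int PARENT_MODE = 0770;
    static final int PARENT_DEFAULT = 0777;

    // mkdirsRecursively with inheritPermission=true copies the parent's
    // real mode, so "other" stays --- on the new directory.
    static int withInheritPermission() {
        return PARENT_MODE;
    }

    // Default-ACL inheritance would instead derive the mode from the
    // parent's default entries, giving "other" rwx.
    static int withDefaultAcl() {
        return PARENT_DEFAULT;
    }

    // Extract the low three ("other") permission bits of a mode.
    static int otherBits(int mode) {
        return mode & 07;
    }
}
```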






[jira] [Updated] (HDFS-7894) Rolling upgrade readiness is not updated in jmx until query command is issued.

2015-06-23 Thread Brahma Reddy Battula (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-7894?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Brahma Reddy Battula updated HDFS-7894:
---
Labels:   (was: BB2015-05-TBR)

> Rolling upgrade readiness is not updated in jmx until query command is issued.
> --
>
> Key: HDFS-7894
> URL: https://issues.apache.org/jira/browse/HDFS-7894
> Project: Hadoop HDFS
>  Issue Type: Bug
>Reporter: Kihwal Lee
>Assignee: Brahma Reddy Battula
>Priority: Critical
> Fix For: 2.7.1
>
> Attachments: HDFS-7894-002.patch, HDFS-7894-003.patch, HDFS-7894.patch
>
>
> When an HDFS rolling upgrade is started and a rollback image is 
> created/uploaded, the active NN does not update its {{rollingUpgradeInfo}} 
> until it receives a query command via RPC. This results in inconsistent info 
> showing up in the web UI and its JMX page.





[jira] [Commented] (HDFS-6440) Support more than 2 NameNodes

2015-06-23 Thread Jesse Yates (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-6440?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14598736#comment-14598736
 ] 

Jesse Yates commented on HDFS-6440:
---

Yeah, that failure looks wildly unrelated. Someone messing about with the poms?

> Support more than 2 NameNodes
> -
>
> Key: HDFS-6440
> URL: https://issues.apache.org/jira/browse/HDFS-6440
> Project: Hadoop HDFS
>  Issue Type: New Feature
>  Components: auto-failover, ha, namenode
>Affects Versions: 2.4.0
>Reporter: Jesse Yates
>Assignee: Jesse Yates
> Fix For: 3.0.0
>
> Attachments: Multiple-Standby-NameNodes_V1.pdf, 
> hdfs-6440-cdh-4.5-full.patch, hdfs-6440-trunk-v1.patch, 
> hdfs-6440-trunk-v1.patch, hdfs-6440-trunk-v3.patch, hdfs-6440-trunk-v4.patch, 
> hdfs-6440-trunk-v5.patch, hdfs-6440-trunk-v6.patch, hdfs-6440-trunk-v7.patch, 
> hdfs-6440-trunk-v8.patch, hdfs-multiple-snn-trunk-v0.patch
>
>
> Most of the work is already done to support more than 2 NameNodes (one 
> active, one standby). This would be the last bit to support running multiple 
> _standby_ NameNodes; one of the standbys should be available for fail-over.
> Mostly, this is a matter of updating how we parse configurations, some 
> complexity around managing the checkpointing, and updating a whole lot of 
> tests.






[jira] [Commented] (HDFS-6440) Support more than 2 NameNodes

2015-06-23 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-6440?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14598733#comment-14598733
 ] 

Hudson commented on HDFS-6440:
--

FAILURE: Integrated in Hadoop-trunk-Commit #8054 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/8054/])
HDFS-6440. Support more than 2 NameNodes. Contributed by Jesse Yates. (atm: rev 
49dfad942970459297f72632ed8dfd353e0c86de)
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/MiniDFSNNTopology.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/TestDFSUpgradeFromImage.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/namenode/ha/TestEditLogTailer.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/namenode/ha/TestBootstrapStandbyWithQJM.java
* hadoop-hdfs-project/hadoop-hdfs/src/test/resources/hadoop-0.23-reserved.tgz
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/namenode/TestCheckpoint.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/namenode/ha/TestDNFencingWithReplication.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/namenode/ha/HATestUtil.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/ha/RemoteNameNodeInfo.java
* 
hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/ha/ZKFailoverController.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/namenode/ha/TestRemoteNameNodeInfo.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/MiniDFSCluster.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/HAUtil.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/namenode/ha/HAStressTestHarness.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/namenode/ha/TestStandbyCheckpoints.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/security/token/block/TestBlockToken.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/namenode/ha/TestSeveralNameNodes.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/ha/StandbyCheckpointer.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/TransferFsImage.java
* 
hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/ha/TestZKFailoverController.java
* hadoop-hdfs-project/hadoop-hdfs/src/main/resources/hdfs-default.xml
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/TestRollingUpgrade.java
* hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
* 
hadoop-hdfs-project/hadoop-hdfs/src/contrib/bkjournal/src/test/java/org/apache/hadoop/contrib/bkjournal/TestBookKeeperHACheckpoints.java
* hadoop-hdfs-project/hadoop-hdfs/src/test/resources/hadoop-22-dfs-dir.tgz
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/ha/BootstrapStandby.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/namenode/ha/TestBootstrapStandby.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/namenode/ha/TestHAConfiguration.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/ImageServlet.java
* hadoop-hdfs-project/hadoop-hdfs/src/test/resources/log4j.properties
* 
hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/ha/MiniZKFCCluster.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/DFSConfigKeys.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/tools/DFSZKFailoverController.java
* hadoop-hdfs-project/hadoop-hdfs/src/test/resources/hadoop1-bbw.tgz
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/BlockManager.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/namenode/ha/TestFailoverWithBlockTokensEnabled.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/namenode/TestBackupNode.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/qjournal/MiniQJMHACluster.java
* hadoop-hdfs-project/hadoop-hdfs/src/test/resources/hadoop-2-reserved.tgz
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/NameNodeHttpServer.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/CheckpointConf.java
* hadoop-hdfs-project/hadoop-hdfs/src/test/resources/hadoop-1-reserved.tgz
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/namenode/ha/TestPipelinesFailover.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/security/token/block/BlockTokenSecretManage

[jira] [Updated] (HDFS-8657) Update docs for mSNN

2015-06-23 Thread Jesse Yates (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-8657?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jesse Yates updated HDFS-8657:
--
Attachment: hdfs-8657-v0.patch

Patch for updating HDFSHighAvailabilityWithQJM.md. No major changes except 
updating the example to use three NNs in the configs rather than two, plus some 
nits to indicate you can use 2+ NNs in HA.
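For reference, the multi-standby setup discussed above is configured by listing more than two NameNode IDs for the nameservice. A minimal hdfs-site.xml sketch, where the nameservice id "mycluster" and the hostnames are made-up example values:

```xml
<!-- Illustrative hdfs-site.xml fragment; "mycluster" and the nn*
     ids/hosts are made-up example values. -->
<property>
  <name>dfs.ha.namenodes.mycluster</name>
  <value>nn1,nn2,nn3</value>
</property>
<property>
  <name>dfs.namenode.rpc-address.mycluster.nn1</name>
  <value>machine1.example.com:8020</value>
</property>
<property>
  <name>dfs.namenode.rpc-address.mycluster.nn2</name>
  <value>machine2.example.com:8020</value>
</property>
<property>
  <name>dfs.namenode.rpc-address.mycluster.nn3</name>
  <value>machine3.example.com:8020</value>
</property>
```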

> Update docs for mSNN
> 
>
> Key: HDFS-8657
> URL: https://issues.apache.org/jira/browse/HDFS-8657
> Project: Hadoop HDFS
>  Issue Type: Bug
>Reporter: Jesse Yates
>Assignee: Jesse Yates
>Priority: Minor
> Fix For: 3.0.0
>
> Attachments: hdfs-8657-v0.patch
>
>
> After the commit of HDFS-6440, some docs need to be updated to reflect the 
> new support for more than 2 NNs.






[jira] [Commented] (HDFS-6440) Support more than 2 NameNodes

2015-06-23 Thread Aaron T. Myers (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-6440?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14598700#comment-14598700
 ] 

Aaron T. Myers commented on HDFS-6440:
--

Cool, thanks. I'll review HDFS-8657 whenever you post a patch.

> Support more than 2 NameNodes
> -
>
> Key: HDFS-6440
> URL: https://issues.apache.org/jira/browse/HDFS-6440
> Project: Hadoop HDFS
>  Issue Type: New Feature
>  Components: auto-failover, ha, namenode
>Affects Versions: 2.4.0
>Reporter: Jesse Yates
>Assignee: Jesse Yates
> Fix For: 3.0.0
>
> Attachments: Multiple-Standby-NameNodes_V1.pdf, 
> hdfs-6440-cdh-4.5-full.patch, hdfs-6440-trunk-v1.patch, 
> hdfs-6440-trunk-v1.patch, hdfs-6440-trunk-v3.patch, hdfs-6440-trunk-v4.patch, 
> hdfs-6440-trunk-v5.patch, hdfs-6440-trunk-v6.patch, hdfs-6440-trunk-v7.patch, 
> hdfs-6440-trunk-v8.patch, hdfs-multiple-snn-trunk-v0.patch
>
>
> Most of the work is already done to support more than 2 NameNodes (one 
> active, one standby). This would be the last bit to support running multiple 
> _standby_ NameNodes; one of the standbys should be available for fail-over.
> Mostly, this is a matter of updating how we parse configurations, some 
> complexity around managing the checkpointing, and updating a whole lot of 
> tests.






[jira] [Created] (HDFS-8657) Update docs for mSNN

2015-06-23 Thread Jesse Yates (JIRA)
Jesse Yates created HDFS-8657:
-

 Summary: Update docs for mSNN
 Key: HDFS-8657
 URL: https://issues.apache.org/jira/browse/HDFS-8657
 Project: Hadoop HDFS
  Issue Type: Bug
Reporter: Jesse Yates
Assignee: Jesse Yates
Priority: Minor
 Fix For: 3.0.0


After the commit of HDFS-6440, some docs need to be updated to reflect the new 
support for more than 2 NNs.





[jira] [Commented] (HDFS-6440) Support more than 2 NameNodes

2015-06-23 Thread Jesse Yates (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-6440?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14598698#comment-14598698
 ] 

Jesse Yates commented on HDFS-6440:
---

Great, thanks [~atm]! Just filed HDFS-8657

> Support more than 2 NameNodes
> -
>
> Key: HDFS-6440
> URL: https://issues.apache.org/jira/browse/HDFS-6440
> Project: Hadoop HDFS
>  Issue Type: New Feature
>  Components: auto-failover, ha, namenode
>Affects Versions: 2.4.0
>Reporter: Jesse Yates
>Assignee: Jesse Yates
> Fix For: 3.0.0
>
> Attachments: Multiple-Standby-NameNodes_V1.pdf, 
> hdfs-6440-cdh-4.5-full.patch, hdfs-6440-trunk-v1.patch, 
> hdfs-6440-trunk-v1.patch, hdfs-6440-trunk-v3.patch, hdfs-6440-trunk-v4.patch, 
> hdfs-6440-trunk-v5.patch, hdfs-6440-trunk-v6.patch, hdfs-6440-trunk-v7.patch, 
> hdfs-6440-trunk-v8.patch, hdfs-multiple-snn-trunk-v0.patch
>
>
> Most of the work is already done to support more than 2 NameNodes (one 
> active, one standby). This would be the last bit to support running multiple 
> _standby_ NameNodes; one of the standbys should be available for fail-over.
> Mostly, this is a matter of updating how we parse configurations, some 
> complexity around managing the checkpointing, and updating a whole lot of 
> tests.





[jira] [Commented] (HDFS-8630) WebHDFS : Support get/setStoragePolicy

2015-06-23 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-8630?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14598696#comment-14598696
 ] 

Hadoop QA commented on HDFS-8630:
-

\\
\\
| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | pre-patch |  18m 38s | Pre-patch trunk compilation is 
healthy. |
| {color:green}+1{color} | @author |   0m  0s | The patch does not contain any 
@author tags. |
| {color:green}+1{color} | tests included |   0m  0s | The patch appears to 
include 2 new or modified test files. |
| {color:red}-1{color} | javac |   7m 34s | The applied patch generated  1  
additional warning messages. |
| {color:red}-1{color} | javadoc |   9m 35s | The applied patch generated  1  
additional warning messages. |
| {color:green}+1{color} | release audit |   0m 23s | The applied patch does 
not increase the total number of release audit warnings. |
| {color:red}-1{color} | checkstyle |   1m 47s | The applied patch generated  3 
new checkstyle issues (total was 140, now 143). |
| {color:red}-1{color} | whitespace |   0m  2s | The patch has 7  line(s) that 
end in whitespace. Use git apply --whitespace=fix. |
| {color:green}+1{color} | install |   1m 38s | mvn install still works. |
| {color:green}+1{color} | eclipse:eclipse |   0m 34s | The patch built with 
eclipse:eclipse. |
| {color:red}-1{color} | findbugs |   4m  9s | The patch appears to introduce 1 
new Findbugs (version 3.0.0) warnings. |
| {color:green}+1{color} | native |   3m 13s | Pre-build of native portion |
| {color:green}+1{color} | hdfs tests | 159m 42s | Tests passed in hadoop-hdfs. 
|
| {color:green}+1{color} | hdfs tests |   0m 18s | Tests passed in 
hadoop-hdfs-client. |
| | | 207m 37s | |
\\
\\
|| Reason || Tests ||
| FindBugs | module:hadoop-hdfs-client |
\\
\\
|| Subsystem || Report/Notes ||
| Patch URL | 
http://issues.apache.org/jira/secure/attachment/12741384/HDFS-8630.patch |
| Optional Tests | javadoc javac unit findbugs checkstyle |
| git revision | trunk / 122cad6 |
| javac | 
https://builds.apache.org/job/PreCommit-HDFS-Build/11452/artifact/patchprocess/diffJavacWarnings.txt
 |
| javadoc | 
https://builds.apache.org/job/PreCommit-HDFS-Build/11452/artifact/patchprocess/diffJavadocWarnings.txt
 |
| checkstyle |  
https://builds.apache.org/job/PreCommit-HDFS-Build/11452/artifact/patchprocess/diffcheckstylehadoop-hdfs-client.txt
 |
| whitespace | 
https://builds.apache.org/job/PreCommit-HDFS-Build/11452/artifact/patchprocess/whitespace.txt
 |
| Findbugs warnings | 
https://builds.apache.org/job/PreCommit-HDFS-Build/11452/artifact/patchprocess/newPatchFindbugsWarningshadoop-hdfs-client.html
 |
| hadoop-hdfs test log | 
https://builds.apache.org/job/PreCommit-HDFS-Build/11452/artifact/patchprocess/testrun_hadoop-hdfs.txt
 |
| hadoop-hdfs-client test log | 
https://builds.apache.org/job/PreCommit-HDFS-Build/11452/artifact/patchprocess/testrun_hadoop-hdfs-client.txt
 |
| Test Results | 
https://builds.apache.org/job/PreCommit-HDFS-Build/11452/testReport/ |
| Java | 1.7.0_55 |
| uname | Linux asf902.gq1.ygridcore.net 3.13.0-36-lowlatency #63-Ubuntu SMP 
PREEMPT Wed Sep 3 21:56:12 UTC 2014 x86_64 x86_64 x86_64 GNU/Linux |
| Console output | 
https://builds.apache.org/job/PreCommit-HDFS-Build/11452/console |



> WebHDFS : Support get/setStoragePolicy 
> ---
>
> Key: HDFS-8630
> URL: https://issues.apache.org/jira/browse/HDFS-8630
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: nijel
>Assignee: Surendra Singh Lilhore
> Attachments: HDFS-8630.patch
>
>
> Users can set and get the storage policy from the filesystem object. The same 
> operations can be allowed through the REST API.





[jira] [Commented] (HDFS-6440) Support more than 2 NameNodes

2015-06-23 Thread Lars Hofhansl (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-6440?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14598677#comment-14598677
 ] 

Lars Hofhansl commented on HDFS-6440:
-

Yeah. Thanks [~atm]!

> Support more than 2 NameNodes
> -
>
> Key: HDFS-6440
> URL: https://issues.apache.org/jira/browse/HDFS-6440
> Project: Hadoop HDFS
>  Issue Type: New Feature
>  Components: auto-failover, ha, namenode
>Affects Versions: 2.4.0
>Reporter: Jesse Yates
>Assignee: Jesse Yates
> Fix For: 3.0.0
>
> Attachments: Multiple-Standby-NameNodes_V1.pdf, 
> hdfs-6440-cdh-4.5-full.patch, hdfs-6440-trunk-v1.patch, 
> hdfs-6440-trunk-v1.patch, hdfs-6440-trunk-v3.patch, hdfs-6440-trunk-v4.patch, 
> hdfs-6440-trunk-v5.patch, hdfs-6440-trunk-v6.patch, hdfs-6440-trunk-v7.patch, 
> hdfs-6440-trunk-v8.patch, hdfs-multiple-snn-trunk-v0.patch
>
>
> Most of the work is already done to support more than 2 NameNodes (one 
> active, one standby). This would be the last bit to support running multiple 
> _standby_ NameNodes; one of the standbys should be available for fail-over.
> Mostly, this is a matter of updating how we parse configurations, some 
> complexity around managing the checkpointing, and updating a whole lot of 
> tests.






[jira] [Commented] (HDFS-7645) Rolling upgrade is restoring blocks from trash multiple times

2015-06-23 Thread Andrew Wang (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-7645?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14598661#comment-14598661
 ] 

Andrew Wang commented on HDFS-7645:
---

I actually discovered that HDFS-7894 "fixed" this for the JMX by adding a check 
for {{isRollingUpgrade()}}. I changed HDFS-8656 to also do this for the 
ClientProtocol API; please review there if interested.

> Rolling upgrade is restoring blocks from trash multiple times
> -
>
> Key: HDFS-7645
> URL: https://issues.apache.org/jira/browse/HDFS-7645
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: datanode
>Affects Versions: 2.6.0
>Reporter: Nathan Roberts
>Assignee: Keisuke Ogiwara
> Fix For: 2.8.0
>
> Attachments: HDFS-7645.01.patch, HDFS-7645.02.patch, 
> HDFS-7645.03.patch, HDFS-7645.04.patch, HDFS-7645.05.patch, 
> HDFS-7645.06.patch, HDFS-7645.07.patch
>
>
> When performing an HDFS rolling upgrade, the trash directory gets 
> restored twice, when under normal circumstances it shouldn't need to be 
> restored at all. IIUC, the only time these blocks should be restored is if we 
> need to roll back a rolling upgrade.
> On a busy cluster, this can cause significant and unnecessary block churn, 
> both on the datanodes and, more importantly, in the namenode.
> The two times this happens are:
> 1) restart of DN onto new software
> {code}
>   private void doTransition(DataNode datanode, StorageDirectory sd,
>   NamespaceInfo nsInfo, StartupOption startOpt) throws IOException {
> if (startOpt == StartupOption.ROLLBACK && sd.getPreviousDir().exists()) {
>   Preconditions.checkState(!getTrashRootDir(sd).exists(),
>   sd.getPreviousDir() + " and " + getTrashRootDir(sd) + " should not 
> " +
>   " both be present.");
>   doRollback(sd, nsInfo); // rollback if applicable
> } else {
>   // Restore all the files in the trash. The restored files are retained
>   // during rolling upgrade rollback. They are deleted during rolling
>   // upgrade downgrade.
>   int restored = restoreBlockFilesFromTrash(getTrashRootDir(sd));
>   LOG.info("Restored " + restored + " block files from trash.");
> }
> {code}
> 2) When heartbeat response no longer indicates a rollingupgrade is in progress
> {code}
>   /**
>* Signal the current rolling upgrade status as indicated by the NN.
>* @param inProgress true if a rolling upgrade is in progress
>*/
>   void signalRollingUpgrade(boolean inProgress) throws IOException {
> String bpid = getBlockPoolId();
> if (inProgress) {
>   dn.getFSDataset().enableTrash(bpid);
>   dn.getFSDataset().setRollingUpgradeMarker(bpid);
> } else {
>   dn.getFSDataset().restoreTrash(bpid);
>   dn.getFSDataset().clearRollingUpgradeMarker(bpid);
> }
>   }
> {code}
> HDFS-6800 and HDFS-6981 modified this behavior, making it not completely 
> clear whether this is somehow intentional.
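One possible way to avoid the redundant restores described above is to restore only on the transition from "upgrade in progress" to "not in progress", instead of on every heartbeat that reports no upgrade. A minimal sketch of that state machine (hypothetical names; this is not the committed Hadoop fix):

```java
import java.util.concurrent.atomic.AtomicBoolean;

// Simplified stand-in for the DN-side trash handling discussed above
// (hypothetical names; not the committed Hadoop fix).
class TrashState {
    private final AtomicBoolean upgradeInProgress = new AtomicBoolean(false);
    int restoreCount = 0;

    // Mirrors the shape of signalRollingUpgrade(boolean): restore only on
    // the in-progress -> not-in-progress edge, so steady-state heartbeats
    // that report "no upgrade" do not trigger repeated restores.
    void signalRollingUpgrade(boolean inProgress) {
        boolean was = upgradeInProgress.getAndSet(inProgress);
        if (was && !inProgress) {
            restoreTrash();
        }
    }

    // Stand-in for dn.getFSDataset().restoreTrash(bpid); here we only
    // count invocations so the behavior can be observed.
    private void restoreTrash() {
        restoreCount++;
    }
}
```

With this edge-triggered guard, a DN that was never in a rolling upgrade never restores, and a completed upgrade restores exactly once.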






[jira] [Commented] (HDFS-8623) Refactor NameNode handling of invalid, corrupt, and under-recovery blocks

2015-06-23 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-8623?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14598660#comment-14598660
 ] 

Hadoop QA commented on HDFS-8623:
-

\\
\\
| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | pre-patch |  18m 16s | Pre-patch trunk compilation is 
healthy. |
| {color:green}+1{color} | @author |   0m  0s | The patch does not contain any 
@author tags. |
| {color:green}+1{color} | tests included |   0m  0s | The patch appears to 
include 5 new or modified test files. |
| {color:green}+1{color} | javac |   7m 33s | There were no new javac warning 
messages. |
| {color:green}+1{color} | javadoc |   9m 57s | There were no new javadoc 
warning messages. |
| {color:green}+1{color} | release audit |   0m 22s | The applied patch does 
not increase the total number of release audit warnings. |
| {color:red}-1{color} | checkstyle |   2m 14s | The applied patch generated  4 
new checkstyle issues (total was 605, now 576). |
| {color:red}-1{color} | whitespace |   0m  8s | The patch has 3  line(s) that 
end in whitespace. Use git apply --whitespace=fix. |
| {color:green}+1{color} | install |   1m 34s | mvn install still works. |
| {color:green}+1{color} | eclipse:eclipse |   0m 33s | The patch built with 
eclipse:eclipse. |
| {color:red}-1{color} | findbugs |   3m 19s | The patch appears to introduce 2 
new Findbugs (version 3.0.0) warnings. |
| {color:green}+1{color} | native |   3m 17s | Pre-build of native portion |
| {color:red}-1{color} | hdfs tests | 156m 20s | Tests failed in hadoop-hdfs. |
| | | 203m 37s | |
\\
\\
|| Reason || Tests ||
| FindBugs | module:hadoop-hdfs |
| Failed unit tests | hadoop.hdfs.TestReservedRawPaths |
|   | hadoop.hdfs.TestSetrepIncreasing |
|   | hadoop.hdfs.shortcircuit.TestShortCircuitLocalRead |
|   | hadoop.hdfs.TestParallelShortCircuitRead |
|   | hadoop.hdfs.TestDisableConnCache |
|   | hadoop.hdfs.TestConnCache |
|   | hadoop.hdfs.TestDFSClientRetries |
|   | hadoop.hdfs.TestSetrepDecreasing |
|   | hadoop.hdfs.server.datanode.TestDiskError |
|   | hadoop.hdfs.server.datanode.TestDataNodeHotSwapVolumes |
|   | hadoop.hdfs.server.namenode.TestHostsFiles |
|   | hadoop.hdfs.TestPread |
|   | hadoop.hdfs.TestFileStatus |
|   | hadoop.hdfs.server.namenode.TestFileTruncate |
|   | hadoop.hdfs.TestReadWhileWriting |
|   | hadoop.hdfs.server.datanode.TestFsDatasetCache |
|   | hadoop.hdfs.server.datanode.fsdataset.impl.TestRbwSpaceReservation |
|   | hadoop.hdfs.server.namenode.ha.TestQuotasWithHA |
|   | hadoop.hdfs.server.blockmanagement.TestBlockTokenWithDFS |
|   | hadoop.hdfs.server.datanode.fsdataset.impl.TestLazyPersistFiles |
|   | hadoop.hdfs.TestDatanodeLayoutUpgrade |
|   | hadoop.hdfs.protocol.datatransfer.sasl.TestSaslDataTransfer |
|   | hadoop.hdfs.server.datanode.TestCachingStrategy |
|   | hadoop.hdfs.web.TestWebHdfsWithMultipleNameNodes |
|   | hadoop.hdfs.web.TestWebHDFSXAttr |
|   | hadoop.hdfs.server.datanode.fsdataset.impl.TestLazyPersistLockedMemory |
|   | hadoop.hdfs.server.datanode.TestBlockReplacement |
|   | hadoop.hdfs.server.datanode.fsdataset.impl.TestInterDatanodeProtocol |
|   | hadoop.hdfs.server.namenode.snapshot.TestSnapshotBlocksMap |
|   | hadoop.hdfs.server.blockmanagement.TestNodeCount |
|   | hadoop.hdfs.TestFileAppend |
|   | hadoop.hdfs.server.datanode.TestReadOnlySharedStorage |
|   | hadoop.hdfs.server.blockmanagement.TestBlocksWithNotEnoughRacks |
|   | hadoop.hdfs.TestParallelShortCircuitReadUnCached |
|   | hadoop.hdfs.server.datanode.fsdataset.impl.TestLazyWriter |
|   | hadoop.hdfs.security.TestDelegationTokenForProxyUser |
|   | hadoop.hdfs.TestBlockMissingException |
|   | hadoop.hdfs.web.TestWebHdfsFileSystemContract |
|   | hadoop.hdfs.TestAbandonBlock |
|   | hadoop.hdfs.TestRemoteBlockReader2 |
|   | hadoop.hdfs.TestFileAppendRestart |
|   | hadoop.hdfs.server.namenode.TestListCorruptFileBlocks |
|   | hadoop.hdfs.server.namenode.ha.TestRetryCacheWithHA |
|   | hadoop.hdfs.TestBlockReaderLocalLegacy |
|   | hadoop.hdfs.web.TestWebHDFS |
|   | hadoop.hdfs.server.namenode.TestParallelImageWrite |
|   | hadoop.hdfs.TestReplication |
|   | hadoop.hdfs.TestFileConcurrentReader |
|   | hadoop.hdfs.TestDFSUpgradeFromImage |
|   | hadoop.hdfs.server.datanode.TestBlockHasMultipleReplicasOnSameDN |
|   | hadoop.hdfs.TestFileAppend4 |
|   | hadoop.hdfs.tools.TestDebugAdmin |
|   | hadoop.hdfs.web.TestWebHDFSForHA |
|   | hadoop.hdfs.server.namenode.snapshot.TestSnapshotDiffReport |
|   | hadoop.hdfs.server.datanode.fsdataset.impl.TestDatanodeRestart |
|   | hadoop.hdfs.server.datanode.TestFsDatasetCacheRevocation |
|   | hadoop.hdfs.TestSafeMode |
|   | hadoop.hdfs.TestDFSRename |
|   | hadoop.hdfs.TestSeekBug |
|   | hadoop.hdfs.server.namenode.web.resources.TestWebHdfsDataLocality |
|   | hadoop.hdfs.server.blockmanagement.TestNameNodePrunesMissingStorages |
|   | hadoop.

[jira] [Updated] (HDFS-8656) Preserve compatibility of ClientProtocol#rollingUpgrade after finalization

2015-06-23 Thread Andrew Wang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-8656?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Andrew Wang updated HDFS-8656:
--
Status: Patch Available  (was: Open)

> Preserve compatibility of ClientProtocol#rollingUpgrade after finalization
> --
>
> Key: HDFS-8656
> URL: https://issues.apache.org/jira/browse/HDFS-8656
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: rolling upgrades
>Affects Versions: 2.8.0
>Reporter: Andrew Wang
>Assignee: Andrew Wang
>Priority: Critical
> Attachments: hdfs-8656.001.patch
>
>
> HDFS-7645 changed rollingUpgradeInfo to still return an RUInfo after 
> finalization, so the DNs can differentiate between rollback and a 
> finalization. However, this breaks compatibility for the user facing APIs, 
> which always expect a null after finalization. Let's fix this and edify it in 
> unit tests.
> As an additional improvement, isFinalized and isStarted are part of the Java 
> API, but not in the JMX output of RollingUpgradeInfo. It'd be nice to expose 
> these booleans so JMX users don't need to do the != 0 check that possibly 
> exposes our implementation details.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HDFS-8656) Preserve compatibility of ClientProtocol#rollingUpgrade after finalization

2015-06-23 Thread Andrew Wang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-8656?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Andrew Wang updated HDFS-8656:
--
Attachment: hdfs-8656.001.patch

Patch attached, beefs up the tests. We return null now after finalization in 
user facing APIs (java and JMX) via a check to {{isRollingUpgrade()}}, but DNs 
can access it directly in the heartbeat response.

> Preserve compatibility of ClientProtocol#rollingUpgrade after finalization
> --
>
> Key: HDFS-8656
> URL: https://issues.apache.org/jira/browse/HDFS-8656
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: rolling upgrades
>Affects Versions: 2.8.0
>Reporter: Andrew Wang
>Assignee: Andrew Wang
>Priority: Critical
> Attachments: hdfs-8656.001.patch
>
>
> HDFS-7645 changed rollingUpgradeInfo to still return an RUInfo after 
> finalization, so the DNs can differentiate between rollback and a 
> finalization. However, this breaks compatibility for the user facing APIs, 
> which always expect a null after finalization. Let's fix this and edify it in 
> unit tests.
> As an additional improvement, isFinalized and isStarted are part of the Java 
> API, but not in the JMX output of RollingUpgradeInfo. It'd be nice to expose 
> these booleans so JMX users don't need to do the != 0 check that possibly 
> exposes our implementation details.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HDFS-8656) Preserve compatibility of ClientProtocol#rollingUpgrade after finalization

2015-06-23 Thread Andrew Wang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-8656?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Andrew Wang updated HDFS-8656:
--
Description: 
HDFS-7645 changed rollingUpgradeInfo to still return an RUInfo after 
finalization, so the DNs can differentiate between rollback and a finalization. 
However, this breaks compatibility for the user facing APIs, which always 
expect a null after finalization. Let's fix this and edify it in unit tests.

As an additional improvement, isFinalized and isStarted are part of the Java 
API, but not in the JMX output of RollingUpgradeInfo. It'd be nice to expose 
these booleans so JMX users don't need to do the != 0 check that possibly 
exposes our implementation details.

  was:isFinalized and isStarted are part of the Java API, but not in the JMX 
output of RollingUpgradeInfo. It'd be nice to expose these booleans so JMX 
users don't need to do the != 0 check that possibly exposes our implementation 
details.

   Priority: Critical  (was: Major)
 Issue Type: Bug  (was: Improvement)
Summary: Preserve compatibility of ClientProtocol#rollingUpgrade after 
finalization  (was: Add additional fields to RollingUpgradeInfo JMX bean)

> Preserve compatibility of ClientProtocol#rollingUpgrade after finalization
> --
>
> Key: HDFS-8656
> URL: https://issues.apache.org/jira/browse/HDFS-8656
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: rolling upgrades
>Affects Versions: 2.8.0
>Reporter: Andrew Wang
>Assignee: Andrew Wang
>Priority: Critical
>
> HDFS-7645 changed rollingUpgradeInfo to still return an RUInfo after 
> finalization, so the DNs can differentiate between rollback and a 
> finalization. However, this breaks compatibility for the user facing APIs, 
> which always expect a null after finalization. Let's fix this and edify it in 
> unit tests.
> As an additional improvement, isFinalized and isStarted are part of the Java 
> API, but not in the JMX output of RollingUpgradeInfo. It'd be nice to expose 
> these booleans so JMX users don't need to do the != 0 check that possibly 
> exposes our implementation details.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-8515) Abstract a DTP/2 HTTP/2 server

2015-06-23 Thread Duo Zhang (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-8515?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14598638#comment-14598638
 ] 

Duo Zhang commented on HDFS-8515:
-

{quote}
I'm yet to be convinced that testing of multi-threading is required right now. 
Maybe having some coverage of the basic functionalities is a higher priority.
{quote}
The basic tests are in {{TestHttp2Server}}. Thanks.

> Abstract a DTP/2 HTTP/2 server
> --
>
> Key: HDFS-8515
> URL: https://issues.apache.org/jira/browse/HDFS-8515
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: Duo Zhang
>Assignee: Duo Zhang
> Attachments: HDFS-8515-v1.patch, HDFS-8515-v2.patch, 
> HDFS-8515-v3.patch, HDFS-8515-v4.patch, HDFS-8515.patch
>
>
> Discussed in HDFS-8471.
> https://issues.apache.org/jira/browse/HDFS-8471?focusedCommentId=14568196&page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#comment-14568196



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HDFS-6440) Support more than 2 NameNodes

2015-06-23 Thread Aaron T. Myers (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-6440?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Aaron T. Myers updated HDFS-6440:
-
  Resolution: Fixed
Target Version/s: 3.0.0  (was: 2.6.0)
Hadoop Flags: Reviewed
  Status: Resolved  (was: Patch Available)

I've just committed this change to trunk.

Thanks a lot for the monster contribution, Jesse. Thanks also very much to Eddy 
for doing a bunch of initial reviews, and to Lars for keeping on me to review 
this patch. :)

[~jesse_yates] - mind filing a follow-up JIRA to amend the docs appropriately?

> Support more than 2 NameNodes
> -
>
> Key: HDFS-6440
> URL: https://issues.apache.org/jira/browse/HDFS-6440
> Project: Hadoop HDFS
>  Issue Type: New Feature
>  Components: auto-failover, ha, namenode
>Affects Versions: 2.4.0
>Reporter: Jesse Yates
>Assignee: Jesse Yates
> Fix For: 3.0.0
>
> Attachments: Multiple-Standby-NameNodes_V1.pdf, 
> hdfs-6440-cdh-4.5-full.patch, hdfs-6440-trunk-v1.patch, 
> hdfs-6440-trunk-v1.patch, hdfs-6440-trunk-v3.patch, hdfs-6440-trunk-v4.patch, 
> hdfs-6440-trunk-v5.patch, hdfs-6440-trunk-v6.patch, hdfs-6440-trunk-v7.patch, 
> hdfs-6440-trunk-v8.patch, hdfs-multiple-snn-trunk-v0.patch
>
>
> Most of the work is already done to support more than 2 NameNodes (one 
> active, one standby). This would be the last bit to support running multiple 
> _standby_ NameNodes; one of the standbys should be available for fail-over.
> Mostly, this is a matter of updating how we parse configurations, some 
> complexity around managing the checkpointing, and updating a whole lot of 
> tests.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-8515) Abstract a DTP/2 HTTP/2 server

2015-06-23 Thread Duo Zhang (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-8515?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14598634#comment-14598634
 ] 

Duo Zhang commented on HDFS-8515:
-

{quote}
o.a.h.web.http2
{quote}
Then in hadoop-hdfs or hadoop-common? Seems all the code in hdfs is under 
o.a.h.fs or o.a.h.hdfs...

{quote}
encoder is not thread-safe. It seems to me the right approach is to run the 
write in the event loop of the parent channel. The read path might have the 
same issue.
{quote}
I think these are already in the event loop? Channel.read, Channel.write and 
Channel.flush call the methods in DefaultChannelPipeline, which then call the 
methods in TailContext, where execution switches to run in the EventLoop.

{quote}
To me both LastChunkedInput and LastMessage look like more of an optimization 
right now. A simpler approach is to send an empty HEADER with the end-of-stream 
bit on to tell the remote peer that the stream has been closed.
{quote}
This is used to notify {{Http2StreamChannel}} that we need to send an endStream 
to the remote side, so at least something like a {{LastMessage}} is needed 
(think of {{LastHttpContent}}). I'd say that sending an endStream with the last 
data frame is an optimization, but I think it is simple enough to implement now?

{quote}
It can be a utility class instead of asking all HTTP2 test cases to inherit it.
{quote}
Any example? And what is the benefit of using a utility class instead of a 
parent class? Thanks.

> Abstract a DTP/2 HTTP/2 server
> --
>
> Key: HDFS-8515
> URL: https://issues.apache.org/jira/browse/HDFS-8515
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: Duo Zhang
>Assignee: Duo Zhang
> Attachments: HDFS-8515-v1.patch, HDFS-8515-v2.patch, 
> HDFS-8515-v3.patch, HDFS-8515-v4.patch, HDFS-8515.patch
>
>
> Discussed in HDFS-8471.
> https://issues.apache.org/jira/browse/HDFS-8471?focusedCommentId=14568196&page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#comment-14568196



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-6440) Support more than 2 NameNodes

2015-06-23 Thread Aaron T. Myers (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-6440?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14598630#comment-14598630
 ] 

Aaron T. Myers commented on HDFS-6440:
--

I re-ran the failed tests locally and they all passed, and I don't think those 
tests have much of anything to do with this patch anyway.

+1, the latest patch looks good to me. I realized just now doing some final 
looks at the patch that we should also update the 
HDFSHighAvailabilityWithQJM.md document to indicate that more than two NNs are 
now supported, but I think that can be done as a follow-up JIRA since 
continuing to rebase this patch is pretty unwieldy.

I'm going to commit this momentarily.

> Support more than 2 NameNodes
> -
>
> Key: HDFS-6440
> URL: https://issues.apache.org/jira/browse/HDFS-6440
> Project: Hadoop HDFS
>  Issue Type: New Feature
>  Components: auto-failover, ha, namenode
>Affects Versions: 2.4.0
>Reporter: Jesse Yates
>Assignee: Jesse Yates
> Fix For: 3.0.0
>
> Attachments: Multiple-Standby-NameNodes_V1.pdf, 
> hdfs-6440-cdh-4.5-full.patch, hdfs-6440-trunk-v1.patch, 
> hdfs-6440-trunk-v1.patch, hdfs-6440-trunk-v3.patch, hdfs-6440-trunk-v4.patch, 
> hdfs-6440-trunk-v5.patch, hdfs-6440-trunk-v6.patch, hdfs-6440-trunk-v7.patch, 
> hdfs-6440-trunk-v8.patch, hdfs-multiple-snn-trunk-v0.patch
>
>
> Most of the work is already done to support more than 2 NameNodes (one 
> active, one standby). This would be the last bit to support running multiple 
> _standby_ NameNodes; one of the standbys should be available for fail-over.
> Mostly, this is a matter of updating how we parse configurations, some 
> complexity around managing the checkpointing, and updating a whole lot of 
> tests.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HDFS-8654) OzoneHandler : Add ACL support

2015-06-23 Thread Anu Engineer (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-8654?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Anu Engineer updated HDFS-8654:
---
Component/s: ozone

> OzoneHandler : Add ACL support
> --
>
> Key: HDFS-8654
> URL: https://issues.apache.org/jira/browse/HDFS-8654
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: ozone
>Reporter: Anu Engineer
>Assignee: Anu Engineer
>
> Add ACL support which is needed by Ozone Buckets



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-8653) Code cleanup for DatanodeManager, DatanodeDescriptor and DatanodeStorageInfo

2015-06-23 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-8653?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14598602#comment-14598602
 ] 

Hadoop QA commented on HDFS-8653:
-

\\
\\
| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | pre-patch |  17m 42s | Pre-patch trunk compilation is 
healthy. |
| {color:green}+1{color} | @author |   0m  0s | The patch does not contain any 
@author tags. |
| {color:red}-1{color} | tests included |   0m  0s | The patch doesn't appear 
to include any new or modified tests.  Please justify why no new tests are 
needed for this patch. Also please list what manual steps were performed to 
verify this patch. |
| {color:green}+1{color} | javac |   7m 32s | There were no new javac warning 
messages. |
| {color:green}+1{color} | javadoc |   9m 34s | There were no new javadoc 
warning messages. |
| {color:green}+1{color} | release audit |   0m 22s | The applied patch does 
not increase the total number of release audit warnings. |
| {color:green}+1{color} | checkstyle |   2m 12s | There were no new checkstyle 
issues. |
| {color:red}-1{color} | whitespace |   0m  0s | The patch has 1  line(s) that 
end in whitespace. Use git apply --whitespace=fix. |
| {color:green}+1{color} | install |   1m 35s | mvn install still works. |
| {color:green}+1{color} | eclipse:eclipse |   0m 34s | The patch built with 
eclipse:eclipse. |
| {color:green}+1{color} | findbugs |   3m 15s | The patch does not introduce 
any new Findbugs (version 3.0.0) warnings. |
| {color:green}+1{color} | native |   3m 13s | Pre-build of native portion |
| {color:green}+1{color} | hdfs tests | 159m 44s | Tests passed in hadoop-hdfs. 
|
| | | 205m 47s | |
\\
\\
|| Subsystem || Report/Notes ||
| Patch URL | 
http://issues.apache.org/jira/secure/attachment/12741364/HDFS-8653.00.patch |
| Optional Tests | javadoc javac unit findbugs checkstyle |
| git revision | trunk / 122cad6 |
| whitespace | 
https://builds.apache.org/job/PreCommit-HDFS-Build/11450/artifact/patchprocess/whitespace.txt
 |
| hadoop-hdfs test log | 
https://builds.apache.org/job/PreCommit-HDFS-Build/11450/artifact/patchprocess/testrun_hadoop-hdfs.txt
 |
| Test Results | 
https://builds.apache.org/job/PreCommit-HDFS-Build/11450/testReport/ |
| Java | 1.7.0_55 |
| uname | Linux asf905.gq1.ygridcore.net 3.13.0-36-lowlatency #63-Ubuntu SMP 
PREEMPT Wed Sep 3 21:56:12 UTC 2014 x86_64 x86_64 x86_64 GNU/Linux |
| Console output | 
https://builds.apache.org/job/PreCommit-HDFS-Build/11450/console |


This message was automatically generated.

> Code cleanup for DatanodeManager, DatanodeDescriptor and DatanodeStorageInfo
> 
>
> Key: HDFS-8653
> URL: https://issues.apache.org/jira/browse/HDFS-8653
> Project: Hadoop HDFS
>  Issue Type: Improvement
>Affects Versions: 2.7.0
>Reporter: Zhe Zhang
>Assignee: Zhe Zhang
> Attachments: HDFS-8653.00.patch
>
>
> While updating the {{blockmanagement}} module to distribute erasure coding 
> recovery work to Datanode, the HDFS-7285 branch also did some code cleanup 
> that should be merged into trunk independently.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-7645) Rolling upgrade is restoring blocks from trash multiple times

2015-06-23 Thread Andrew Wang (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-7645?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14598582#comment-14598582
 ] 

Andrew Wang commented on HDFS-7645:
---

Hey Vinay,

It might be okay to sneak in this incompatible change; I doubt there are many 
users of this API. It's also possible to write an "after" check that will work 
with both old and new NNs to check for finalization:

{code}
// before
if (ruinfo == null)
// after
if (ruinfo == null || ruinfo.isFinalized())
{code}

One related change we could also make is adding boolean isStarted and 
isFinalized to the JMX output, since that way callers won't have to do a "!= 0" 
check. Essentially all the normal benefits of a getter. I just filed HDFS-8656 
to do this.

In hindsight it would have been nice to always return an RUInfo so the check 
could just be {{if (ruinfo.isFinalized)}}. The need for null checking is a bit 
ugly.
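The "after" check can be sketched end to end; {{RUInfo}} below is a hypothetical stand-in for HDFS's {{RollingUpgradeInfo}}, reduced to the single accessor the compatibility check needs:

```java
// Hypothetical stand-in for org.apache.hadoop.hdfs.protocol.RollingUpgradeInfo,
// reduced to the one method the compatibility check needs.
class RUInfo {
    private final boolean finalized;

    RUInfo(boolean finalized) {
        this.finalized = finalized;
    }

    boolean isFinalized() {
        return finalized;
    }
}

public class RollingUpgradeCheck {
    // Old NNs return null once the upgrade is finalized; NNs with the
    // HDFS-7645 behavior may return a non-null, finalized RUInfo.
    // This check treats both cases the same way.
    public static boolean upgradeFinalizedOrAbsent(RUInfo ruinfo) {
        return ruinfo == null || ruinfo.isFinalized();
    }

    public static void main(String[] args) {
        System.out.println(upgradeFinalizedOrAbsent(null));              // old NN, finalized
        System.out.println(upgradeFinalizedOrAbsent(new RUInfo(true)));  // new NN, finalized
        System.out.println(upgradeFinalizedOrAbsent(new RUInfo(false))); // upgrade in progress
    }
}
```

Prints {{true}}, {{true}}, {{false}}: a client with this check behaves identically against either NameNode version.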

> Rolling upgrade is restoring blocks from trash multiple times
> -
>
> Key: HDFS-7645
> URL: https://issues.apache.org/jira/browse/HDFS-7645
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: datanode
>Affects Versions: 2.6.0
>Reporter: Nathan Roberts
>Assignee: Keisuke Ogiwara
> Fix For: 2.8.0
>
> Attachments: HDFS-7645.01.patch, HDFS-7645.02.patch, 
> HDFS-7645.03.patch, HDFS-7645.04.patch, HDFS-7645.05.patch, 
> HDFS-7645.06.patch, HDFS-7645.07.patch
>
>
> When performing an HDFS rolling upgrade, the trash directory is getting 
> restored twice when under normal circumstances it shouldn't need to be 
> restored at all. iiuc, the only time these blocks should be restored is if we 
> need to rollback a rolling upgrade. 
> On a busy cluster, this can cause significant and unnecessary block churn 
> both on the datanodes, and more importantly in the namenode.
> The two times this happens are:
> 1) restart of DN onto new software
> {code}
>   private void doTransition(DataNode datanode, StorageDirectory sd,
>   NamespaceInfo nsInfo, StartupOption startOpt) throws IOException {
> if (startOpt == StartupOption.ROLLBACK && sd.getPreviousDir().exists()) {
>   Preconditions.checkState(!getTrashRootDir(sd).exists(),
>   sd.getPreviousDir() + " and " + getTrashRootDir(sd) + " should not 
> " +
>   " both be present.");
>   doRollback(sd, nsInfo); // rollback if applicable
> } else {
>   // Restore all the files in the trash. The restored files are retained
>   // during rolling upgrade rollback. They are deleted during rolling
>   // upgrade downgrade.
>   int restored = restoreBlockFilesFromTrash(getTrashRootDir(sd));
>   LOG.info("Restored " + restored + " block files from trash.");
> }
> {code}
> 2) When heartbeat response no longer indicates a rollingupgrade is in progress
> {code}
>   /**
>* Signal the current rolling upgrade status as indicated by the NN.
>* @param inProgress true if a rolling upgrade is in progress
>*/
>   void signalRollingUpgrade(boolean inProgress) throws IOException {
> String bpid = getBlockPoolId();
> if (inProgress) {
>   dn.getFSDataset().enableTrash(bpid);
>   dn.getFSDataset().setRollingUpgradeMarker(bpid);
> } else {
>   dn.getFSDataset().restoreTrash(bpid);
>   dn.getFSDataset().clearRollingUpgradeMarker(bpid);
> }
>   }
> {code}
> HDFS-6800 and HDFS-6981 were modifying this behavior making it not completely 
> clear whether this is somehow intentional. 



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-8515) Abstract a DTP/2 HTTP/2 server

2015-06-23 Thread Haohui Mai (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-8515?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14598578#comment-14598578
 ] 

Haohui Mai commented on HDFS-8515:
--

It might make sense to move all the code into the {{o.a.h.web.http2}} package.
{code}
+
+  // whether to log http2 frame for debugging
+  public static final String  DFS_HTTP2_VERBOSE_KEY = "dfs.http2.verbose";
+  public static final boolean DFS_HTTP2_VERBOSE_DEFAULT = false;

+if (verbose) {
+  frameReader =
+  new Http2InboundFrameLogger(new DefaultHttp2FrameReader(),
+  FRAME_LOGGER);
+  frameWriter =
+  new Http2OutboundFrameLogger(new DefaultHttp2FrameWriter(),
+  FRAME_LOGGER);

{code}

Instead of adding a new configuration, a better approach might be adding a 
logger into {{ServerHttp2ConectionHandler}} and check whether the debug log is 
enabled.

{code}
+  private static final ChannelMetadata METADATA = new ChannelMetadata(false);
+
+  private final ChannelHandlerContext http2ConnHandlerCtx;
+
+  private final Http2Stream connStream;
+
+  private final Http2Stream stream;
+
{code}

There should be no empty lines between these members.

{code}
+  private final Http2LocalFlowController localFlowController;
+
+  private final Http2RemoteFlowController remoteFlowController;
+
{code}

It might make sense to separate the flow control logic into a separate patch.

{code}
+encoder.writeHeaders(http2ConnHandlerCtx, stream.id(),
+  (Http2Headers) msg, 0, endOfStream, 
http2ConnHandlerCtx.newPromise());
{code}

encoder is not thread-safe. It seems to me the right approach is to run the 
write in the event loop of the parent channel. The read path might have the 
same issue.

{code}
+public class LastChunkedInput implements ChunkedInput {
+public class LastMessage {
{code}

To me both {{LastChunkedInput}} and {{LastMessage}} look like more of an 
optimization right now. A simpler approach is to send an empty HEADER with the 
end-of-stream bit on to tell the remote peer that the stream has been closed.

{code}
+public abstract class AbstractTestHttp2Server {
{code}

It can be a utility class instead of asking all HTTP2 test cases to inherit it.

{code}
+public class TestHttp2ServerMultiThread extends AbstractTestHttp2Server {
{code}

I'm yet to be convinced that testing of multi-threading is required right now. 
Maybe having some coverage of the basic functionalities is a higher priority.
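The suggestion to run writes in the parent channel's event loop can be illustrated without Netty at all. Below is a hedged sketch where a single-threaded executor stands in for the event loop and a plain list stands in for the non-thread-safe encoder; none of these names come from the patch itself:

```java
import java.util.ArrayList;
import java.util.List;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.TimeUnit;

public class EventLoopWriteSketch {
    // Stand-in for a non-thread-safe frame encoder (like Netty's
    // Http2ConnectionEncoder): it must only ever be touched by one thread.
    public static class Encoder {
        public final List<String> frames = new ArrayList<>(); // deliberately unsynchronized

        public void writeFrame(String frame) {
            frames.add(frame);
        }
    }

    // A single-threaded executor plays the role of the parent channel's event
    // loop: application threads hand writes off to it instead of calling the
    // encoder directly, so all encoder access is serialized on one thread.
    public static int serializedWrites(int n) throws InterruptedException {
        ExecutorService eventLoop = Executors.newSingleThreadExecutor();
        ExecutorService appThreads = Executors.newFixedThreadPool(4);
        Encoder encoder = new Encoder();
        for (int i = 0; i < n; i++) {
            final int frameNo = i;
            appThreads.submit(() -> eventLoop.submit(() -> encoder.writeFrame("frame-" + frameNo)));
        }
        appThreads.shutdown();
        appThreads.awaitTermination(10, TimeUnit.SECONDS);
        eventLoop.shutdown(); // runs every queued write before terminating
        eventLoop.awaitTermination(10, TimeUnit.SECONDS);
        return encoder.frames.size();
    }

    public static void main(String[] args) throws InterruptedException {
        System.out.println(serializedWrites(100)); // prints 100: no writes lost to races
    }
}
```

In Netty the hand-off would go through the channel's own executor (roughly {{channel.eventLoop().execute(...)}}), but the serialization idea is the same.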


> Abstract a DTP/2 HTTP/2 server
> --
>
> Key: HDFS-8515
> URL: https://issues.apache.org/jira/browse/HDFS-8515
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: Duo Zhang
>Assignee: Duo Zhang
> Attachments: HDFS-8515-v1.patch, HDFS-8515-v2.patch, 
> HDFS-8515-v3.patch, HDFS-8515-v4.patch, HDFS-8515.patch
>
>
> Discussed in HDFS-8471.
> https://issues.apache.org/jira/browse/HDFS-8471?focusedCommentId=14568196&page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#comment-14568196



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-8356) Document missing properties in hdfs-default.xml

2015-06-23 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-8356?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14598576#comment-14598576
 ] 

Hadoop QA commented on HDFS-8356:
-

\\
\\
| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:red}-1{color} | pre-patch |  15m 54s | Findbugs (version ) appears to 
be broken on trunk. |
| {color:green}+1{color} | @author |   0m  0s | The patch does not contain any 
@author tags. |
| {color:green}+1{color} | tests included |   0m  0s | The patch appears to 
include 1 new or modified test files. |
| {color:green}+1{color} | javac |   7m 50s | There were no new javac warning 
messages. |
| {color:green}+1{color} | javadoc |  10m  0s | There were no new javadoc 
warning messages. |
| {color:green}+1{color} | release audit |   0m 24s | The applied patch does 
not increase the total number of release audit warnings. |
| {color:green}+1{color} | checkstyle |   0m 51s | There were no new checkstyle 
issues. |
| {color:green}+1{color} | whitespace |   0m  2s | The patch has no lines that 
end in whitespace. |
| {color:green}+1{color} | install |   1m 40s | mvn install still works. |
| {color:green}+1{color} | eclipse:eclipse |   0m 34s | The patch built with 
eclipse:eclipse. |
| {color:green}+1{color} | findbugs |   3m 28s | The patch does not introduce 
any new Findbugs (version 3.0.0) warnings. |
| {color:green}+1{color} | native |   3m 30s | Pre-build of native portion |
| {color:red}-1{color} | hdfs tests |  18m  9s | Tests failed in hadoop-hdfs. |
| | |  62m 26s | |
\\
\\
|| Reason || Tests ||
| Failed unit tests | hadoop.hdfs.TestReplication |
|   | hadoop.hdfs.TestLeaseRecovery |
|   | hadoop.hdfs.TestFileAppend4 |
|   | hadoop.hdfs.TestRead |
|   | hadoop.hdfs.server.namenode.TestNameNodeRetryCacheMetrics |
|   | hadoop.hdfs.server.namenode.snapshot.TestSnapshot |
|   | hadoop.hdfs.server.namenode.TestSecondaryWebUi |
|   | hadoop.hdfs.server.namenode.TestSaveNamespace |
|   | hadoop.hdfs.server.namenode.TestNNStorageRetentionFunctional |
|   | hadoop.hdfs.TestPread |
|   | hadoop.hdfs.tools.TestGetGroups |
|   | hadoop.hdfs.server.namenode.TestFavoredNodesEndToEnd |
|   | hadoop.hdfs.server.namenode.ha.TestHAStateTransitions |
|   | hadoop.hdfs.server.datanode.TestRefreshNamenodes |
|   | hadoop.hdfs.TestHdfsAdmin |
|   | hadoop.hdfs.server.datanode.TestDataNodeUUID |
|   | hadoop.fs.viewfs.TestViewFsWithXAttrs |
|   | hadoop.hdfs.server.datanode.TestDataNodeVolumeFailureReporting |
|   | hadoop.hdfs.server.datanode.TestBlockHasMultipleReplicasOnSameDN |
|   | hadoop.hdfs.server.namenode.snapshot.TestSnapshotFileLength |
|   | hadoop.hdfs.server.namenode.ha.TestLossyRetryInvocationHandler |
|   | hadoop.hdfs.TestClientReportBadBlock |
|   | hadoop.hdfs.TestSafeMode |
|   | hadoop.hdfs.server.namenode.TestNamenodeCapacityReport |
|   | hadoop.hdfs.server.namenode.TestNamenodeRetryCache |
|   | hadoop.hdfs.TestParallelShortCircuitRead |
|   | hadoop.hdfs.server.namenode.TestFSEditLogLoader |
|   | hadoop.hdfs.server.namenode.TestAuditLogger |
|   | hadoop.hdfs.TestFSInputChecker |
|   | hadoop.hdfs.server.blockmanagement.TestUnderReplicatedBlocks |
|   | hadoop.hdfs.TestDatanodeRegistration |
|   | hadoop.hdfs.tools.TestDebugAdmin |
|   | hadoop.hdfs.server.datanode.fsdataset.impl.TestLazyPersistPolicy |
|   | hadoop.hdfs.qjournal.TestNNWithQJM |
|   | hadoop.hdfs.TestSetrepIncreasing |
|   | hadoop.hdfs.server.datanode.TestDataNodeMetrics |
|   | hadoop.hdfs.TestDatanodeReport |
|   | hadoop.hdfs.TestAppendSnapshotTruncate |
|   | hadoop.hdfs.TestDFSShellGenericOptions |
|   | hadoop.hdfs.server.datanode.fsdataset.impl.TestLazyPersistFiles |
|   | hadoop.fs.TestEnhancedByteBufferAccess |
|   | hadoop.hdfs.server.balancer.TestBalancerWithNodeGroup |
|   | 
hadoop.hdfs.server.datanode.fsdataset.impl.TestLazyPersistReplicaPlacement |
|   | hadoop.tracing.TestTraceAdmin |
|   | hadoop.hdfs.server.namenode.TestNameNodeRpcServer |
|   | hadoop.hdfs.TestFileAppendRestart |
|   | hadoop.hdfs.server.namenode.TestSecondaryNameNodeUpgrade |
|   | hadoop.hdfs.TestMultiThreadedHflush |
|   | hadoop.hdfs.TestParallelRead |
|   | hadoop.hdfs.server.namenode.snapshot.TestSetQuotaWithSnapshot |
|   | hadoop.hdfs.server.namenode.ha.TestHAFsck |
|   | hadoop.hdfs.server.namenode.snapshot.TestNestedSnapshots |
|   | hadoop.hdfs.TestEncryptionZonesWithKMS |
|   | hadoop.hdfs.tools.TestStoragePolicyCommands |
|   | hadoop.hdfs.TestDFSRemove |
|   | hadoop.hdfs.qjournal.TestSecureNNWithQJM |
|   | hadoop.hdfs.server.namenode.TestBlockPlacementPolicyRackFaultTolerant |
|   | hadoop.hdfs.server.namenode.snapshot.TestOpenFilesWithSnapshot |
|   | hadoop.hdfs.web.TestWebHdfsTokens |
|   | hadoop.hdfs.TestHFlush |
|   | hadoop.hdfs.server.namenode.TestHDFSConcat |
|   | hadoop.hdfs.tools.TestDFSHAAdmin |
|   | hadoop.hdfs.server.namenode.metrics.TestNameNodeMetrics |
|   | hadoop.Te

[jira] [Created] (HDFS-8656) Add additional fields to RollingUpgradeInfo JMX bean

2015-06-23 Thread Andrew Wang (JIRA)
Andrew Wang created HDFS-8656:
-

 Summary: Add additional fields to RollingUpgradeInfo JMX bean
 Key: HDFS-8656
 URL: https://issues.apache.org/jira/browse/HDFS-8656
 Project: Hadoop HDFS
  Issue Type: Improvement
  Components: rolling upgrades
Affects Versions: 2.8.0
Reporter: Andrew Wang
Assignee: Andrew Wang


{{isFinalized}} and {{isStarted}} are part of the Java API, but not of the JMX output 
of {{RollingUpgradeInfo}}. It'd be nice to expose these booleans so JMX users don't 
need to do the {{!= 0}} check, which leaks our implementation details.
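The suggestion above can be sketched as follows. This is a hypothetical illustration, not the actual {{RollingUpgradeInfo}} bean: the class and field names are made up, and only the isStarted/isFinalized idea comes from the comment.

```java
// Hedged sketch: expose isStarted/isFinalized directly on the JMX-visible
// type instead of making every consumer infer them from raw timestamps.
// Illustrative only; the real RollingUpgradeInfo bean differs.
public class RollingUpgradeStatusSketch {
    private final long startTime;    // 0 == upgrade not started
    private final long finalizeTime; // 0 == upgrade not finalized

    public RollingUpgradeStatusSketch(long startTime, long finalizeTime) {
        this.startTime = startTime;
        this.finalizeTime = finalizeTime;
    }

    public long getStartTime()    { return startTime; }
    public long getFinalizeTime() { return finalizeTime; }

    // The booleans the report asks for: computed once here, so JMX users
    // no longer repeat the "!= 0" check that leaks the timestamp encoding.
    public boolean isStarted()   { return startTime != 0; }
    public boolean isFinalized() { return finalizeTime != 0; }

    public static void main(String[] args) {
        RollingUpgradeStatusSketch s =
            new RollingUpgradeStatusSketch(1435017600000L, 0L);
        System.out.println(s.isStarted() + " " + s.isFinalized()); // prints "true false"
    }
}
```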



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


Auto-Re: [jira] [Created] (HDFS-8656) Add additional fields to RollingUpgradeInfo JMX bean

2015-06-23 Thread wsb
Your email has been received! Thank you!

Auto-Re: [jira] [Updated] (HDFS-8646) Prune cached replicas from DatanodeDescriptor state on replica invalidation

2015-06-23 Thread wsb
Your email has been received! Thank you!

[jira] [Updated] (HDFS-8646) Prune cached replicas from DatanodeDescriptor state on replica invalidation

2015-06-23 Thread Andrew Wang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-8646?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Andrew Wang updated HDFS-8646:
--
Attachment: hdfs-8646.002.patch

Thanks for reviewing, Colin. I decided to look up the cache locations in the outer 
for loop since most of the time no replicas are cached, so this saves us an 
iteration. Also addressed the checkstyle warning complaining about {{public}} in an 
interface; the other three are line-length warnings.
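The outer-loop reasoning above can be sketched as follows. This is not the actual patch; the method and parameter names are hypothetical, and plain strings and longs stand in for the real datanode and block types.

```java
import java.util.*;

// Hedged sketch: fetch the cached-location set once per block, in the outer
// loop, and skip the per-replica membership work when nothing is cached --
// the common case. Names are illustrative, not the HDFS API.
public class CachedLocationSketch {
    static List<String> locatedNodes(Map<Long, Set<String>> cachedByBlock,
                                     long blockId, List<String> replicas) {
        // Outer-loop lookup: one map access per block, not per replica.
        Set<String> cached =
            cachedByBlock.getOrDefault(blockId, Collections.emptySet());
        List<String> out = new ArrayList<>();
        for (String node : replicas) {
            // Only consult the cached set when it is non-empty.
            if (!cached.isEmpty() && cached.contains(node)) {
                out.add(node + " (cached)");
            } else {
                out.add(node);
            }
        }
        return out;
    }

    public static void main(String[] args) {
        Map<Long, Set<String>> cached = new HashMap<>();
        cached.put(1L, new HashSet<>(Arrays.asList("dn1")));
        System.out.println(locatedNodes(cached, 1L, Arrays.asList("dn1", "dn2")));
        // prints "[dn1 (cached), dn2]"
    }
}
```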

> Prune cached replicas from DatanodeDescriptor state on replica invalidation
> ---
>
> Key: HDFS-8646
> URL: https://issues.apache.org/jira/browse/HDFS-8646
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: caching
>Affects Versions: 2.3.0
>Reporter: Andrew Wang
>Assignee: Andrew Wang
> Attachments: hdfs-8646.001.patch, hdfs-8646.002.patch
>
>
> Currently we remove blocks from the DD's CachedBlockLists on node failure and 
> on cache report, but not on replica invalidation. This can lead to an invalid 
> situation where we return a LocatedBlock with cached locations that are not 
> backed by an on-disk replica.
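The invariant described above can be sketched as follows. This is a hedged illustration under assumed names, not the actual {{DatanodeDescriptor}} API: the point is only that invalidation, like node failure and cache reports, must also prune the cached-block list.

```java
import java.util.*;

// Hedged sketch of the fix's invariant: whenever a replica is invalidated on
// a datanode, drop it from that node's cached-block list too, so a located
// block never reports a cached location without an on-disk replica behind it.
// All names are illustrative.
public class CachedBlockPruneSketch {
    private final Map<String, Set<Long>> cachedBlocksByNode = new HashMap<>();

    public void markCached(String node, long blockId) {
        cachedBlocksByNode.computeIfAbsent(node, n -> new HashSet<>()).add(blockId);
    }

    // Called on replica invalidation, in addition to the existing pruning on
    // node failure and on cache report.
    public void invalidateReplica(String node, long blockId) {
        Set<Long> cached = cachedBlocksByNode.get(node);
        if (cached != null) {
            cached.remove(blockId); // prune the now-stale cached location
        }
    }

    public boolean isCached(String node, long blockId) {
        return cachedBlocksByNode
            .getOrDefault(node, Collections.emptySet()).contains(blockId);
    }

    public static void main(String[] args) {
        CachedBlockPruneSketch s = new CachedBlockPruneSketch();
        s.markCached("dn1", 42L);
        s.invalidateReplica("dn1", 42L);
        System.out.println(s.isCached("dn1", 42L)); // prints "false"
    }
}
```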





Auto-Re: [jira] [Updated] (HDFS-8655) Refactor accesses to INodeFile#blocks

2015-06-23 Thread wsb
Your email has been received! Thank you!

[jira] [Updated] (HDFS-8655) Refactor accesses to INodeFile#blocks

2015-06-23 Thread Zhe Zhang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-8655?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Zhe Zhang updated HDFS-8655:

Attachment: HDFS-8655.00.patch

> Refactor accesses to INodeFile#blocks
> -
>
> Key: HDFS-8655
> URL: https://issues.apache.org/jira/browse/HDFS-8655
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: namenode
>Affects Versions: 2.7.0
>Reporter: Zhe Zhang
>Assignee: Zhe Zhang
> Attachments: HDFS-8655.00.patch
>
>
> When enabling INodeFile support for striped blocks (mainly in HDFS-7749), 
> HDFS-7285 branch generalized the concept of blocks under an inode. Now 
> {{INodeFile#blocks}} only contains contiguous blocks of an inode. This JIRA 
> separates out code refactors for this purpose. Two main changes:
> # Rename {{setBlocks}} to {{setContiguousBlocks}}
> # Replace direct accesses to {{INodeFile#blocks}} with calls to {{getBlocks}}
> It also contains some code cleanups introduced in the branch.





Auto-Re: [jira] [Created] (HDFS-8655) Refactor accesses to INodeFile#blocks

2015-06-23 Thread wsb
Your email has been received! Thank you!

[jira] [Updated] (HDFS-8655) Refactor accesses to INodeFile#blocks

2015-06-23 Thread Zhe Zhang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-8655?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Zhe Zhang updated HDFS-8655:

Status: Patch Available  (was: Open)

> Refactor accesses to INodeFile#blocks
> -
>
> Key: HDFS-8655
> URL: https://issues.apache.org/jira/browse/HDFS-8655
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: namenode
>Affects Versions: 2.7.0
>Reporter: Zhe Zhang
>Assignee: Zhe Zhang
> Attachments: HDFS-8655.00.patch
>
>
> When enabling INodeFile support for striped blocks (mainly in HDFS-7749), 
> HDFS-7285 branch generalized the concept of blocks under an inode. Now 
> {{INodeFile#blocks}} only contains contiguous blocks of an inode. This JIRA 
> separates out code refactors for this purpose. Two main changes:
> # Rename {{setBlocks}} to {{setContiguousBlocks}}
> # Replace direct accesses to {{INodeFile#blocks}} with calls to {{getBlocks}}
> It also contains some code cleanups introduced in the branch.





[jira] [Created] (HDFS-8655) Refactor accesses to INodeFile#blocks

2015-06-23 Thread Zhe Zhang (JIRA)
Zhe Zhang created HDFS-8655:
---

 Summary: Refactor accesses to INodeFile#blocks
 Key: HDFS-8655
 URL: https://issues.apache.org/jira/browse/HDFS-8655
 Project: Hadoop HDFS
  Issue Type: Improvement
  Components: namenode
Affects Versions: 2.7.0
Reporter: Zhe Zhang
Assignee: Zhe Zhang


When enabling INodeFile support for striped blocks (mainly in HDFS-7749), 
HDFS-7285 branch generalized the concept of blocks under an inode. Now 
{{INodeFile#blocks}} only contains contiguous blocks of an inode. This JIRA 
separates out code refactors for this purpose. Two main changes:
# Rename {{setBlocks}} to {{setContiguousBlocks}}
# Replace direct accesses to {{INodeFile#blocks}} with calls to {{getBlocks}}

It also contains some code cleanups introduced in the branch.
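The two refactoring steps above can be sketched in miniature. This is a hypothetical before/after illustration: the real {{INodeFile}} is far larger, and every name here except {{setContiguousBlocks}} and {{getBlocks}} is a placeholder.

```java
// Hedged sketch of the refactor: the setter name says that only contiguous
// blocks live in this field, and all reads go through an accessor that a
// striped-block variant could later override. Illustrative only.
public class INodeFileSketch {
    private long[] blocks = new long[0]; // stands in for BlockInfoContiguous[]

    // Step 1: renamed from setBlocks, making the contiguous-only contract explicit.
    public void setContiguousBlocks(long[] contiguous) {
        this.blocks = contiguous;
    }

    // Step 2: former direct reads of the field now go through this accessor.
    public long[] getBlocks() {
        return blocks;
    }

    public int numBlocks() {
        return getBlocks().length; // accessor instead of the raw field
    }

    public static void main(String[] args) {
        INodeFileSketch f = new INodeFileSketch();
        f.setContiguousBlocks(new long[] {101L, 102L});
        System.out.println(f.numBlocks()); // prints "2"
    }
}
```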





Auto-Re: [jira] [Updated] (HDFS-8644) OzoneHandler : Add volume handler

2015-06-23 Thread wsb
Your email has been received! Thank you!

[jira] [Updated] (HDFS-8644) OzoneHandler : Add volume handler

2015-06-23 Thread Anu Engineer (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-8644?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Anu Engineer updated HDFS-8644:
---
Status: Patch Available  (was: Open)

> OzoneHandler : Add volume handler
> -
>
> Key: HDFS-8644
> URL: https://issues.apache.org/jira/browse/HDFS-8644
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: Anu Engineer
>Assignee: Anu Engineer
> Attachments: hdfs-8644-HDFS-7240.001.patch
>
>
> Add volume handler logic that dispatches volume related calls to the right 
> interface.




