[ https://issues.apache.org/jira/browse/HDFS-5434?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13877876#comment-13877876 ]
Hadoop QA commented on HDFS-5434:
---------------------------------

{color:red}-1 overall{color}. Here are the results of testing the latest attachment
http://issues.apache.org/jira/secure/attachment/12623487/HDFS-5434-branch-2.patch
against trunk revision .

{color:green}+1 @author{color}. The patch does not contain any @author tags.

{color:red}-1 tests included{color}. The patch doesn't appear to include any new or modified tests. Please justify why no new tests are needed for this patch. Also please list what manual steps were performed to verify this patch.

{color:green}+1 javac{color}. The applied patch does not increase the total number of javac compiler warnings.

{color:green}+1 javadoc{color}. The javadoc tool did not generate any warning messages.

{color:green}+1 eclipse:eclipse{color}. The patch built with eclipse:eclipse.

{color:green}+1 findbugs{color}. The patch does not introduce any new Findbugs (version 1.3.9) warnings.

{color:green}+1 release audit{color}. The applied patch does not increase the total number of release audit warnings.

{color:green}+1 core tests{color}. The patch passed unit tests in hadoop-hdfs-project/hadoop-hdfs.

{color:green}+1 contrib tests{color}. The patch passed contrib unit tests.

Test results: https://builds.apache.org/job/PreCommit-HDFS-Build/5925//testReport/
Console output: https://builds.apache.org/job/PreCommit-HDFS-Build/5925//console

This message is automatically generated.

> Write resiliency for replica count 1
> ------------------------------------
>
> Key: HDFS-5434
> URL: https://issues.apache.org/jira/browse/HDFS-5434
> Project: Hadoop HDFS
> Issue Type: Bug
> Components: namenode
> Affects Versions: 2.2.0
> Reporter: Buddy
> Priority: Minor
> Attachments: BlockPlacementPolicyMinPipelineSize.java, BlockPlacementPolicyMinPipelineSizeWithNodeGroup.java, HDFS-5434-branch-2.patch, HDFS_5434.patch
>
> If a file has a replica count of one, the HDFS client is exposed to write failures if the data node fails during a write. With a pipeline size of one, no recovery is possible if the sole data node dies.
> A simple fix is to force a minimum pipeline size of 2 while leaving the replication count as 1. The implementation is fairly non-invasive. Although the replica count is one, the block will be written to two data nodes instead of one. If one of the data nodes fails during the write, normal pipeline recovery will ensure that the write succeeds on the surviving data node.
> The existing code in the name node will prune the extra replica when it receives the block received reports for the finalized block from both data nodes. This results in the intended replica count of one for the block.
> This behavior should be controlled by a configuration option such as {{dfs.namenode.minPipelineSize}}.
> This behavior can be implemented in {{FSNamesystem.getAdditionalBlock()}} by ensuring that the pipeline size passed to {{BlockPlacementPolicy.chooseTarget()}} in the replication parameter is:
> {code}
> max(replication, ${dfs.namenode.minPipelineSize})
> {code}
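To make the proposed clamp concrete, here is a minimal, self-contained sketch of where the {{max()}} would apply. Only the config key {{dfs.namenode.minPipelineSize}} and the {{max(replication, minPipelineSize)}} expression come from the description above; the class name, the {{effectivePipelineSize()}} helper, and the default value of 1 are illustrative assumptions, not the committed patch.

{code}
import org.apache.hadoop.conf.Configuration;

/**
 * Sketch only, not the committed patch: computes the pipeline size that
 * would be passed to BlockPlacementPolicy.chooseTarget() in place of the
 * raw replication factor.
 */
public class MinPipelineSizeSketch {
  // The key comes from the proposal above; the default of 1 (i.e. no
  // widening) is an assumption.
  static final String MIN_PIPELINE_SIZE_KEY = "dfs.namenode.minPipelineSize";
  static final int MIN_PIPELINE_SIZE_DEFAULT = 1;

  /** replication is the file's replica count (1 in the scenario above). */
  static int effectivePipelineSize(Configuration conf, short replication) {
    int minPipelineSize =
        conf.getInt(MIN_PIPELINE_SIZE_KEY, MIN_PIPELINE_SIZE_DEFAULT);
    // With replication = 1 and minPipelineSize = 2, the block is written to
    // two data nodes, so a single data node failure during the write is
    // survivable via normal pipeline recovery.
    return Math.max(replication, minPipelineSize);
  }

  public static void main(String[] args) {
    Configuration conf = new Configuration();
    conf.setInt(MIN_PIPELINE_SIZE_KEY, 2);
    System.out.println(effectivePipelineSize(conf, (short) 1)); // prints 2
  }
}
{code}

As the description notes, the name node then trims the block back to one replica through its existing excess-replica handling once the block received reports for the finalized block arrive from both data nodes, so the file's replication factor is unaffected.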