[ https://issues.apache.org/jira/browse/HDFS-5434?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13878063#comment-13878063 ]

Hudson commented on HDFS-5434:
------------------------------

SUCCESS: Integrated in Hadoop-trunk-Commit #5031 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/5031/])
HDFS-5434. Change block placement policy constructors from package private to 
protected. (Buddy Taylor via Arpit Agarwal) (arp: 
http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1560217)
* /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/BlockPlacementPolicyDefault.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/BlockPlacementPolicyWithNodeGroup.java
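
The visibility change matters because a custom placement policy outside the blockmanagement package cannot invoke a package-private superclass constructor. A rough sketch of the effect, assuming a custom policy such as the attached BlockPlacementPolicyMinPipelineSize extends BlockPlacementPolicyDefault (the package and constructor shown here are illustrative, not the actual patch):

{code}
package org.example.hdfs;

import org.apache.hadoop.hdfs.server.blockmanagement.BlockPlacementPolicyDefault;

// Before this change the superclass constructors were package private, so a
// subclass outside org.apache.hadoop.hdfs.server.blockmanagement could not
// call them; making them protected allows pluggable policies like this one.
public class BlockPlacementPolicyMinPipelineSize extends BlockPlacementPolicyDefault {
  public BlockPlacementPolicyMinPipelineSize() {
    super(); // legal once the no-arg superclass constructor is protected
  }
}
{code}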


> Write resiliency for replica count 1
> ------------------------------------
>
>                 Key: HDFS-5434
>                 URL: https://issues.apache.org/jira/browse/HDFS-5434
>             Project: Hadoop HDFS
>          Issue Type: Bug
>          Components: namenode
>    Affects Versions: 2.2.0
>            Reporter: Buddy
>            Priority: Minor
>             Fix For: 3.0.0, 2.4.0
>
>         Attachments: BlockPlacementPolicyMinPipelineSize.java, 
> BlockPlacementPolicyMinPipelineSizeWithNodeGroup.java, 
> HDFS-5434-branch-2.patch, HDFS_5434.patch
>
>
> If a file has a replica count of one, the HDFS client is exposed to write 
> failures if the data node fails during a write. With a pipeline of size one, 
> no recovery is possible if the sole data node dies.
> A simple fix is to force a minimum pipeline size of 2, while leaving the 
> replication count as 1. The implementation for this is fairly non-invasive.
> Although the replica count is one, the block will be written to two data 
> nodes instead of one. If one of the data nodes fails during the write, normal 
> pipeline recovery will ensure that the write succeeds to the surviving data 
> node.
> The existing code in the name node will prune the extra replica when it 
> receives the block received reports for the finalized block from both data 
> nodes. This results in the intended replica count of one for the block.
> This behavior should be controlled by a configuration option such as 
> {{dfs.namenode.minPipelineSize}}.
> This behavior can be implemented in {{FSNamesystem.getAdditionalBlock()}} by 
> ensuring that the pipeline size passed to 
> {{BlockPlacementPolicy.chooseTarget()}} via the replication parameter is:
> {code}
> max(replication, ${dfs.namenode.minPipelineSize})
> {code}
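
A minimal sketch of that check, assuming a small namenode-side helper; the property name {{dfs.namenode.minPipelineSize}} is the one proposed above, while the method and variable names are illustrative rather than taken from the patch:

{code}
// Hypothetical helper for the block-allocation path: compute how many
// datanodes to put in the write pipeline before calling chooseTarget().
static int effectivePipelineSize(org.apache.hadoop.conf.Configuration conf,
                                 short replication) {
  // Proposed option from this issue; a default of 1 preserves current
  // behavior when the option is not configured.
  int minPipelineSize = conf.getInt("dfs.namenode.minPipelineSize", 1);
  // Ask the placement policy for at least minPipelineSize targets even when
  // the file's replication factor is lower; the namenode prunes the extra
  // replica after the block received reports arrive for the finalized block.
  return Math.max(replication, minPipelineSize);
}
{code}

With replication 1 and {{dfs.namenode.minPipelineSize}} set to 2, the value passed as the replication parameter to {{BlockPlacementPolicy.chooseTarget()}} would be 2, so a single data node failure during the write can still be handled by normal pipeline recovery.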



--
This message was sent by Atlassian JIRA
(v6.1.5#6160)
