[ https://issues.apache.org/jira/browse/HDFS-5434?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13877672#comment-13877672 ]
Arpit Agarwal commented on HDFS-5434:
-------------------------------------

Buddy, I think we can just use this Jira to make the change.

> Write resiliency for replica count 1
> ------------------------------------
>
>                 Key: HDFS-5434
>                 URL: https://issues.apache.org/jira/browse/HDFS-5434
>             Project: Hadoop HDFS
>          Issue Type: Bug
>          Components: namenode
>    Affects Versions: 2.2.0
>            Reporter: Buddy
>            Priority: Minor
>         Attachments: BlockPlacementPolicyMinPipelineSize.java, BlockPlacementPolicyMinPipelineSizeWithNodeGroup.java, HDFS-5434-branch-2.patch, HDFS_5434.patch
>
> If a file has a replica count of one, the HDFS client is exposed to write failures if the data node fails during a write. With a pipeline size of one, no recovery is possible if the sole data node dies.
> A simple fix is to force a minimum pipeline size of 2 while leaving the replication count as 1. The implementation is fairly non-invasive. Although the replica count is one, the block will be written to two data nodes instead of one. If one of the data nodes fails during the write, normal pipeline recovery will ensure that the write succeeds on the surviving data node.
> The existing code in the name node will prune the extra replica when it receives the block-received reports for the finalized block from both data nodes. This results in the intended replica count of one for the block.
> This behavior should be controlled by a configuration option such as {{dfs.namenode.minPipelineSize}}.
> This behavior can be implemented in {{FSNamesystem.getAdditionalBlock()}} by ensuring that the pipeline size passed to {{BlockPlacementPolicy.chooseTarget()}} in the replication parameter is:
> {code}
> max(replication, ${dfs.namenode.minPipelineSize})
> {code}

--
This message was sent by Atlassian JIRA
(v6.1.5#6160)
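The proposed change boils down to clamping the pipeline size passed to the block placement policy. A minimal Java sketch of that clamping, assuming a hypothetical helper name (`effectivePipelineSize`) and a configured minimum of 2; the real patch would read the value from {{dfs.namenode.minPipelineSize}} in the namenode configuration:

```java
// Sketch only: illustrates the max(replication, minPipelineSize) clamp
// proposed for FSNamesystem.getAdditionalBlock(). Names here are
// hypothetical, not the actual patch.
public class MinPipelineSizeSketch {

    // Assumed default for dfs.namenode.minPipelineSize.
    static final int MIN_PIPELINE_SIZE = 2;

    /**
     * Returns the pipeline size to hand to
     * BlockPlacementPolicy.chooseTarget(): never smaller than the
     * configured minimum, even when replication is 1.
     */
    static int effectivePipelineSize(int replication, int minPipelineSize) {
        return Math.max(replication, minPipelineSize);
    }

    public static void main(String[] args) {
        // Replication 1: block is still written to 2 datanodes,
        // so pipeline recovery can survive one datanode failure.
        System.out.println(effectivePipelineSize(1, MIN_PIPELINE_SIZE)); // 2
        // Higher replication factors are unaffected by the clamp.
        System.out.println(effectivePipelineSize(3, MIN_PIPELINE_SIZE)); // 3
    }
}
```

Once both datanodes report the finalized block, the namenode's existing over-replication handling prunes the extra copy back down to the requested replica count of one.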