[ https://issues.apache.org/jira/browse/HDFS-2981?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13213177#comment-13213177 ]

Tsz Wo (Nicholas), SZE commented on HDFS-2981:
----------------------------------------------

Hi Todd,

Enabling the feature does not mean that a block is always re-transferred when a node 
in the pipeline fails.  There is another conf property, 
dfs.client.block.write.replace-datanode-on-failure.policy, for configuring when a 
failed datanode is replaced.  The default is 
{noformat}
    DEFAULT: 
      Let r be the replication number.
      Let n be the number of existing datanodes.
      Add a new datanode only if r is greater than or equal to 3 and either
      (1) floor(r/2) is greater than or equal to n; or
      (2) r is greater than n and the block is hflushed/appended.
{noformat}
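
To make the condition concrete: with r = 3 and the pipeline down to n = 2 datanodes, 
floor(3/2) = 1 < 2, so a plain write keeps going with 2 datanodes, while an 
hflushed/appended block gets a replacement since r > n.  A rough sketch of the check 
(illustrative only, not the actual client code):
{noformat}
// A sketch of the DEFAULT condition quoted above.
//   r: the replication number
//   n: the number of existing datanodes in the pipeline
static boolean shouldAddNewDatanode(int r, int n, boolean isHflushedOrAppended) {
  // r >= 3 and either (1) floor(r/2) >= n, or
  // (2) r > n and the block is hflushed/appended
  return r >= 3 && (r / 2 >= n || (r > n && isHflushedOrAppended));
}
{noformat}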

Also, individual applications can set the policy to NEVER if that is desirable.
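
For example, an application that would rather keep writing through a shrunken 
pipeline than replace datanodes could do something like the following.  This is a 
minimal sketch using the standard Configuration API (assume it runs inside a method 
that can throw IOException):
{noformat}
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileSystem;

// Keep the feature enabled, but opt this client out of replacement.
Configuration conf = new Configuration();
conf.setBoolean("dfs.client.block.write.replace-datanode-on-failure.enable", true);
conf.set("dfs.client.block.write.replace-datanode-on-failure.policy", "NEVER");
FileSystem fs = FileSystem.get(conf);  // streams created via fs use this policy
{noformat}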
                
> The default value of 
> dfs.client.block.write.replace-datanode-on-failure.enable should be true
> ---------------------------------------------------------------------------------------------
>
>                 Key: HDFS-2981
>                 URL: https://issues.apache.org/jira/browse/HDFS-2981
>             Project: Hadoop HDFS
>          Issue Type: Improvement
>            Reporter: Tsz Wo (Nicholas), SZE
>            Assignee: Tsz Wo (Nicholas), SZE
>         Attachments: h2981_20120221.patch
>
>
> There was a typo earlier in the default value of 
> dfs.client.block.write.replace-datanode-on-failure.enable.  Then, HDFS-2944 
> changed it from "ture" to "false".  It should be changed to "true".

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira

        
