[ https://issues.apache.org/jira/browse/HDFS-6524?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Allen Wittenauer updated HDFS-6524:
-----------------------------------
    Labels: BB2015-05-TBR  (was: )

> chooseDataNode decides retry times considering with block replica number
> ------------------------------------------------------------------------
>
>                 Key: HDFS-6524
>                 URL: https://issues.apache.org/jira/browse/HDFS-6524
>             Project: Hadoop HDFS
>          Issue Type: Improvement
>          Components: hdfs-client
>    Affects Versions: 3.0.0
>            Reporter: Liang Xie
>            Assignee: Liang Xie
>            Priority: Minor
>              Labels: BB2015-05-TBR
>         Attachments: HDFS-6524.txt
>
>
> Currently chooseDataNode() retries according to the setting
> dfsClientConf.maxBlockAcquireFailures, which defaults to 3
> (DFS_CLIENT_MAX_BLOCK_ACQUIRE_FAILURES_DEFAULT = 3). It would be better to
> also take the block replication factor into account, e.g. on a cluster
> configured with only two block replicas, or one using a Reed-Solomon
> encoding solution with a single replica. Capping retries at the replica
> count helps reduce long-tail read latency.
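
A minimal illustrative sketch of the idea (not the attached HDFS-6524.txt
patch; effectiveRetryLimit is a hypothetical helper): the retry limit used by
chooseDataNode() would be capped at the block's replica count, which in
DFSClient would typically come from LocatedBlock#getLocations().length,
instead of always using dfsClientConf.maxBlockAcquireFailures.

    class RetryLimitSketch {
        // Cap the block-acquire retry rounds at the number of replicas the
        // block actually has, but always allow at least one attempt.
        static int effectiveRetryLimit(int maxBlockAcquireFailures, int replicaCount) {
            return Math.max(1, Math.min(maxBlockAcquireFailures, replicaCount));
        }

        public static void main(String[] args) {
            // With the default limit of 3 but only 2 replicas,
            // retries stop after 2 failed rounds instead of 3.
            System.out.println(effectiveRetryLimit(3, 2)); // prints 2
        }
    }

With only one or two replicas there is nothing left to try after the last
replica fails, so the extra retry rounds only add to the read's tail latency.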



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)
