[ https://issues.apache.org/jira/browse/HDFS-630?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=12780185#action_12780185 ]
stack commented on HDFS-630:
----------------------------

I was going to commit this in a day or so unless there's an objection. (The formatting is a little odd at times in this patch, but Cosmin seems to be doing his best to follow the formatting already in place in the files he's patching, at least for the few I checked.)

> In DFSOutputStream.nextBlockOutputStream(), the client can exclude specific
> datanodes when locating the next block.
> -------------------------------------------------------------------------------------------------------------------
>
>                 Key: HDFS-630
>                 URL: https://issues.apache.org/jira/browse/HDFS-630
>             Project: Hadoop HDFS
>          Issue Type: New Feature
>          Components: hdfs client
>    Affects Versions: 0.21.0
>            Reporter: Ruyue Ma
>            Assignee: Ruyue Ma
>            Priority: Minor
>         Attachments: 0001-Fix-HDFS-630-for-0.21-and-trunk-unified.patch, 0001-Fix-HDFS-630-for-0.21.patch, 0001-Fix-HDFS-630-svn.patch, 0001-Fix-HDFS-630-svn.patch, 0001-Fix-HDFS-630-trunk-svn-1.patch, HDFS-630.patch
>
>
> Created from HDFS-200.
> If, during a write, the DFSClient sees that a block replica location for a newly allocated block is not connectable, it re-requests the NN for a fresh set of replica locations for the block. It tries this dfs.client.block.write.retries times (default 3), sleeping 6 seconds between each retry (see DFSClient.nextBlockOutputStream).
> This setting works well on a reasonably sized cluster; with only a few datanodes in the cluster, every retry may pick the same dead datanode and the above logic bails out.
> Our solution: when getting block locations from the namenode, the client passes the NN the datanodes to exclude. The list of dead datanodes applies only to a single block allocation.
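For context, here is a minimal, self-contained sketch of the retry-with-exclusion idea described above. The types and method names (NamenodeStub, Connector, locateBlock, connect) are hypothetical stand-ins for illustration only; this is neither the actual patch nor the real HDFS client API.

{code:java}
import java.util.ArrayList;
import java.util.List;

/**
 * Minimal sketch of the retry-with-exclusion idea from this issue, not the
 * actual HDFS code: when a freshly allocated block's datanode turns out to be
 * unreachable, remember it and ask the namenode again, this time excluding it.
 * NamenodeStub, Connector, locateBlock and connect are hypothetical stand-ins.
 */
public class ExcludeDeadDatanodesSketch {

  /** Hypothetical namenode facade that can honor an exclusion list. */
  interface NamenodeStub {
    List<String> locateBlock(String path, List<String> excludedNodes);
  }

  /** Hypothetical connectivity check standing in for real pipeline setup. */
  interface Connector {
    boolean connect(String datanode);
  }

  static final int MAX_RETRIES = 3;        // mirrors the dfs.client.block.write.retries default
  static final long RETRY_SLEEP_MS = 6000; // mirrors the 6-second sleep described above

  static List<String> nextBlockTargets(NamenodeStub nn, Connector conn, String path)
      throws InterruptedException {
    // The exclusion list lives only for this one block allocation,
    // exactly as the issue proposes.
    List<String> excluded = new ArrayList<String>();
    for (int attempt = 0; attempt <= MAX_RETRIES; attempt++) {
      List<String> targets = nn.locateBlock(path, excluded);
      String first = targets.get(0);
      if (conn.connect(first)) {
        return targets;      // pipeline can be built against these nodes
      }
      excluded.add(first);   // don't let the NN hand back this dead node again
      Thread.sleep(RETRY_SLEEP_MS);
    }
    throw new IllegalStateException("Could not allocate block for " + path);
  }
}
{code}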