[ https://issues.apache.org/jira/browse/HDFS-385?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=12727855#action_12727855 ]
Hong Tang commented on HDFS-385:
--------------------------------

Some minor nits:
- In class BlockPlacementPolicy, the javadoc for chooseTarget documents a parameter, excludedNodes, that the method does not take.
- There are two abstract versions of chooseTarget(). Is it possible to provide a default implementation of chooseTarget(FSInodeName srcInode, int numOfReplicas, DatanodeDescriptor writer, List<DatanodeDescriptor> chosenNodes, long blocksize) on top of chooseTarget(String srcPath, int numOfReplicas, DatanodeDescriptor writer, List<DatanodeDescriptor> chosenNodes, long blocksize)?
- There is an asymmetry in chooseTarget: it takes a List of DatanodeDescriptor but returns an array of DatanodeDescriptor. Why not return a List too?

> Design a pluggable interface to place replicas of blocks in HDFS
> ----------------------------------------------------------------
>
>                 Key: HDFS-385
>                 URL: https://issues.apache.org/jira/browse/HDFS-385
>             Project: Hadoop HDFS
>          Issue Type: Improvement
>            Reporter: dhruba borthakur
>            Assignee: dhruba borthakur
>             Fix For: 0.21.0
>
>         Attachments: BlockPlacementPluggable.txt, BlockPlacementPluggable2.txt, BlockPlacementPluggable3.txt, BlockPlacementPluggable4.txt, BlockPlacementPluggable4.txt
>
> The current HDFS code typically places one replica on the local rack, the second replica on a random remote rack, and the third replica on a random node of that remote rack. This algorithm is baked into the NameNode's code. It would be nice to make the block placement algorithm a pluggable interface. This would allow experimentation with different placement algorithms based on workloads, availability guarantees, and failure models.

--
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.
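The second nit above could be addressed roughly as follows. This is a minimal sketch, not the actual HDFS patch: DatanodeDescriptor and FSInodeName are stand-in stubs defined here for illustration, and the getFullPathName() accessor on the stub is an assumption about how an inode would map to a path.

```java
import java.util.List;

// Sketch of a BlockPlacementPolicy with only ONE abstract chooseTarget(),
// the String-path version; the FSInodeName overload gets a default
// implementation that delegates to it.
abstract class BlockPlacementPolicySketch {

    // Stand-in stubs for the real HDFS types (illustration only).
    static class DatanodeDescriptor {}
    interface FSInodeName {
        String getFullPathName(); // assumed accessor, not the real HDFS API
    }

    // The single abstract method that concrete policies must implement.
    abstract DatanodeDescriptor[] chooseTarget(String srcPath,
            int numOfReplicas, DatanodeDescriptor writer,
            List<DatanodeDescriptor> chosenNodes, long blocksize);

    // Default implementation of the inode-based overload, built on top of
    // the path-based one, as suggested in the comment.
    DatanodeDescriptor[] chooseTarget(FSInodeName srcInode,
            int numOfReplicas, DatanodeDescriptor writer,
            List<DatanodeDescriptor> chosenNodes, long blocksize) {
        return chooseTarget(srcInode.getFullPathName(), numOfReplicas,
                writer, chosenNodes, blocksize);
    }
}
```

With this shape, a plugged-in policy only overrides the String-path variant, and callers holding an inode still get correct behavior through the default delegation.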