[ https://issues.apache.org/jira/browse/HDDS-700?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16843742#comment-16843742 ]
Sammi Chen commented on HDDS-700:
---------------------------------

[~swagle], the short answer is "separate NG". My initial target here is to provide an HDFS-compatible placement policy, so the network topology looks like "/r1/n". Even if the topology looks like "/d1/switch1/r1/n1", this implementation still works. Essentially, this implementation provides the capability to allocate datanodes with only the leaf's parent involved.

In the current customizable network topology, an admin can use a topology with any number of hierarchy levels. For your example "/d1/r1/ng/n", I think the desired placement policy would put the first and second replicas on different NGs on the same rack, and the third replica on an NG on a different rack. So if the requirement is to consider crossing 2 ancestor levels (the leaf's parent and grandparent), a new placement policy implementation is needed. Likewise, if the requirement is to cross 3 ancestor levels, yet another placement policy implementation is needed.

For this implementation, I will add a topology layer check during initialization. It applies to a 3-layer network topology such as "/r1/n".

> Support rack awared node placement policy based on network topology
> -------------------------------------------------------------------
>
>                 Key: HDDS-700
>                 URL: https://issues.apache.org/jira/browse/HDDS-700
>             Project: Hadoop Distributed Data Store
>          Issue Type: Sub-task
>            Reporter: Xiaoyu Yao
>            Assignee: Sammi Chen
>            Priority: Major
>         Attachments: HDDS-700.01.patch
>
> Implement a new container placement policy implementation based on the datanode's
> network topology. It follows the same rule as HDFS.
> By default, with 3 replicas, two replicas will be on the same rack, and the third
> replica and all remaining replicas will be on different racks.
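The placement rule described above (rack = the leaf's parent in a 3-layer "/r1/n" topology; two replicas on one rack, the third on a different rack) can be sketched as follows. This is a minimal illustration, not the HDDS-700 patch itself; the class name, node paths, and helper methods are hypothetical.

```java
import java.util.ArrayList;
import java.util.List;
import java.util.Random;

/**
 * Minimal sketch of an HDFS-style rack-aware placement rule over a
 * 3-layer topology such as "/r1/n1": with 3 replicas, place two on
 * the same rack and the third (and any further replicas) on a
 * different rack. Hypothetical names; not the actual HDDS-700 code.
 */
public class RackAwarePlacementSketch {

    /** The rack of a node path like "/r1/n3" is its leaf's parent: "/r1". */
    static String rackOf(String nodePath) {
        return nodePath.substring(0, nodePath.lastIndexOf('/'));
    }

    static List<String> choose(List<String> nodes, int replicas, Random rnd) {
        List<String> chosen = new ArrayList<>();
        // First replica: any node.
        String first = nodes.get(rnd.nextInt(nodes.size()));
        chosen.add(first);
        String firstRack = rackOf(first);
        // Second replica: a different node on the same rack.
        for (String n : nodes) {
            if (chosen.size() >= 2) break;
            if (!n.equals(first) && rackOf(n).equals(firstRack)) {
                chosen.add(n);
            }
        }
        // Third and remaining replicas: nodes on different racks.
        for (String n : nodes) {
            if (chosen.size() >= replicas) break;
            if (!rackOf(n).equals(firstRack) && !chosen.contains(n)) {
                chosen.add(n);
            }
        }
        return chosen;
    }

    public static void main(String[] args) {
        List<String> nodes = List.of(
            "/r1/n1", "/r1/n2", "/r2/n3", "/r2/n4", "/r3/n5");
        System.out.println(choose(nodes, 3, new Random()));
    }
}
```

Because the rack is always computed as the leaf's parent, the same code works even if the paths are deeper (e.g. "/d1/switch1/r1/n1"); crossing more ancestor levels, as in the "/d1/r1/ng/n" case, would require a different policy, as the comment notes.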
--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

---------------------------------------------------------------------
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org