[ https://issues.apache.org/jira/browse/HDFS-9314?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15020408#comment-15020408 ]
Walter Su commented on HDFS-9314:
---------------------------------

Aside from the 2-replica scenario [~mingma] mentioned, the changes to the default policy look good to me.
1. Do you mind copying the old {{pickupReplicaSet}} logic to {{BlockPlacementPolicyRackFaultTolerant}}? It still expects not to reduce the rack count where possible.
2. {{BlockPlacementPolicyWithNodeGroup}} prefers to pick nodes on the same node-group. If there are no such nodes, its logic was no different from the default policy's before this change, so {{BlockPlacementPolicyWithNodeGroup}} needs to be fixed too.

> Improve BlockPlacementPolicyDefault's picking of excess replicas
> ----------------------------------------------------------------
>
>                 Key: HDFS-9314
>                 URL: https://issues.apache.org/jira/browse/HDFS-9314
>             Project: Hadoop HDFS
>          Issue Type: Improvement
>            Reporter: Ming Ma
>            Assignee: Xiao Chen
>         Attachments: HDFS-9314.001.patch, HDFS-9314.002.patch, HDFS-9314.003.patch, HDFS-9314.004.patch
>
>
> The test case used in HDFS-9313 identified a NullPointerException as well as a limitation in excess-replica picking. If the current replicas are on {SSD(rack r1), DISK(rack r2), DISK(rack r3), DISK(rack r3)} and the storage policy changes to HOT_STORAGE_POLICY_ID, BlockPlacementPolicyDefault won't be able to delete the SSD replica.

--
This message was sent by Atlassian JIRA
(v6.3.4#6332)
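The behavior the issue asks for can be sketched roughly as follows. This is a hypothetical illustration only, not Hadoop's actual {{pickupReplicaSet}} / {{chooseReplicaToDelete}} code: the {{Replica}} type and {{pickExcess}} method are invented for this example. The idea is to prefer deleting a replica whose storage type no longer matches the policy, while breaking ties toward a replica whose rack still holds another replica, so the rack count is not reduced.

```java
import java.util.Comparator;
import java.util.HashMap;
import java.util.List;
import java.util.Map;

// Hypothetical type for illustration only -- not Hadoop's actual
// DatanodeStorageInfo.
record Replica(String rack, String storageType) {}

class ExcessReplicaPicker {
    // Pick the excess replica to delete: prefer one whose storage type no
    // longer matches the policy, then prefer one whose rack still holds
    // another replica, so deleting it does not reduce the rack count.
    static Replica pickExcess(List<Replica> replicas, String expectedType) {
        Map<String, Integer> perRack = new HashMap<>();
        for (Replica r : replicas) {
            perRack.merge(r.rack(), 1, Integer::sum);
        }
        return replicas.stream()
            .max(Comparator
                // true sorts after false, so max() favors a type mismatch...
                .comparing((Replica r) -> !r.storageType().equals(expectedType))
                // ...and then a rack that keeps a replica after deletion.
                .thenComparing(r -> perRack.get(r.rack()) > 1))
            .orElseThrow();
    }

    public static void main(String[] args) {
        // The scenario from the issue description: HOT expects DISK
        // everywhere, so the SSD replica on rack r1 should be deleted.
        List<Replica> replicas = List.of(
            new Replica("r1", "SSD"),
            new Replica("r2", "DISK"),
            new Replica("r3", "DISK"),
            new Replica("r3", "DISK"));
        System.out.println(pickExcess(replicas, "DISK"));
        // prints: Replica[rack=r1, storageType=SSD]
    }
}
```

With all replicas already on DISK, the same tie-break would instead pick one of the two rack-r3 replicas, which keeps all three racks represented.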