[ https://issues.apache.org/jira/browse/HDFS-9314?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15024385#comment-15024385 ]
Junping Du commented on HDFS-9314:
----------------------------------

Let's keep BlockPlacementPolicyWithNodeGroup#pickupReplicaSet there; otherwise it will mess up block placement after deleting extra replicas. Actually, I think it is better to make BlockPlacementPolicyWithNodeGroup override verifyBlockPlacement() to check status according to its own policy. I will file a separate JIRA to track this.

> Improve BlockPlacementPolicyDefault's picking of excess replicas
> ----------------------------------------------------------------
>
>                 Key: HDFS-9314
>                 URL: https://issues.apache.org/jira/browse/HDFS-9314
>             Project: Hadoop HDFS
>          Issue Type: Improvement
>            Reporter: Ming Ma
>            Assignee: Xiao Chen
>         Attachments: HDFS-9314.001.patch, HDFS-9314.002.patch, HDFS-9314.003.patch, HDFS-9314.004.patch, HDFS-9314.005.patch, HDFS-9314.006.patch
>
>
> The test case used in HDFS-9313 identified a NullPointerException as well as a limitation in excess-replica picking. If the current replicas are on {SSD(rack r1), DISK(rack r2), DISK(rack r3), DISK(rack r3)} and the storage policy changes to HOT_STORAGE_POLICY_ID, BlockPlacementPolicyDefault won't be able to delete the SSD replica.
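To make the limitation in the description concrete, below is a minimal, standalone sketch of the kind of excess-replica pick the issue asks for: prefer deleting a replica whose storage type is no longer allowed by the storage policy, and only fall back to the usual rack-based pick when every replica already matches the policy. The Replica class, the chooseReplicaToDelete helper, and the HOT policy modeled as a DISK-only list are simplified stand-ins, not the real HDFS types or the code from the attached patches.

{code:java}
import java.util.HashMap;
import java.util.List;
import java.util.Map;

/**
 * Standalone sketch (not HDFS code) of the excess-replica pick this issue asks
 * for. Replica, StorageType and the DISK-only "HOT" list below are simplified
 * stand-ins for the real HDFS types; the selection rule is an illustration,
 * not the logic from the attached patches.
 */
public class ExcessReplicaPickSketch {

  enum StorageType { DISK, SSD }

  /** Hypothetical stand-in for a replica location: a storage type on a rack. */
  static class Replica {
    final String rack;
    final StorageType type;
    Replica(String rack, StorageType type) { this.rack = rack; this.type = type; }
    @Override public String toString() { return type + "(" + rack + ")"; }
  }

  /**
   * Prefer deleting a replica whose storage type the policy no longer allows;
   * only when every replica matches the policy, fall back to the rack-based
   * rule of deleting from a rack that holds more than one replica.
   */
  static Replica chooseReplicaToDelete(List<Replica> replicas,
                                       List<StorageType> allowedTypes) {
    for (Replica r : replicas) {
      if (!allowedTypes.contains(r.type)) {
        return r;                       // excess storage type, e.g. SSD under HOT
      }
    }
    Map<String, Integer> perRack = new HashMap<>();
    for (Replica r : replicas) {
      perRack.merge(r.rack, 1, Integer::sum);
    }
    for (Replica r : replicas) {
      if (perRack.get(r.rack) > 1) {
        return r;                       // rack holding more than one replica
      }
    }
    return replicas.get(replicas.size() - 1);
  }

  public static void main(String[] args) {
    // The scenario from the description: SSD(r1), DISK(r2), DISK(r3), DISK(r3),
    // with the storage policy changed to HOT (DISK only).
    List<Replica> replicas = List.of(
        new Replica("r1", StorageType.SSD),
        new Replica("r2", StorageType.DISK),
        new Replica("r3", StorageType.DISK),
        new Replica("r3", StorageType.DISK));
    List<StorageType> hot = List.of(StorageType.DISK);

    // Prints "Delete: SSD(r1)"; a rack-count-only pick would never consider it,
    // because the SSD replica sits alone on its rack.
    System.out.println("Delete: " + chooseReplicaToDelete(replicas, hot));
  }
}
{code}

With the scenario from the description, a rack-count-only pick considers only the two DISK replicas on r3 and never the lone SSD replica on r1, which is exactly the limitation being reported.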
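For the suggestion above to have BlockPlacementPolicyWithNodeGroup override verifyBlockPlacement() and check status according to its own policy, here is one way such a node-group-aware check could look, again as a standalone sketch: the Location type, the verifyPlacement helper, and the concrete rule (replicas should span at least min(replication, 2) racks and no node group should hold two replicas) are assumptions for illustration, not the actual HDFS implementation.

{code:java}
import java.util.HashSet;
import java.util.List;
import java.util.Set;

/**
 * Standalone sketch of a node-group-aware placement check, in the spirit of the
 * verifyBlockPlacement() override suggested above. Location, verifyPlacement and
 * the concrete rule are illustrative assumptions, not the real
 * BlockPlacementPolicyWithNodeGroup implementation.
 */
public class NodeGroupPlacementCheckSketch {

  /** Hypothetical replica location: the rack and the node group within it. */
  static class Location {
    final String rack;
    final String nodeGroup;
    Location(String rack, String nodeGroup) { this.rack = rack; this.nodeGroup = nodeGroup; }
  }

  /**
   * Assumed rule: replicas must span at least min(replication, 2) racks, and
   * no node group may hold more than one replica.
   */
  static String verifyPlacement(List<Location> locs, int replication) {
    Set<String> racks = new HashSet<>();
    Set<String> nodeGroups = new HashSet<>();
    for (Location l : locs) {
      racks.add(l.rack);
      nodeGroups.add(l.nodeGroup);
    }
    boolean racksOk = racks.size() >= Math.min(replication, 2);
    boolean nodeGroupsOk = nodeGroups.size() == locs.size();
    if (racksOk && nodeGroupsOk) {
      return "placement OK";
    }
    return "placement violated: " + locs.size() + " replicas on "
        + racks.size() + " rack(s) and " + nodeGroups.size() + " node group(s)";
  }

  public static void main(String[] args) {
    // Two replicas share a node group: a rack-only check passes, this one fails.
    List<Location> locs = List.of(
        new Location("r1", "r1/ng1"),
        new Location("r1", "r1/ng1"),
        new Location("r2", "r2/ng1"));
    System.out.println(verifyPlacement(locs, 3));
  }
}
{code}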