[ https://issues.apache.org/jira/browse/HDFS-9314?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15014801#comment-15014801 ]

Xiao Chen commented on HDFS-9314:
---------------------------------

Thanks [~mingma] for the review, I'll be working on the next rev.
One quick clarification: I thought about passing in {{rackMap}}, but that would 
require adding an extra parameter to both {{chooseReplicaToDelete}} and 
{{pickupReplicaSet}}.
- I guess {{@VisibleForTesting public chooseReplicaToDelete}} should be fine. 
- But changing the {{protected pickupReplicaSet}} would require modifying all 
subclasses of {{BlockPlacementPolicyDefault}}. Since that class is marked 
{{@InterfaceAudience.Private}}, [no external subclasses are 
expected|https://hadoop.apache.org/docs/r2.7.1/api/org/apache/hadoop/classification/InterfaceAudience.html],
 so we'll just update all usages in the current code and not worry about 
compatibility. A rough signature sketch follows below.
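
To make the idea concrete, here is a rough sketch of what the changed signatures 
could look like. This is not the actual patch: the parameter lists are abbreviated 
from the 2.7-era methods in {{BlockPlacementPolicyDefault}}, and the {{rackMap}} 
parameter name/type is only illustrative.

{code:java}
// Rough sketch only -- abbreviated signatures, not the actual patch.
@VisibleForTesting
public DatanodeStorageInfo chooseReplicaToDelete(
    Collection<DatanodeStorageInfo> moreThanOne,
    Collection<DatanodeStorageInfo> exactlyOne,
    List<StorageType> excessTypes,
    Map<String, List<DatanodeStorageInfo>> rackMap) {  // hypothetical extra parameter
  Collection<DatanodeStorageInfo> candidates =
      pickupReplicaSet(moreThanOne, exactlyOne, rackMap);
  // ... existing selection logic over candidates ...
  return null; // placeholder
}

// BlockPlacementPolicyDefault is @InterfaceAudience.Private, so only the
// in-tree subclasses need to be updated for this signature change.
protected Collection<DatanodeStorageInfo> pickupReplicaSet(
    Collection<DatanodeStorageInfo> moreThanOne,
    Collection<DatanodeStorageInfo> exactlyOne,
    Map<String, List<DatanodeStorageInfo>> rackMap) {  // hypothetical extra parameter
  return moreThanOne.isEmpty() ? exactlyOne : moreThanOne;
}
{code}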

Are we on the same page here?

> Improve BlockPlacementPolicyDefault's picking of excess replicas
> ----------------------------------------------------------------
>
>                 Key: HDFS-9314
>                 URL: https://issues.apache.org/jira/browse/HDFS-9314
>             Project: Hadoop HDFS
>          Issue Type: Improvement
>            Reporter: Ming Ma
>            Assignee: Xiao Chen
>         Attachments: HDFS-9314.001.patch, HDFS-9314.002.patch, 
> HDFS-9314.003.patch
>
>
> The test case used in HDFS-9313 identified a NullPointerException as well as 
> a limitation of excess replica picking. If the current replicas are on 
> {SSD(rack r1), DISK(rack r2), DISK(rack r3), DISK(rack r3)} and the storage 
> policy changes to HOT_STORAGE_POLICY_ID, BlockPlacementPolicyDefault won't 
> be able to delete the SSD replica.
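
For context, the limitation described above can be illustrated with a small 
standalone example. This is plain Java, not HDFS code; it only mimics the 
"replicas on racks holding more than one replica are the deletion candidates" 
behaviour as described, with hypothetical names.

{code:java}
import java.util.ArrayList;
import java.util.HashMap;
import java.util.List;
import java.util.Map;

// Standalone illustration of the scenario above; NOT HDFS code.
public class ExcessReplicaExample {
  enum StorageType { SSD, DISK }

  static class Replica {
    final String rack;
    final StorageType type;
    Replica(String rack, StorageType type) { this.rack = rack; this.type = type; }
    @Override public String toString() { return type + "(" + rack + ")"; }
  }

  // Mimics the behaviour described above: only replicas on racks holding more
  // than one replica are offered as deletion candidates.
  static List<Replica> pickupReplicaSet(List<Replica> replicas) {
    Map<String, List<Replica>> rackMap = new HashMap<>();
    for (Replica r : replicas) {
      rackMap.computeIfAbsent(r.rack, k -> new ArrayList<>()).add(r);
    }
    List<Replica> moreThanOne = new ArrayList<>();
    for (List<Replica> sameRack : rackMap.values()) {
      if (sameRack.size() > 1) {
        moreThanOne.addAll(sameRack);
      }
    }
    return moreThanOne;
  }

  public static void main(String[] args) {
    List<Replica> replicas = List.of(
        new Replica("r1", StorageType.SSD),
        new Replica("r2", StorageType.DISK),
        new Replica("r3", StorageType.DISK),
        new Replica("r3", StorageType.DISK));
    // Prints only the two DISK(r3) replicas: the SSD on r1 is never offered as
    // a candidate, even though HOT makes SSD the excess storage type.
    System.out.println(pickupReplicaSet(replicas));
  }
}
{code}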



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)
