[ 
https://issues.apache.org/jira/browse/HDFS-9485?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jing Zhao updated HDFS-9485:
----------------------------
       Resolution: Fixed
     Hadoop Flags: Reviewed
    Fix Version/s: 2.8.0
           Status: Resolved  (was: Patch Available)

I've committed this to trunk, branch-2 and branch-2.8. Thanks [~liuml07] for 
the contribution!

> Make BlockManager#removeFromExcessReplicateMap accept BlockInfo instead of 
> Block
> --------------------------------------------------------------------------------
>
>                 Key: HDFS-9485
>                 URL: https://issues.apache.org/jira/browse/HDFS-9485
>             Project: Hadoop HDFS
>          Issue Type: Improvement
>          Components: namenode
>            Reporter: Mingliang Liu
>            Assignee: Mingliang Liu
>            Priority: Minor
>             Fix For: 2.8.0
>
>         Attachments: HDFS-9485.000.patch
>
>
> The {{BlockManager#removeFromExcessReplicateMap()}} method accepts a 
> {{Block}} that is to be removed from {{excessReplicateMap}}. However, 
> {{excessReplicateMap}} maps a StorageID to the set of {{BlockInfo}} objects 
> that are "extra" for the DataNode with that StorageID. Removing a sub-class 
> object from a collection when given only a base-class object happens to work 
> here.
> Alternatively, we can make {{removeFromExcessReplicateMap}} accept a 
> {{BlockInfo}} object. Since the current call sites mostly pass {{BlockInfo}} 
> objects, the code change should be safe.
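
Why removal via a base-class object "happens to work": {{java.util.Set#remove}} takes an {{Object}} and matches by {{equals()}}/{{hashCode()}} rather than by static type. A minimal sketch, using hypothetical, simplified stand-ins for Hadoop's {{Block}} and {{BlockInfo}} (the real classes differ; this only assumes equality is defined on the block ID):

```java
import java.util.HashSet;
import java.util.Set;

// Simplified stand-in: equality is based on the block ID only.
class Block {
    final long blockId;
    Block(long blockId) { this.blockId = blockId; }
    @Override public boolean equals(Object o) {
        return o instanceof Block && ((Block) o).blockId == blockId;
    }
    @Override public int hashCode() { return Long.hashCode(blockId); }
}

// Sub-class inheriting Block's equals()/hashCode().
class BlockInfo extends Block {
    BlockInfo(long blockId) { super(blockId); }
}

public class ExcessRemovalSketch {
    public static void main(String[] args) {
        Set<BlockInfo> excess = new HashSet<>();
        excess.add(new BlockInfo(42L));
        // Set.remove(Object) consults only equals()/hashCode(), so a plain
        // Block with the same ID removes the stored BlockInfo.
        boolean removed = excess.remove(new Block(42L));
        System.out.println(removed);           // true
        System.out.println(excess.isEmpty());  // true
    }
}
```

Narrowing the parameter to {{BlockInfo}} trades this accidental compatibility for a compile-time guarantee that callers pass the type the map actually stores.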



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)