[ https://issues.apache.org/jira/browse/HADOOP-1300?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Hairong Kuang updated HADOOP-1300:
----------------------------------

    Attachment: excessDel.patch

This patch deletes excess replicas in the following priority order (see the sketch below):
1. Keep as many racks as possible.
2. Among the remaining candidates, delete the replica on the datanode with the least free space.
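For illustration only, here is a minimal Java sketch of that selection policy. The DatanodeInfo class, its fields, and the chooseReplicaToDelete method are simplified stand-ins invented for this example, not the actual DFS classes or signatures touched by excessDel.patch.

// Hedged sketch of the replica-deletion order described above; types are
// simplified stand-ins, not Hadoop's real DFS classes.
import java.util.*;

class DatanodeInfo {
    final String name;
    final String rack;       // network location (rack) of the node
    final long remaining;    // free space in bytes

    DatanodeInfo(String name, String rack, long remaining) {
        this.name = name;
        this.rack = rack;
        this.remaining = remaining;
    }
}

class ExcessReplicaChooser {
    /**
     * Pick one replica to delete from the current replica set.
     * Priority 1: only consider nodes on racks holding more than one
     *             replica, so the deletion never reduces the rack count.
     * Priority 2: among those candidates, delete the replica on the node
     *             with the least free space.
     */
    static DatanodeInfo chooseReplicaToDelete(List<DatanodeInfo> replicas) {
        // Count replicas per rack.
        Map<String, Integer> rackCount = new HashMap<>();
        for (DatanodeInfo d : replicas) {
            rackCount.merge(d.rack, 1, Integer::sum);
        }

        // Candidates: nodes whose rack stays represented after deletion.
        List<DatanodeInfo> candidates = new ArrayList<>();
        for (DatanodeInfo d : replicas) {
            if (rackCount.get(d.rack) > 1) {
                candidates.add(d);
            }
        }
        // If every rack holds exactly one replica, the rack count must
        // shrink no matter what; fall back to all replicas.
        if (candidates.isEmpty()) {
            candidates = replicas;
        }

        // Among the candidates, pick the node with the least free space.
        return Collections.min(candidates,
                Comparator.comparingLong((DatanodeInfo d) -> d.remaining));
    }

    public static void main(String[] args) {
        // Example: the scenario in this issue, 3 replicas on one rack
        // and 1 replica on another.
        List<DatanodeInfo> replicas = Arrays.asList(
                new DatanodeInfo("dn1", "/rackA", 10L << 30),
                new DatanodeInfo("dn2", "/rackA", 5L << 30),
                new DatanodeInfo("dn3", "/rackA", 20L << 30),
                new DatanodeInfo("dn4", "/rackB", 1L << 30));
        // Prints dn2: the least-free node on the over-represented rack.
        System.out.println(chooseReplicaToDelete(replicas).name);
    }
}

In the failure scenario reported below, this ordering only ever removes replicas from the over-represented rack, so the lone replica on the second rack survives and the block remains readable when one rack goes down.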

> deletion of excess replicas does not take into account 'rack-locality'
> ----------------------------------------------------------------------
>
>                 Key: HADOOP-1300
>                 URL: https://issues.apache.org/jira/browse/HADOOP-1300
>             Project: Hadoop
>          Issue Type: Bug
>          Components: dfs
>            Reporter: Koji Noguchi
>            Assignee: Hairong Kuang
>         Attachments: excessDel.patch
>
>
> One rack went down today, resulting in one missing block/file.
> Looking at the log, this block was originally over-replicated:
> 3 replicas on one rack and 1 replica on another.
> The namenode decided to delete the latter, leaving 3 replicas on the same rack.
> It would be nice if the deletion were also rack-aware.

-- 
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.

Reply via email to