[ https://issues.apache.org/jira/browse/HADOOP-4910?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Hairong Kuang updated HADOOP-4910:
----------------------------------

    Attachment: overReplicated.patch

I am still working on a JUnit test case, but am attaching the fix first.

Yes, this bug affects 0.17 as well as later releases.
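For reference, a minimal sketch of the exclusion logic the patch is aiming for, using hypothetical stand-ins for the real FSNamesystem structures (BlocksMap, excessReplicateMap, CorruptReplicasMap); this is illustrative, not the actual Hadoop Core code:

    import java.util.ArrayList;
    import java.util.Collection;
    import java.util.List;

    class OverReplicationSketch {
      // Hypothetical view of one replica of a block; the real code walks
      // DatanodeDescriptors and consults the namesystem maps instead.
      interface Replica {
        boolean inExcessReplicateMap(); // already scheduled for deletion
        boolean isDecommissioned();     // lives on a decommissioned node
        boolean isCorrupt();            // recorded as a corrupt replica
      }

      // Collect the replicas eligible for excess-replica deletion.
      // The fix adds the isCorrupt() check, so a corrupt copy is never
      // counted as one of the valid replicas when choosing what to delete.
      static List<Replica> candidatesForDeletion(Collection<Replica> replicas) {
        List<Replica> nonExcess = new ArrayList<Replica>();
        for (Replica r : replicas) {
          if (r.inExcessReplicateMap() || r.isDecommissioned() || r.isCorrupt()) {
            continue; // excluded: not a valid, deletable copy
          }
          nonExcess.add(r);
        }
        return nonExcess;
      }
    }

Without the isCorrupt() exclusion, a corrupt replica inflates the count of valid copies, so the NameNode may delete a good replica and be left with only corrupt data.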

> NameNode should exclude corrupt replicas when choosing excessive replicas to 
> delete
> -----------------------------------------------------------------------------------
>
>                 Key: HADOOP-4910
>                 URL: https://issues.apache.org/jira/browse/HADOOP-4910
>             Project: Hadoop Core
>          Issue Type: Bug
>          Components: dfs
>    Affects Versions: 0.17.0
>            Reporter: Hairong Kuang
>         Attachments: overReplicated.patch
>
>
> Currently, when the NameNode handles an over-replicated block in 
> FSNamesystem#processOverReplicatedBlock, it excludes replicas already in 
> excessReplicateMap and decommissioned ones, but it treats a corrupt replica as a 
> valid one. This may lead to the unnecessary deletion of more replicas and thus 
> cause data loss. It should exclude corrupt replicas as well.

-- 
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.