[ 
https://issues.apache.org/jira/browse/HDFS-1300?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Dmytro Molkov updated HDFS-1300:
--------------------------------

    Status: Patch Available  (was: Open)

Submitting for Hudson.

> Decommissioning nodes does not increase replication priority
> ------------------------------------------------------------
>
>                 Key: HDFS-1300
>                 URL: https://issues.apache.org/jira/browse/HDFS-1300
>             Project: Hadoop HDFS
>          Issue Type: Bug
>    Affects Versions: 0.21.0, 0.20.2, 0.20.1, 0.20-append, 0.20.3, 0.22.0
>            Reporter: Dmytro Molkov
>            Assignee: Dmytro Molkov
>             Fix For: 0.22.0
>
>         Attachments: HDFS-1300.2.patch, HDFS-1300.3.patch, HDFS-1300.patch
>
>
> Currently, when you decommission a node, each block is inserted into 
> neededReplications only if it is not already there. As a result, a block 
> can sit in a low-priority queue even when all of its replicas are on the 
> nodes being decommissioned.
> The common use case for decommissioning nodes, for us, is to proactively 
> exclude them before they go bad, so it would be great to get the blocks at 
> risk onto live datanodes as quickly as possible.
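
The queue behavior described above can be sketched as follows. This is an illustrative model, not actual HDFS code: the class `BlockQueue` and its method names are hypothetical stand-ins for the `neededReplications` structure, assuming priority 0 is the most urgent level.

```java
import java.util.*;

// Hypothetical simplified model of a prioritized replication queue
// (stand-in for HDFS's neededReplications; not the real implementation).
public class BlockQueue {
    // One set of block IDs per priority level; level 0 is most urgent.
    private final List<Set<String>> levels = new ArrayList<>();
    private final Map<String, Integer> current = new HashMap<>();

    public BlockQueue(int numLevels) {
        for (int i = 0; i < numLevels; i++) levels.add(new HashSet<>());
    }

    // Behavior described in the issue: insert only if absent, so a block
    // already queued at low priority never moves to a higher level.
    public void addIfAbsent(String block, int priority) {
        if (!current.containsKey(block)) {
            levels.get(priority).add(block);
            current.put(block, priority);
        }
    }

    // Intended behavior: re-queue the block at its new priority, removing
    // it from the old level first so its urgency is reflected.
    public void update(String block, int newPriority) {
        Integer old = current.get(block);
        if (old != null) levels.get(old).remove(block);
        levels.get(newPriority).add(block);
        current.put(block, newPriority);
    }

    public int priorityOf(String block) {
        return current.getOrDefault(block, -1);
    }

    public static void main(String[] args) {
        BlockQueue q = new BlockQueue(3);
        q.addIfAbsent("blk_1", 2);   // queued at low priority
        q.addIfAbsent("blk_1", 0);   // decommission: no effect (the bug)
        System.out.println(q.priorityOf("blk_1"));
        q.update("blk_1", 0);        // re-queue at the urgent level
        System.out.println(q.priorityOf("blk_1"));
    }
}
```

With insert-if-absent, the block stays at priority 2 even after all of its replicas land on decommissioning nodes; re-queuing moves it to level 0 so it is replicated first.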

-- 
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.
