[ 
https://issues.apache.org/jira/browse/HDFS-15001?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Yang Yun updated HDFS-15001:
----------------------------
    Attachment: HDFS-15001.patch
        Status: Patch Available  (was: Open)

> Automatically add back the failed volume if it gets better again
> ----------------------------------------------------------------
>
>                 Key: HDFS-15001
>                 URL: https://issues.apache.org/jira/browse/HDFS-15001
>             Project: Hadoop HDFS
>          Issue Type: Improvement
>          Components: datanode
>            Reporter: Yang Yun
>            Assignee: Yang Yun
>            Priority: Major
>         Attachments: HDFS-15001.patch
>
>
> If a volume is regarded as failed for any reason and removed from the 
> datanode, we currently have to restart the datanode to add it back.
> This patch adds a background check for failed volumes and adds them back 
> automatically if they become healthy again.
> It reuses the refreshVolumes logic, because the old code path cannot clean 
> up everything related to the failed volume.
> It also adds a new method, checkFailedVolume, to DatasetVolumeChecker to 
> schedule a task that checks a failed volume repeatedly.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

---------------------------------------------------------------------
To unsubscribe, e-mail: [email protected]
For additional commands, e-mail: [email protected]