[
https://issues.apache.org/jira/browse/HADOOP-4103?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
]
Raghu Angadi updated HADOOP-4103:
---------------------------------
Resolution: Fixed
Release Note: Modified dfsadmin -report to report under replicated blocks,
blocks with corrupt replicas, and missing blocks. (was: Modified dfsadmin
-report to count under replicated blocks, blocks with corrupt replicas, and
missing blocks.)
Hadoop Flags: [Incompatible change, Reviewed] (was: [Reviewed,
Incompatible change])
Status: Resolved (was: Patch Available)
I just committed this.
> Alert for missing blocks
> ------------------------
>
> Key: HADOOP-4103
> URL: https://issues.apache.org/jira/browse/HADOOP-4103
> Project: Hadoop Core
> Issue Type: New Feature
> Components: dfs
> Affects Versions: 0.17.2
> Reporter: Christian Kunz
> Assignee: Raghu Angadi
> Fix For: 0.20.0
>
> Attachments: HADOOP-4103-branch-20.patch, HADOOP-4103.patch,
> HADOOP-4103.patch, HADOOP-4103.patch, HADOOP-4103.patch
>
>
> A large number of datanodes were marked dead because of network problems
> that caused heartbeat timeouts, even though the datanodes themselves were fine.
> Many processes then started to fail because the filesystem appeared corrupted.
> In order to catch and diagnose such problems faster, the namenode should
> detect the corruption automatically and provide a way to alert operations. At
> a minimum, it should show the fact of corruption on the GUI.
--
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.