[ https://issues.apache.org/jira/browse/HADOOP-5133?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=12679419#action_12679419 ]
Hairong Kuang commented on HADOOP-5133:
---------------------------------------
If we mark all replicas as corrupt, two questions remain to be answered:
1. What should the length of the block be: the longer one or the shorter one?
2. Should the file remain open, or could we close it?
> FSNameSystem#addStoredBlock does not handle inconsistent block length
> correctly
> -------------------------------------------------------------------------------
>
> Key: HADOOP-5133
> URL: https://issues.apache.org/jira/browse/HADOOP-5133
> Project: Hadoop Core
> Issue Type: Bug
> Components: dfs
> Affects Versions: 0.18.2
> Reporter: Hairong Kuang
> Assignee: Hairong Kuang
> Fix For: 0.20.0
>
> Attachments: inconsistentLen.patch, inconsistentLen1.patch,
> inconsistentLen2.patch
>
>
> Currently the NameNode treats either the new replica or the existing replicas
> as corrupt if the new replica's length is inconsistent with the NN-recorded
> block length. The correct behavior should be:
> 1. For a block that is not under construction, the new replica should be
> marked as corrupt if its length is inconsistent (whether shorter or longer)
> with the NN-recorded block length.
> 2. For an under-construction block, if the new replica's length is shorter
> than the NN-recorded block length, the new replica can be marked as corrupt;
> if the new replica's length is longer, the NN should update its recorded
> block length, but it should not mark the existing replicas as corrupt. This
> is because the NN-recorded length of an under-construction block does not
> accurately match the block length on datanode disk, so the NN should not
> judge an under-construction replica to be corrupt based on that inaccurate
> recorded length. (A sketch of this decision logic follows below.)
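
A minimal sketch of the decision logic described above, in Java. The helper
names (isUnderConstruction, markReplicaCorrupt) are illustrative placeholders,
not the actual FSNamesystem API:

    // Called when a datanode reports a replica whose length may differ
    // from what the NameNode has recorded for the block.
    void checkReplicaLength(Block storedBlock, long reportedLen,
                            DatanodeDescriptor node) {
      long recordedLen = storedBlock.getNumBytes();
      if (reportedLen == recordedLen) {
        return; // lengths agree, nothing to do
      }
      if (!isUnderConstruction(storedBlock)) {
        // Case 1: finalized block. The new replica is corrupt whether
        // it is shorter or longer than the recorded length.
        markReplicaCorrupt(storedBlock, node);
      } else if (reportedLen < recordedLen) {
        // Case 2a: under-construction block, shorter replica: corrupt.
        markReplicaCorrupt(storedBlock, node);
      } else {
        // Case 2b: under-construction block, longer replica. The recorded
        // length is only a lower bound while the block is being written,
        // so update it and do NOT mark existing replicas as corrupt.
        storedBlock.setNumBytes(reportedLen);
      }
    }

The asymmetry in case 2 is the point of the fix: for an under-construction
block the NN's recorded length lags the datanodes, so a longer report is
expected progress, while a shorter report still indicates a bad replica.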