[ https://issues.apache.org/jira/browse/HADOOP-4663?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=12668204#action_12668204 ]
dhruba borthakur commented on HADOOP-4663:
------------------------------------------
Hairong's proposal:
The DataNode leaves the blocks under tmp untouched at startup time; instead, it
leaves those blocks for the lease recovery process to promote. When a DataNode
starts up, it reads the blocks under tmp and puts them into the OngoingCreates
data structure. A block report does not contain these blocks, but that is OK:
the NameNode's block report processing always ignores the last block of a file
under construction.
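
For illustration, a minimal sketch (in Java) of the startup handling the
proposal describes. The class TmpBlockScanner, the map field, and the helper
parseBlockId are hypothetical stand-ins, not the actual FSDataset code; the
real in-memory structure for in-flight blocks may differ in shape.

{code}
import java.io.File;
import java.util.HashMap;
import java.util.Map;

class TmpBlockScanner {
  // blockId -> block file; stands in for the OngoingCreates
  // structure mentioned in the proposal
  private final Map<Long, File> ongoingCreates = new HashMap<Long, File>();

  // On startup: leave files under tmp on disk and only record them in
  // memory, so lease recovery can later promote or discard them.
  void loadTmpBlocks(File tmpDir) {
    File[] files = tmpDir.listFiles();
    if (files == null) {
      return; // tmp directory missing or empty
    }
    for (File f : files) {
      Long blockId = parseBlockId(f.getName());
      if (blockId != null) {
        ongoingCreates.put(blockId, f); // do NOT move into current/
      }
    }
  }

  // Block reports are built from finalized blocks only, so blocks
  // tracked here are simply never reported to the NameNode.
  boolean isOngoing(long blockId) {
    return ongoingCreates.containsKey(blockId);
  }

  // Block data files are named blk_<id>; skip meta files and anything
  // that does not parse as a numeric id.
  private Long parseBlockId(String name) {
    if (!name.startsWith("blk_") || name.endsWith(".meta")) {
      return null;
    }
    String rest = name.substring(4);
    int end = 0;
    while (end < rest.length()
        && (Character.isDigit(rest.charAt(end)) || rest.charAt(end) == '-')) {
      end++;
    }
    if (end == 0) {
      return null;
    }
    try {
      return Long.valueOf(rest.substring(0, end));
    } catch (NumberFormatException e) {
      return null;
    }
  }
}
{code}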
> Datanode should delete files under tmp when upgraded from 0.17
> --------------------------------------------------------------
>
> Key: HADOOP-4663
> URL: https://issues.apache.org/jira/browse/HADOOP-4663
> Project: Hadoop Core
> Issue Type: Bug
> Components: dfs
> Affects Versions: 0.18.0
> Reporter: Raghu Angadi
> Assignee: dhruba borthakur
> Priority: Blocker
> Fix For: 0.19.1
>
> Attachments: deleteTmp.patch, deleteTmp2.patch, deleteTmp_0.18.patch,
> handleTmp1.patch
>
>
> Before 0.18, when the Datanode restarts, it deletes files under the
> data-dir/tmp directory, since these files are no longer valid. But in 0.18 it
> moves these files to the normal directory, incorrectly making them valid
> blocks. One of the following would work (see the sketch at the end of this
> message):
> - remove the tmp files during upgrade, or
> - if the files under /tmp are in pre-0.18 format (i.e. no generation stamp),
> delete them.
> Currently the effect of this bug is that these files end up failing block
> verification and eventually get deleted, but they cause incorrect
> over-replication at the namenode before that.
> Also, it looks like our policy regarding the treatment of files under tmp
> needs to be defined better. Right now there are probably one or two more bugs
> with it. Dhruba, please file them if you remember.
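
A minimal sketch of the second option in the quoted description, assuming the
0.18 convention that a block's metadata file name carries a generation stamp
(blk_<id>_<genStamp>.meta) while pre-0.18 meta files do not. TmpUpgradeCleaner
and deletePre18TmpBlocks are hypothetical names; this is not any of the
attached patches.

{code}
import java.io.File;
import java.util.regex.Pattern;

class TmpUpgradeCleaner {
  // Matches 0.18-style meta files: blk_<id>_<genStamp>.meta
  private static final Pattern WITH_GENSTAMP =
      Pattern.compile("blk_(-?\\d+)_(\\d+)\\.meta");

  // Delete every block under tmp whose meta file lacks a generation
  // stamp, i.e. every block left behind by a pre-0.18 DataNode.
  static void deletePre18TmpBlocks(File tmpDir) {
    File[] files = tmpDir.listFiles();
    if (files == null) {
      return; // tmp directory missing or empty
    }
    for (File f : files) {
      String name = f.getName();
      if (name.endsWith(".meta") && !WITH_GENSTAMP.matcher(name).matches()) {
        // pre-0.18 meta file: remove it along with its block file
        String blockName =
            name.substring(0, name.length() - ".meta".length());
        new File(tmpDir, blockName).delete();
        f.delete();
      }
    }
  }
}
{code}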