[
https://issues.apache.org/jira/browse/HADOOP-4663?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=12661802#action_12661802
]
Konstantin Shvachko commented on HADOOP-4663:
---------------------------------------------
If you don't throw an exception from fsync() then there is no API change. In
this case fsync() will work as long as data-nodes/clients don't fail; it's just
that some synced data may not survive cluster restarts. So the HBase people will
be able to write their programs with fsync() now, but it will only be guaranteed
to work once they upgrade to newer versions where this issue is fixed.
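For illustration, a minimal sketch of what such a program might look like
against the 0.18-era client API (the file path and record contents here are
made up, not from the issue):

    import java.io.IOException;
    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.fs.FSDataOutputStream;
    import org.apache.hadoop.fs.FileSystem;
    import org.apache.hadoop.fs.Path;

    public class SyncSketch {
      public static void main(String[] args) throws IOException {
        Configuration conf = new Configuration();
        FileSystem fs = FileSystem.get(conf);
        // hypothetical write-ahead-log path
        FSDataOutputStream out = fs.create(new Path("/hbase/walog"));
        out.writeBytes("log record 1\n");
        out.sync();   // flushes to the data-nodes; throws no exception, so no API change
        out.writeBytes("log record 2\n");
        out.sync();
        out.close();
        fs.close();
      }
    }

Per the comment above, each sync() succeeds as long as data-nodes/clients don't
fail, but the synced data is not guaranteed to survive a cluster restart until
this issue is fixed.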
> Datanode should delete files under tmp when upgraded from 0.17
> --------------------------------------------------------------
>
> Key: HADOOP-4663
> URL: https://issues.apache.org/jira/browse/HADOOP-4663
> Project: Hadoop Core
> Issue Type: Bug
> Components: dfs
> Affects Versions: 0.18.0
> Reporter: Raghu Angadi
> Assignee: dhruba borthakur
> Priority: Blocker
> Fix For: 0.18.3
>
> Attachments: deleteTmp.patch, deleteTmp2.patch, deleteTmp_0.18.patch
>
>
> Before 0.18, when the Datanode restarts, it deletes the files under the
> data-dir/tmp directory, since these files are no longer valid. But in 0.18 it
> incorrectly moves these files to the normal directory, making them valid
> blocks. One of the following would work:
> - remove the tmp files during upgrade, or
> - if the files under tmp are in the pre-0.18 format (i.e. no generation
> stamp), delete them (a sketch of this check follows the quoted description).
> Currently the effect of this bug is that these files end up failing block
> verification and eventually get deleted, but before that they cause incorrect
> over-replication at the namenode.
> It also looks like our policy regarding the treatment of files under tmp
> needs to be defined better. Right now there are probably one or two more bugs
> with it. Dhruba, please file them if you remember.
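For illustration, a minimal, hypothetical sketch of the second option above. It
assumes the pre-0.18 convention of "blk_<id>.meta" metadata file names without
the generation stamp that 0.18 appends ("blk_<id>_<genstamp>.meta"); the actual
upgrade code in the attached patches may differ:

    import java.io.File;
    import java.util.regex.Pattern;

    public class DeleteTmpSketch {
      // Matches 0.18-style meta names: blk_<id>_<genstamp>.meta
      private static final Pattern WITH_GENSTAMP =
          Pattern.compile("blk_-?\\d+_\\d+\\.meta");

      // Delete pre-0.18-format block files found under data-dir/tmp.
      static void deletePre18TmpFiles(File tmpDir) {
        File[] files = tmpDir.listFiles();
        if (files == null) {
          return;
        }
        for (File f : files) {
          String name = f.getName();
          // A .meta name with no generation stamp marks a pre-0.18 block.
          if (name.endsWith(".meta") && !WITH_GENSTAMP.matcher(name).matches()) {
            String blockName = name.substring(0, name.length() - ".meta".length());
            new File(tmpDir, blockName).delete();  // companion block file
            f.delete();                            // the meta file itself
          }
        }
      }
    }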