[
https://issues.apache.org/jira/browse/HADOOP-5744?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=12710541#action_12710541
]
stack commented on HADOOP-5744:
-------------------------------
@Hairong
bq. My question is why the file needs to be closed before it is read.
It doesn't have to be closed as long as the reader is able to go all the way
up to the last sync made by the writer before the crash.
bq. Is it OK for your client to trigger the close of the file but not wait
for it to close?
Yes, as long as the reader is able to go all the way up to the last sync
made....
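The guarantee being discussed -- a reader must see everything up to the writer's last sync even though the file was never closed -- can be sketched with a local-filesystem analogy (an assumption for illustration only, not HDFS: `os.fsync()` stands in for the DFS writer's sync(), and the file path is made up):

```python
import os
import tempfile

# Hypothetical local-file analogy of the DFS sync guarantee:
# bytes written and synced by a still-open writer must be visible
# to an independent reader, with no close() required.
path = os.path.join(tempfile.gettempdir(), "append_demo.log")

writer = open(path, "wb")
writer.write(b"record-1\n")
writer.flush()
os.fsync(writer.fileno())  # sync point: record-1 must now be readable

# An independent reader opens the still-open file and sees at least
# everything up to the writer's last sync.
with open(path, "rb") as reader:
    visible = reader.read()

assert visible.startswith(b"record-1\n")
writer.close()
os.remove(path)
```

On a local filesystem this always holds once the write reaches the OS; the open question in this issue is exactly when the equivalent holds in DFS after a writer crash.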
bq. (1) may have more bytes than when it was previously read. This is a normal
case. Will this be an issue for hbase?
How would this circumstance arise?
bq. Note that the current implementation in 0.20 does not provide the second
guarantee described above.
In my testing of HADOOP-4379, I've only been killing the writer application. I
should also try killing the writer application AND the local datanode.
> Revisit append
> --------------
>
> Key: HADOOP-5744
> URL: https://issues.apache.org/jira/browse/HADOOP-5744
> Project: Hadoop Core
> Issue Type: Improvement
> Components: dfs
> Affects Versions: 0.20.0
> Reporter: Hairong Kuang
> Assignee: Hairong Kuang
> Fix For: 0.21.0
>
> Attachments: AppendSpec.pdf
>
>
> HADOOP-1700 and related issues have put a lot of effort into providing the
> first implementation of append. However, append is a complex feature. It
> turns out that there are issues that initially seemed trivial but need a
> careful design. This jira revisits append, aiming for a design and
> implementation supporting semantics that are acceptable to its users.