[ https://issues.apache.org/jira/browse/HADOOP-5744?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=12710542#action_12710542 ]

Hairong Kuang commented on HADOOP-5744:
---------------------------------------

Note that this jira no longer uses "sync"; instead we use hflush.

The spec posted in this jira aims at API 3. 

> another process knows that the writer has crashed and needs to be able to 
> read all the data up to the last sync()

As I said, this spec guarantees that another process can read all the data up 
to the last flush without the need to close the file. We also guarantee that 
the flushed data will not be removed as a result of lease recovery when the 
file is finally closed.
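To illustrate the visibility guarantee above, here is a minimal sketch of the writer/reader pattern. It uses plain java.io streams on the local filesystem as a stand-in for HDFS (standing up a cluster is out of scope here); in real HDFS code the writer would be an FSDataOutputStream and the flush call would be hflush(). The class and method names are hypothetical, for illustration only.

```java
import java.io.*;
import java.nio.file.*;

public class HflushDemo {
    // Returns what a concurrent reader sees after the writer's flush
    // but before the writer's close -- the guarantee discussed above.
    static String readAfterFlush() throws IOException {
        Path p = Files.createTempFile("hflush-demo", ".txt");
        try (FileOutputStream out = new FileOutputStream(p.toFile())) {
            out.write("data up to last flush".getBytes("UTF-8"));
            // Local-filesystem stand-in for FSDataOutputStream.hflush():
            // push the buffered bytes out so other processes can see them.
            out.flush();

            // A second stream (standing in for another process) opens the
            // still-open, still-unclosed file and sees all flushed bytes.
            return new String(Files.readAllBytes(p), "UTF-8");
        } finally {
            Files.delete(p);
        }
    }

    public static void main(String[] args) throws IOException {
        System.out.println(readAfterFlush());
    }
}
```

The point of the sketch is only the ordering: the reader observes everything written up to the flush, with no close() in between.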

> recover the lease (immediately)

If another process can read all the flushed data without the file being closed, 
do you still need to recover the lease?

> this is what HADOOP-4379 is trying to address with Doug Cutting's comment.

I might be wrong, but I do not think HADOOP-4379 addresses the problem that I 
raised. HADOOP-4379 tries to make flushed data visible to a new reader.


> Revisit append
> --------------
>
>                 Key: HADOOP-5744
>                 URL: https://issues.apache.org/jira/browse/HADOOP-5744
>             Project: Hadoop Core
>          Issue Type: Improvement
>          Components: dfs
>    Affects Versions: 0.20.0
>            Reporter: Hairong Kuang
>            Assignee: Hairong Kuang
>             Fix For: 0.21.0
>
>         Attachments: AppendSpec.pdf
>
>
> HADOOP-1700 and related issues have put a lot of effort into providing the 
> first implementation of append. However, append is a complex feature, and it 
> turns out that some issues that initially seemed trivial need a careful 
> design. This jira revisits append, aiming for a design and implementation 
> that support semantics acceptable to its users.

-- 
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.