[ https://issues.apache.org/jira/browse/FLUME-2922?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15375732#comment-15375732 ]

Hari Shreedharan commented on FLUME-2922:
-----------------------------------------

No issues at all [~mpercy]. I have not had the time recently to do reviews. 
[This|https://github.com/apache/spark/blob/master/dev/merge_spark_pr.py] is the 
script that Spark uses to merge PRs, and 
[this|https://github.com/apache/spark/blob/master/dev/github_jira_sync.py] is 
the one that links PRs to JIRA.

> HDFSSequenceFile Should Sync Writer
> -----------------------------------
>
>                 Key: FLUME-2922
>                 URL: https://issues.apache.org/jira/browse/FLUME-2922
>             Project: Flume
>          Issue Type: Bug
>          Components: Sinks+Sources
>    Affects Versions: v1.6.0
>            Reporter: Kevin Conaway
>            Priority: Critical
>         Attachments: FLUME-2922.patch
>
>
> There is a possibility of losing data with the current HDFS sequence file 
> writer.
> Internally, the `SequenceFile.Writer` buffers data and periodically syncs it 
> to the underlying output stream.  The exact mechanism depends on whether 
> compression is in use, but in both scenarios the key/values are appended to 
> an internal buffer and only flushed to disk after the buffer reaches a 
> certain size.
> Thus it is quite possible for Flume to lose messages if the agent crashes, 
> or is stopped, before the internal buffer is flushed to disk.
> The correct action is to force the writer to sync its internal buffers to 
> the underlying `FSDataOutputStream` first, before calling hflush/sync.
> Additionally, I believe we should be calling hsync instead of hflush.  It's 
> my understanding that writes with hsync should be more durable, which I 
> believe are the semantics we want here.
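
For illustration, here is a minimal sketch of the pattern described above. This 
is not the Flume patch itself: it assumes Hadoop 2.x, where 
`SequenceFile.Writer` implements `org.apache.hadoop.fs.Syncable` 
(hflush/hsync), and the path and key/value types are arbitrary placeholders 
rather than anything taken from the Flume sink.

{code:java}
import java.nio.charset.StandardCharsets;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FSDataOutputStream;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.io.BytesWritable;
import org.apache.hadoop.io.LongWritable;
import org.apache.hadoop.io.SequenceFile;

public class SequenceFileSyncSketch {
  public static void main(String[] args) throws Exception {
    Configuration conf = new Configuration();
    FileSystem fs = FileSystem.get(conf);
    // Hypothetical path, for illustration only.
    Path path = new Path("/tmp/flume-2922-demo.seq");

    FSDataOutputStream outStream = fs.create(path);
    SequenceFile.Writer writer = SequenceFile.createWriter(conf,
        SequenceFile.Writer.stream(outStream),
        SequenceFile.Writer.keyClass(LongWritable.class),
        SequenceFile.Writer.valueClass(BytesWritable.class));

    writer.append(new LongWritable(1L),
        new BytesWritable("event body".getBytes(StandardCharsets.UTF_8)));

    // Buggy pattern per the report above: flushing the raw stream bypasses
    // the writer's internal buffer, so records still buffered by the writer
    // are not persisted and can be lost if the agent crashes here.
    // outStream.hflush();

    // Safer pattern: sync through the writer so its buffered records reach
    // the FSDataOutputStream before the durability call. hsync asks the
    // DataNodes to persist the data to disk, whereas hflush only guarantees
    // visibility to new readers.
    writer.hsync();

    writer.close();
  }
}
{code}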


