[ https://issues.apache.org/jira/browse/HDFS-1539?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=12973066#action_12973066 ]

dhruba borthakur commented on HDFS-1539:
----------------------------------------

@Allen: Thanks for your comments. I have kept the default behaviour as it is 
now, mainly because I do not want any existing installations to see a 
performance regression when they run with this patch. (Some customer sites 
may have enough redundant power supplies that they never need to turn this 
feature on.)
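
For what it is worth, here is a minimal sketch of how such a cluster-wide 
switch might be set programmatically on a Configuration (in practice it would 
live in hdfs-site.xml on the datanodes). The property name is only suggested 
by the name of the attached syncOnClose1.txt patch and should be treated as 
an assumption, not a confirmed key; the default stays off.

  import org.apache.hadoop.conf.Configuration;

  public class SyncOnCloseConfigSketch {
      public static void main(String[] args) {
          // Assumed property name; the default remains false so existing
          // installations see no performance change unless they opt in.
          Configuration conf = new Configuration();
          conf.setBoolean("dfs.datanode.synconclose", true);
          System.out.println("sync-on-close enabled: "
                  + conf.getBoolean("dfs.datanode.synconclose", false));
      }
  }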

> prevent data loss when a cluster suffers a power loss
> -----------------------------------------------------
>
>                 Key: HDFS-1539
>                 URL: https://issues.apache.org/jira/browse/HDFS-1539
>             Project: Hadoop HDFS
>          Issue Type: Improvement
>          Components: data-node, hdfs client, name-node
>            Reporter: dhruba borthakur
>            Assignee: dhruba borthakur
>         Attachments: syncOnClose1.txt
>
>
> We have seen an instance where an external outage caused many datanodes 
> to reboot at around the same time. This resulted in many corrupted 
> blocks, all of them recently written: the current HDFS datanode 
> implementation does not sync the data of a block file when the block is 
> closed. The proposal is to:
> 1. Add a cluster-wide config setting that causes the datanode to sync a 
> block file when the block is finalized.
> 2. Introduce a new parameter to FileSystem.create() that triggers the 
> new behaviour, i.e. causes the datanode to sync a block file when it is 
> finalized.
> 3. Implement FSDataOutputStream.hsync() to cause all data written to the 
> specified file to be written to stable storage (see the sketch below).

-- 
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.