[ https://issues.apache.org/jira/browse/HDFS-1539?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=12971895#action_12971895 ]

dhruba borthakur commented on HDFS-1539:
----------------------------------------

We have seen this problem on a cluster that is used purely for archival 
purposes. I propose that we implement Option 1 from the issue description 
(quoted below).

> prevent data loss when a cluster suffers a power loss
> -----------------------------------------------------
>
>                 Key: HDFS-1539
>                 URL: https://issues.apache.org/jira/browse/HDFS-1539
>             Project: Hadoop HDFS
>          Issue Type: Improvement
>          Components: data-node, hdfs client, name-node
>            Reporter: dhruba borthakur
>
> We have seen an instance where an external power outage caused many 
> datanodes to reboot at around the same time. This resulted in many 
> corrupted blocks, all of them recently written: the current Datanode 
> implementation does not sync the data of a block file when the block is 
> closed. A few options:
> 1. Have a cluster-wide config setting that causes the datanode to sync a 
> block file to disk when the block is finalized (a sketch of this appears 
> below).
> 2. Introduce a new parameter to FileSystem.create() that triggers the same 
> behaviour, i.e. causes the datanode to sync a block file when it is 
> finalized.
> 3. Implement FSDataOutputStream.hsync() to force all data written to the 
> file so far to stable storage (see the usage sketch below).
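
For illustration, here is a minimal sketch of what the sync step in option 1
might look like on the datanode side. The class, method, and the config key
dfs.datanode.synconclose are placeholders for this sketch, not the actual
FSDataset code path:

    import java.io.File;
    import java.io.IOException;
    import java.io.RandomAccessFile;

    public class FinalizeSync {
        // Placeholder config key; the real key name and default would be
        // chosen by whoever implements option 1.
        static final String SYNC_ON_CLOSE_KEY = "dfs.datanode.synconclose";

        // Force a finalized block file and its checksum (meta) file to
        // stable storage, analogous to calling fsync(2) on each file.
        static void syncBlockFiles(File blockFile, File metaFile)
                throws IOException {
            for (File f : new File[] { blockFile, metaFile }) {
                try (RandomAccessFile raf = new RandomAccessFile(f, "rw")) {
                    // force(true) flushes file data and metadata to the device
                    raf.getChannel().force(true);
                }
            }
        }
    }

Because the sync happens once, at finalization, the write path stays
buffered and only block close pays the fsync cost.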
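And a sketch of what a client call to option 3 might look like once hsync()
is implemented, assuming the standard FileSystem API; the path and payload
are made up:

    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.fs.FSDataOutputStream;
    import org.apache.hadoop.fs.FileSystem;
    import org.apache.hadoop.fs.Path;

    public class HsyncExample {
        public static void main(String[] args) throws Exception {
            FileSystem fs = FileSystem.get(new Configuration());
            try (FSDataOutputStream out =
                    fs.create(new Path("/archive/record.dat"))) {
                out.writeBytes("critical record\n");
                // Unlike hflush(), which only makes the bytes visible to
                // new readers, hsync() would not return until the bytes
                // have reached stable storage on the datanodes.
                out.hsync();
            }
        }
    }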
