[ https://issues.apache.org/jira/browse/HDFS-1539?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=12977309#action_12977309 ]
Hairong Kuang commented on HDFS-1539:
-------------------------------------

+1. The patch looks good. One minor comment: I do not think the unit test is of much use, because the bug occurs when a machine is powered off, and that is hard to simulate.

> prevent data loss when a cluster suffers a power loss
> -----------------------------------------------------
>
>                 Key: HDFS-1539
>                 URL: https://issues.apache.org/jira/browse/HDFS-1539
>             Project: Hadoop HDFS
>          Issue Type: Improvement
>          Components: data-node, hdfs client, name-node
>            Reporter: dhruba borthakur
>            Assignee: dhruba borthakur
>         Attachments: syncOnClose1.txt, syncOnClose2.txt
>
>
> We have seen an instance where an external outage caused many datanodes to reboot at around the same time. This resulted in many corrupted blocks. These were recently written blocks; the current datanode implementation does not sync the data of a block file when the block is closed. The proposal is to:
> 1. Have a cluster-wide config setting that causes the datanode to sync a block file when a block is finalized.
> 2. Introduce a new parameter to FileSystem.create() to trigger the new behaviour, i.e. cause the datanode to sync a block file when it is finalized.
> 3. Implement FSDataOutputStream.hsync() to cause all data written to the specified file to be written to stable storage (see the sketch below).
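For illustration, here is a minimal sketch of the client-side usage item 3 describes, assuming FSDataOutputStream exposes the proposed hsync() method; the path and payload are hypothetical, and this is not the patch's code:

{code:java}
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FSDataOutputStream;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

public class HsyncSketch {
  public static void main(String[] args) throws Exception {
    Configuration conf = new Configuration();
    FileSystem fs = FileSystem.get(conf);

    // Hypothetical example path; any HDFS path would do.
    Path file = new Path("/tmp/hsync-demo");

    FSDataOutputStream out = fs.create(file);
    try {
      out.write("must survive a power loss".getBytes("UTF-8"));
      // Proposed call: flush client-side buffers and have each datanode in
      // the write pipeline sync the block data to stable storage, so the
      // bytes survive a cluster-wide power loss.
      out.hsync();
    } finally {
      out.close();
    }
  }
}
{code}

Item 1 would give operators the same durability without per-client changes; judging by their names, the attached syncOnClose patches add such a datanode-side sync-on-finalize setting.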