[ https://issues.apache.org/jira/browse/HDFS-1539?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=12975393#action_12975393 ]
Hadoop QA commented on HDFS-1539:
---------------------------------

-1 overall. Here are the results of testing the latest attachment
  http://issues.apache.org/jira/secure/attachment/12467019/syncOnClose2.txt
  against trunk revision 1053203.

    +1 @author. The patch does not contain any @author tags.

    +1 tests included. The patch appears to include 3 new or modified tests.

    +1 javadoc. The javadoc tool did not generate any warning messages.

    +1 javac. The applied patch does not increase the total number of javac compiler warnings.

    +1 findbugs. The patch does not introduce any new Findbugs (version 1.3.9) warnings.

    +1 release audit. The applied patch does not increase the total number of release audit warnings.

    -1 core tests. The patch failed these core unit tests:
                  org.apache.hadoop.hdfs.server.namenode.TestStorageRestore
                  org.apache.hadoop.hdfs.TestFileConcurrentReader

    -1 contrib tests. The patch failed contrib unit tests.

    +1 system test framework. The patch passed system test framework compile.

Test results: https://hudson.apache.org/hudson/job/PreCommit-HDFS-Build/49//testReport/
Findbugs warnings: https://hudson.apache.org/hudson/job/PreCommit-HDFS-Build/49//artifact/trunk/build/test/findbugs/newPatchFindbugsWarnings.html
Console output: https://hudson.apache.org/hudson/job/PreCommit-HDFS-Build/49//console

This message is automatically generated.

> prevent data loss when a cluster suffers a power loss
> -----------------------------------------------------
>
>                 Key: HDFS-1539
>                 URL: https://issues.apache.org/jira/browse/HDFS-1539
>             Project: Hadoop HDFS
>          Issue Type: Improvement
>          Components: data-node, hdfs client, name-node
>            Reporter: dhruba borthakur
>            Assignee: dhruba borthakur
>         Attachments: syncOnClose1.txt, syncOnClose2.txt
>
>
> We have seen an instance where an external outage caused many datanodes to reboot at around the same time. This resulted in many corrupted blocks. These were recently written blocks; the current implementation of the HDFS datanode does not sync the data of a block file when the block is closed. A few options to address this:
> 1. Have a cluster-wide config setting that causes the datanode to sync a block file when the block is finalized.
> 2. Introduce a new parameter to FileSystem.create() to trigger the new behaviour, i.e. cause the datanode to sync a block file when it is finalized.
> 3. Implement FSDataOutputStream.hsync() to cause all data written to the specified file to be written to stable storage.
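For context on the proposal quoted above, here is a minimal client-side sketch of how options 1 and 3 might look once implemented. The dfs.datanode.synconclose property name is an assumed, illustrative key (the patch may use a different one); FSDataOutputStream.hsync() is the call named in the description, and the surrounding scaffolding is ordinary FileSystem API usage, not taken from the patch itself.

{code:java}
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FSDataOutputStream;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

public class SyncOnCloseExample {
  public static void main(String[] args) throws Exception {
    Configuration conf = new Configuration();

    // Option 1: cluster-wide switch asking datanodes to sync a block file to
    // disk when the block is finalized. Illustrative key name, not confirmed.
    conf.setBoolean("dfs.datanode.synconclose", true);

    FileSystem fs = FileSystem.get(conf);
    FSDataOutputStream out = fs.create(new Path("/tmp/durable-file"));
    try {
      out.write("data that must survive a power loss".getBytes("UTF-8"));

      // Option 3: hsync() asks that everything written so far be pushed to
      // stable storage on the datanodes before the call returns.
      out.hsync();
    } finally {
      out.close();
    }
  }
}
{code}

With option 2, the same durability guarantee would instead be requested per file at create time, so applications that do not need it would avoid the extra sync cost on every finalized block.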