[ https://issues.apache.org/jira/browse/HDFS-1539?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=12974017#action_12974017 ]
M. C. Srivas commented on HDFS-1539:
------------------------------------

Dhruba, so if there's a file with 20 blocks on 20 different servers, with 3 replicas each, we might end up sync'ing 41 servers (= 1 primary + 20*2 replicas) when closing the file, correct?

> prevent data loss when a cluster suffers a power loss
> -----------------------------------------------------
>
>                 Key: HDFS-1539
>                 URL: https://issues.apache.org/jira/browse/HDFS-1539
>             Project: Hadoop HDFS
>          Issue Type: Improvement
>          Components: data-node, hdfs client, name-node
>            Reporter: dhruba borthakur
>            Assignee: dhruba borthakur
>         Attachments: syncOnClose1.txt
>
>
> We have seen an instance where an external outage caused many datanodes to reboot at around the same time. This resulted in many corrupted blocks, all of them recently written: the current implementation of the HDFS Datanode does not sync the data of a block file when the block is closed.
> 1. Have a cluster-wide config setting that causes the datanode to sync a block file when the block is finalized.
> 2. Introduce a new parameter to FileSystem.create() to trigger the new behaviour, i.e. cause the datanode to sync a block file when it is finalized.
> 3. Implement FSDataOutputStream.hsync() to cause all data written to the specified file to be written to stable storage.
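For proposal #1, a minimal sketch of what the datanode-side change could look like: a boolean read from a cluster-wide config key gates an fsync when a block is finalized. The class name, method name, and config key below are illustrative, not the actual patch; the real change is in the attached syncOnClose1.txt.

{code:java}
import java.io.FileOutputStream;
import java.io.IOException;

// Illustrative stand-in for the datanode's block-finalization path;
// the real code lives inside the datanode's dataset implementation.
class BlockFinalizer {
  // Populated from a cluster-wide config key; the key name would be
  // defined by the patch (something like "dfs.datanode.synconclose").
  private final boolean syncOnClose;

  BlockFinalizer(boolean syncOnClose) {
    this.syncOnClose = syncOnClose;
  }

  void finalizeBlock(FileOutputStream blockFile) throws IOException {
    blockFile.flush();
    if (syncOnClose) {
      // force(true) issues an fsync, pushing the block's data out of
      // the OS page cache to stable storage, so a reboot right after
      // finalize can no longer leave a truncated or corrupt block file.
      blockFile.getChannel().force(true);
    }
    blockFile.close();
  }
}
{code}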
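And for proposals #2 and #3, a sketch of what the client side might look like, assuming FileSystem.create() grows a sync-on-finalize creation flag (SYNC_BLOCK is an illustrative name here) and FSDataOutputStream.hsync() is implemented as proposed:

{code:java}
import java.util.EnumSet;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.CreateFlag;
import org.apache.hadoop.fs.FSDataOutputStream;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.fs.permission.FsPermission;

public class DurableWriteExample {
  public static void main(String[] args) throws Exception {
    Configuration conf = new Configuration();
    FileSystem fs = FileSystem.get(conf);

    // Proposal #2: a per-file creation flag asking each datanode to
    // sync the block file to disk when the block is finalized.
    FSDataOutputStream out = fs.create(
        new Path("/data/important.log"),
        FsPermission.getDefault(),
        EnumSet.of(CreateFlag.CREATE, CreateFlag.OVERWRITE, CreateFlag.SYNC_BLOCK),
        4096,               // io buffer size
        (short) 3,          // replication factor
        64L * 1024 * 1024,  // block size
        null);              // no progress callback

    out.write("must survive a power loss".getBytes("UTF-8"));

    // Proposal #3: hsync() flushes buffered client data down the
    // pipeline and asks every replica to fsync to stable storage.
    out.hsync();
    out.close();

    fs.close();
  }
}
{code}

Either path trades write latency for durability: each block finalization now waits on an fsync at every replica, which is the per-server sync cost the comment above is asking about.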