[ https://issues.apache.org/jira/browse/HDFS-826?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=12853959#action_12853959 ]
Hudson commented on HDFS-826:
-----------------------------

Integrated in Hadoop-Hdfs-trunk #275 (See [http://hudson.zones.apache.org/hudson/job/Hadoop-Hdfs-trunk/275/])

> Allow a mechanism for an application to detect that datanode(s) have died in the write pipeline
> -----------------------------------------------------------------------------------------------
>
>                 Key: HDFS-826
>                 URL: https://issues.apache.org/jira/browse/HDFS-826
>             Project: Hadoop HDFS
>          Issue Type: Improvement
>          Components: hdfs client
>            Reporter: dhruba borthakur
>            Assignee: dhruba borthakur
>             Fix For: 0.22.0
>
>         Attachments: HDFS-826-0.20-v2.patch, HDFS-826-0.20.patch, Replicable4.txt, ReplicableHdfs.txt, ReplicableHdfs2.txt, ReplicableHdfs3.txt
>
>
> HDFS does not replicate the last block of a file that is currently being
> written to by an application. Every datanode death in the write pipeline
> decreases the reliability of the last block of the currently-being-written
> file. This situation can be improved if the application is notified of a
> datanode death in the write pipeline; the application can then decide what
> the right course of action is for this event.
> In our use case, the application closes the file on the first datanode
> death and starts writing to a newly created file. This ensures that the
> replica count of a block stays close to 3 at all times.
> One idea is to make DFSOutputStream.write() throw an exception if the
> number of datanodes in the write pipeline falls below the
> minimum.replication.factor that is set on the client (this is backward
> compatible).
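For illustration, here is a minimal sketch of the application-side handling described in the issue, assuming the proposed behavior that write() throws once the pipeline falls below the client-side minimum.replication.factor. The issue does not name a specific exception type, so a plain IOException is caught here; the PipelineAwareWriter class, the /logs/segment- paths, and all helper names are hypothetical.

{code:java}
import java.io.IOException;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FSDataOutputStream;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

public class PipelineAwareWriter {
  private final FileSystem fs;
  private FSDataOutputStream out;
  private int segment = 0;

  public PipelineAwareWriter(Configuration conf) throws IOException {
    this.fs = FileSystem.get(conf);
    this.out = fs.create(nextPath());
  }

  // Hypothetical naming scheme for the rolled files.
  private Path nextPath() {
    return new Path("/logs/segment-" + (segment++));
  }

  /**
   * Write one record. If the write fails because the pipeline has
   * degraded (surfaced here as an IOException, since the issue does
   * not specify an exception type), close the current file and
   * re-issue the record against a newly created file, which gets a
   * fresh, fully replicated pipeline.
   */
  public void write(byte[] record) throws IOException {
    try {
      out.write(record);
    } catch (IOException pipelineDegraded) {
      try {
        out.close(); // seal the data written so far
      } catch (IOException ignored) {
        // best effort: the old pipeline is already broken
      }
      out = fs.create(nextPath()); // new file => new 3-datanode pipeline
      out.write(record);           // retry the failed record once
    }
  }

  public void close() throws IOException {
    out.close();
  }
}
{code}

Rolling to a new file on the first pipeline failure, as the description suggests, bounds the under-replicated data to the tail of the current segment rather than letting a long-lived file accumulate blocks behind a degraded pipeline.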