[ https://issues.apache.org/jira/browse/HDFS-3368?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]
Konstantin Shvachko updated HDFS-3368:
--------------------------------------

    Attachment: blockDeletePolicy.patch

I am working on a unit test. This is a preview of the change.

Nicholas, yes, if the blocks can be manually copied from the flaky nodes, then the data is not completely lost.

Todd, if you want to double "minus heartbeat interval", please give some motivation for that. I mean, why not triple, or 1.5x?

> Missing blocks due to bad DataNodes coming up and down.
> --------------------------------------------------------
>
>                 Key: HDFS-3368
>                 URL: https://issues.apache.org/jira/browse/HDFS-3368
>             Project: Hadoop HDFS
>          Issue Type: Bug
>          Components: name-node
>    Affects Versions: 0.22.0, 1.0.0, 2.0.0, 3.0.0
>            Reporter: Konstantin Shvachko
>            Assignee: Konstantin Shvachko
>         Attachments: blockDeletePolicy.patch
>
>
> All replicas of a block can be removed if bad DataNodes come up and down
> during cluster restart, resulting in data loss.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira
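The threshold being debated above (some multiple of the heartbeat interval, "minus heartbeat interval") can be sketched roughly as follows. This is a hypothetical illustration only, not the actual patch or the HDFS API: the class, method, and parameter names are invented, and the policy shown is just one reading of the discussion, namely that replicas on a recently (re)registered DataNode should not be eligible for excess-replica deletion until the node has been up for longer than multiplier × heartbeatInterval − heartbeatInterval.

```java
// Hypothetical sketch of the block-delete policy under discussion.
// Assumption: a replica on a DataNode may be scheduled for deletion only
// once the node's uptime since registration exceeds
//   multiplier * heartbeatInterval - heartbeatInterval
// ("double minus heartbeat interval" would be multiplier = 2; the comment
// asks why not 3 or 1.5). None of these names exist in HDFS.
public class BlockDeletePolicySketch {

    // HDFS's default DataNode heartbeat interval is 3 seconds.
    static final long HEARTBEAT_INTERVAL_MS = 3_000L;

    static boolean safeToDeleteFrom(long nodeUptimeMs, double multiplier) {
        long threshold =
            (long) (multiplier * HEARTBEAT_INTERVAL_MS) - HEARTBEAT_INTERVAL_MS;
        return nodeUptimeMs > threshold;
    }

    public static void main(String[] args) {
        // A node that re-registered 1s ago is not yet trusted for deletions.
        System.out.println(safeToDeleteFrom(1_000L, 2.0));
        // A node up for 10s exceeds the 2x - 1x = 3s threshold.
        System.out.println(safeToDeleteFrom(10_000L, 2.0));
    }
}
```

The point of such a grace period is that during a cluster restart flaky nodes may register, report replicas, and then disappear; deleting "excess" replicas from stable nodes based on those transient reports is what can lose all copies of a block.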