[ https://issues.apache.org/jira/browse/HDFS-3605?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13413511#comment-13413511 ]
Todd Lipcon commented on HDFS-3605:
-----------------------------------

Hey Uma. I think we should separate the discussion of a potential optimization from the discussion of fixing this bug. Do you already have a patch for the bug? If not, I'll make one following the approach described above. Once the bug fix is in, we can talk about how to optimize the memory usage.

> Block mistakenly marked corrupt during edit log catchup phase of failover
> --------------------------------------------------------------------------
>
>                 Key: HDFS-3605
>                 URL: https://issues.apache.org/jira/browse/HDFS-3605
>             Project: Hadoop HDFS
>          Issue Type: Bug
>          Components: ha, name-node
>    Affects Versions: 2.0.0-alpha, 2.0.1-alpha
>            Reporter: Brahma Reddy Battula
>            Assignee: Todd Lipcon
>         Attachments: TestAppendBlockMiss.java
>
>
> Open the file for append.
> Write data and sync.
> After the next log roll and edit log tailing in the standby NN, close the append stream.
> Call append multiple times on the same file, before the next edit log roll.
> Now abruptly kill the current active NameNode.
> Here the block is missed. This may be because all the latest blocks were queued in the standby NameNode; during failover, the first OP_CLOSE processed the pending queue and added the block to the corrupt blocks.
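As a rough illustration of the reproduction steps quoted above, here is a minimal sketch in the style of the existing HDFS HA tests (MiniDFSCluster plus the HATestUtil helpers). It is not the attached TestAppendBlockMiss.java; the class name, the data written, and the exact HATestUtil helper signatures are assumptions made for illustration and should be checked against the test source tree.

{code:java}
// Sketch only: approximates the reported scenario, not the attached test.
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FSDataOutputStream;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.hdfs.DFSTestUtil;
import org.apache.hadoop.hdfs.MiniDFSCluster;
import org.apache.hadoop.hdfs.MiniDFSNNTopology;
import org.apache.hadoop.hdfs.server.namenode.ha.HATestUtil;
import org.junit.Test;

public class TestAppendBlockMissSketch {

  @Test
  public void testAppendThenFailover() throws Exception {
    Configuration conf = new Configuration();
    MiniDFSCluster cluster = new MiniDFSCluster.Builder(conf)
        .nnTopology(MiniDFSNNTopology.simpleHATopology())
        .numDataNodes(3)
        .build();
    try {
      cluster.waitActive();
      cluster.transitionToActive(0);
      // Client configured to fail over between the two NNs.
      FileSystem fs = HATestUtil.configureFailoverFs(cluster, conf);
      Path file = new Path("/test-append-block-miss");

      // 1. Open the file, write data, and sync it.
      FSDataOutputStream out = fs.create(file);
      out.writeBytes("initial data");
      out.hflush();

      // 2. Roll the edit log and let the standby tail it, then close the stream.
      //    (waitForStandbyToCatchUp is assumed to roll edits and wait for tailing.)
      HATestUtil.waitForStandbyToCatchUp(cluster.getNameNode(0),
          cluster.getNameNode(1));
      out.close();

      // 3. Append to the same file several times before the next log roll.
      for (int i = 0; i < 3; i++) {
        FSDataOutputStream append = fs.append(file);
        append.writeBytes("appended data " + i);
        append.close();
      }

      // 4. Abruptly kill the current active NN and fail over to the standby.
      cluster.shutdownNameNode(0);
      cluster.transitionToActive(1);

      // 5. With the bug, the last block ends up marked corrupt while the new
      //    active processes the queued messages against OP_CLOSE during edit
      //    log catchup, so this read fails.
      DFSTestUtil.readFile(fs, file);
    } finally {
      cluster.shutdown();
    }
  }
}
{code}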