[ https://issues.apache.org/jira/browse/HDFS-3605?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13410138#comment-13410138 ]
Uma Maheswara Rao G commented on HDFS-3605:
-------------------------------------------

Note that in this case we have two options:
- Maintain only the most recently reported blocks (those with the latest genstamp) in QueuedDNMessages. We have to think through the impacts of this and whether any scenario is missed. This is currently what we have opted for as our workaround.
(or)
- Process the blocks with the current genstamp then and there, and postpone only the higher-genstamp blocks. Otherwise, if we simply postpone everything until all edits are loaded, the older-genstamp blocks will create an issue: the genstamp in the blocksMap may already have been updated to a higher value while loading the edits, so the block may still get marked as corrupt.

> Missing Block in following scenario
> -----------------------------------
>
>                 Key: HDFS-3605
>                 URL: https://issues.apache.org/jira/browse/HDFS-3605
>             Project: Hadoop HDFS
>          Issue Type: Bug
>          Components: name-node
>    Affects Versions: 2.0.0-alpha, 2.0.1-alpha
>            Reporter: Brahma Reddy Battula
>            Assignee: Todd Lipcon
>        Attachments: TestAppendBlockMiss.java
>
> Open a file for append.
> Write data and sync.
> After the next log roll and editlog tailing in the standby NN, close the append stream.
> Call append multiple times on the same file before the next editlog roll.
> Now abruptly kill the current active namenode.
> Here the block is missed.
> This may be because all the latest blocks were queued in the standby Namenode. During failover, the first OP_CLOSE processed the pending queue and added the block to the corrupt blocks.
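The difference between the two options can be sketched as follows. This is a hypothetical simplification, not actual HDFS code: the `Msg` record, `keepLatestOnly`, and `wouldMarkCorrupt` are invented names standing in for the queued-message bookkeeping and the genstamp comparison against the blocksMap.

```java
import java.util.*;

// Hypothetical sketch of the two strategies for queued DN block messages
// on the standby NN while edits are still loading (not real HDFS code).
public class QueuedMsgSketch {
    // A queued "block reported" message: (blockId, genstamp at report time).
    record Msg(long blockId, long genStamp) {}

    // Option (a): retain only the most recent message per block, so stale
    // genstamps never reach the corruption check after edits finish loading.
    static Map<Long, Msg> keepLatestOnly(List<Msg> queued) {
        Map<Long, Msg> latest = new HashMap<>();
        for (Msg m : queued) {
            latest.merge(m.blockId(), m,
                (old, cur) -> cur.genStamp() > old.genStamp() ? cur : old);
        }
        return latest;
    }

    // Simplified corruption check: a message whose genstamp is older than
    // the genstamp now in the blocksMap would be marked corrupt on replay.
    static boolean wouldMarkCorrupt(Msg m, long genStampInBlocksMap) {
        return m.genStamp() < genStampInBlocksMap;
    }

    public static void main(String[] args) {
        // Multiple appends bumped block 1's genstamp 1001 -> 1003 while the
        // messages sat queued; after loading edits, blocksMap holds 1003.
        List<Msg> queued =
            List.of(new Msg(1, 1001), new Msg(1, 1002), new Msg(1, 1003));
        long inBlocksMap = 1003;

        // Postpone-everything: the stale 1001/1002 messages are replayed
        // against the updated blocksMap genstamp and look corrupt.
        boolean anyStale =
            queued.stream().anyMatch(m -> wouldMarkCorrupt(m, inBlocksMap));

        // Keep-latest-only: only the 1003 message survives; nothing stale.
        boolean latestStale =
            wouldMarkCorrupt(keepLatestOnly(queued).get(1L), inBlocksMap);

        System.out.println(anyStale + " " + latestStale);
    }
}
```

Option (b) achieves the same end differently: messages matching the current genstamp are processed immediately, so by the time edits finish loading, only messages that were already ahead of the blocksMap remain queued.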