[ https://issues.apache.org/jira/browse/HDFS-3605?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13416862#comment-13416862 ]
Uma Maheswara Rao G commented on HDFS-3605:
-------------------------------------------

Hi Todd, sorry for the late reply on this; I got stuck in some other work yesterday.

The patch looks great to me. +1 once one small nit is addressed:
{code}
+  public void setPostponeInvalidBlockReports(boolean postpone) {
+    this.shouldPostponeBlocksFromFuture = postpone;
+  }
{code}
Did you forget to update the method name? The method name and the variable name don't match (a small sketch of one possible rename follows at the end of this message).

I also ran some tests with this change after adding a few debug points, and it worked well for me.

Thanks,
Uma

> Block mistakenly marked corrupt during edit log catchup phase of failover
> -------------------------------------------------------------------------
>
>                 Key: HDFS-3605
>                 URL: https://issues.apache.org/jira/browse/HDFS-3605
>             Project: Hadoop HDFS
>          Issue Type: Bug
>          Components: ha, name-node
>    Affects Versions: 2.0.0-alpha, 2.1.0-alpha
>            Reporter: Brahma Reddy Battula
>            Assignee: Todd Lipcon
>         Attachments: HDFS-3605.patch, TestAppendBlockMiss.java, hdfs-3605.txt, hdfs-3605.txt
>
> Steps to reproduce:
> 1. Open a file for append.
> 2. Write data and sync.
> 3. After the next log roll and edit log tailing in the standby NN, close the append stream.
> 4. Call append multiple times on the same file before the next edit log roll.
> 5. Abruptly kill the current active NameNode.
> The block is then reported missing. This appears to be because all of the latest
> blocks were queued in the standby NameNode: during failover, the first OP_CLOSE
> processed the pending queue and added the block to the corrupt-replica list.
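To make the sequence above concrete, here is a rough sketch of the reproduction in an HA MiniDFSCluster. This is only a sketch from memory, not the attached TestAppendBlockMiss.java; it assumes the Hadoop 2.x HA test utilities (MiniDFSNNTopology, HATestUtil) behave as in the test code of that era, and the file/class names are made up for illustration.
{code}
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FSDataOutputStream;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.hdfs.MiniDFSCluster;
import org.apache.hadoop.hdfs.MiniDFSNNTopology;
import org.apache.hadoop.hdfs.server.namenode.ha.HATestUtil;

public class AppendFailoverRepro {
  public static void main(String[] args) throws Exception {
    Configuration conf = new Configuration();
    MiniDFSCluster cluster = new MiniDFSCluster.Builder(conf)
        .nnTopology(MiniDFSNNTopology.simpleHATopology())
        .numDataNodes(3)
        .build();
    try {
      cluster.waitActive();
      cluster.transitionToActive(0);
      FileSystem fs = HATestUtil.configureFailoverFs(cluster, conf);
      Path p = new Path("/test-append");

      // Steps 1-2: create the file, write data, and sync it out.
      FSDataOutputStream out = fs.create(p);
      out.write("first".getBytes());
      out.hflush();

      // Step 3: roll the edit log (what the standby's tailer would
      // trigger) so the standby tails up to here, then close the stream.
      cluster.getNameNodeRpc(0).rollEditLog();
      out.close();

      // Step 4: append to the same file several times before the next
      // roll, so these OP_ADD/OP_CLOSE edits are still un-tailed.
      for (int i = 0; i < 3; i++) {
        FSDataOutputStream app = fs.append(p);
        app.write("more".getBytes());
        app.close();
      }

      // Step 5: kill the active abruptly and fail over. The standby
      // replays the pending edits during catch-up; with the bug, the
      // block ends up wrongly marked corrupt at this point.
      cluster.shutdownNameNode(0);
      cluster.transitionToActive(1);
    } finally {
      cluster.shutdown();
    }
  }
}
{code}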
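And to spell out the nit on the setter: a minimal sketch of one way the method name could be aligned with the field it sets. The name setPostponeBlocksFromFuture here is hypothetical; whichever name the final patch settles on is fine.
{code}
  // Hypothetical rename so the method name matches the field it sets;
  // the actual name in the final patch may differ.
  public void setPostponeBlocksFromFuture(boolean postpone) {
    this.shouldPostponeBlocksFromFuture = postpone;
  }
{code}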