[ https://issues.apache.org/jira/browse/HDFS-3605?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13413690#comment-13413690 ]

Uma Maheswara Rao G commented on HDFS-3605:
-------------------------------------------

Hi Todd,

I have attached a patch with the approach I am currently thinking of.
 
{quote}
 I think we should separate the discussion of a potential optimization from the 
discussion of fixing this bug.
{quote}
Sure. In fact, that is not an optimization; it is required by the approach I am 
proposing.

Please take a look and correct me if I have missed something in the patch, since 
you and Jitendra were mainly involved in those changes.

Thanks a lot,

Uma
                
> Block mistakenly marked corrupt during edit log catchup phase of failover
> -------------------------------------------------------------------------
>
>                 Key: HDFS-3605
>                 URL: https://issues.apache.org/jira/browse/HDFS-3605
>             Project: Hadoop HDFS
>          Issue Type: Bug
>          Components: ha, name-node
>    Affects Versions: 2.0.0-alpha, 2.0.1-alpha
>            Reporter: Brahma Reddy Battula
>            Assignee: Todd Lipcon
>         Attachments: HDFS-3605.patch, TestAppendBlockMiss.java
>
>
> Open file for append
> Write data and sync.
> After the next log roll, once the standby NN has tailed the edit log, close the append stream.
> Call append multiple times on the same file before the next edit log roll.
> Now abruptly kill the current active NameNode.
> After failover, the block is reported missing.
> This is likely because all the latest block updates were queued on the standby 
> NameNode; during failover, processing the first OP_CLOSE against the pending 
> queue marked the block as corrupt.
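For reference, the reproduction steps quoted above roughly correspond to the following client-side sequence. This is a minimal sketch, assuming an HA cluster where the standby is tailing edits; the path, write sizes, and class name are made up for illustration, and the real reproduction is in the attached TestAppendBlockMiss.java. Edit log rolls and killing the active NameNode happen outside this code.

{code:java}
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FSDataOutputStream;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

// Hypothetical sketch of the client-side steps; not the attached test itself.
public class AppendBlockMissRepro {
  public static void main(String[] args) throws Exception {
    Configuration conf = new Configuration();
    FileSystem fs = FileSystem.get(conf);
    Path file = new Path("/test/appendBlockMiss.dat"); // hypothetical path

    // Create the file first so it can be opened for append.
    if (!fs.exists(file)) {
      fs.create(file).close();
    }

    // 1. Open the file for append, write data and sync (hflush).
    FSDataOutputStream out = fs.append(file);
    out.write(new byte[1024]);
    out.hflush();

    // 2. After the next edit log roll has been tailed by the standby NN,
    //    close the append stream (generates OP_CLOSE).
    out.close();

    // 3. Call append multiple times on the same file before the next edit
    //    log roll, so those edits are still queued on the standby.
    for (int i = 0; i < 3; i++) {
      FSDataOutputStream o = fs.append(file);
      o.write(new byte[512]);
      o.close();
    }

    // 4. Abruptly kill the current active NameNode and fail over.
    //    While the standby catches up on the queued edits, the first
    //    OP_CLOSE is applied against stale block state and the block is
    //    mistakenly marked corrupt, so the data appears missing.
  }
}
{code}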

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira

        
