HA: failover does not succeed if prior NN died just after creating an edit log segment
--------------------------------------------------------------------------------------

                 Key: HDFS-2824
                 URL: https://issues.apache.org/jira/browse/HDFS-2824
             Project: Hadoop HDFS
          Issue Type: Sub-task
          Components: ha, name-node
    Affects Versions: HA branch (HDFS-1623)
            Reporter: Todd Lipcon
            Assignee: Todd Lipcon


While stress testing failover, I hit the following failure:
- NN1 rolls edit logs and starts writing edits_inprogress_1000
- NN1 crashes before writing the START_LOG_SEGMENT transaction
- NN2 tries to become active and calls {{recoverUnfinalizedSegment}}. Since 
the log file contains no valid transactions, it is marked as corrupt and 
renamed with the {{.corrupt}} suffix.
- The sanity check in {{openLogsForWrite}} then refuses to open a new 
in-progress log at the same txid, so failover does not proceed (a rough 
simulation of this sequence is sketched below).
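
To make the sequence concrete, here is a minimal, self-contained simulation 
of the failure on plain local files. It does not use the real HDFS classes: 
{{FailoverEditLogSimulation}}, {{recoverUnfinalizedSegment()}} and 
{{checkCanStartSegment()}} are hypothetical stand-ins for the recovery and 
sanity-check logic described above, and the file names only mirror the NN's 
edit log naming convention.

{code:java}
import java.io.File;
import java.io.IOException;

public class FailoverEditLogSimulation {
  static final long TXID = 1000;

  public static void main(String[] args) throws IOException {
    File dir = new File("edits-dir");
    dir.mkdirs();

    // Step 1: NN1 rolls logs and creates the new in-progress segment, then
    // crashes before writing the START_LOG_SEGMENT transaction, leaving the
    // file with zero valid transactions.
    File inProgress = new File(dir, "edits_inprogress_" + TXID);
    inProgress.createNewFile();

    // Step 2: NN2 becomes active and recovers unfinalized segments. An
    // in-progress segment with no valid transactions is treated as corrupt
    // and renamed aside with a ".corrupt" suffix.
    recoverUnfinalizedSegment(inProgress);

    // Step 3: NN2 tries to open a new segment at the same txid. The sanity
    // check refuses, so failover stops here -- the bug described above.
    checkCanStartSegment(dir, TXID);
    System.out.println("opened new segment at txid " + TXID);
  }

  // Hypothetical stand-in for the recovery step: an empty in-progress file
  // cannot be finalized, so it is marked corrupt and renamed out of the way.
  static void recoverUnfinalizedSegment(File segment) throws IOException {
    if (segment.length() == 0) {
      File corrupt = new File(segment.getParent(), segment.getName() + ".corrupt");
      if (!segment.renameTo(corrupt)) {
        throw new IOException("could not rename " + segment + " to " + corrupt);
      }
      System.out.println("marked corrupt: " + corrupt.getName());
    }
  }

  // Hypothetical stand-in for the sanity check in openLogsForWrite(): refuse
  // to start a segment at a txid that an existing file already claims.
  static void checkCanStartSegment(File dir, long txid) throws IOException {
    File[] existing = dir.listFiles();
    if (existing == null) {
      return;
    }
    for (File f : existing) {
      if (f.getName().contains("_" + txid)) {
        throw new IOException("refusing to start a new segment at txid " + txid
            + ": existing file " + f.getName());
      }
    }
  }
}
{code}

Running this prints the "marked corrupt" line and then throws from the 
sanity-check step, which is the point at which failover stalls in the 
scenario above.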
