[ https://issues.apache.org/jira/browse/HDFS-7609?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14555024#comment-14555024 ]
Jing Zhao commented on HDFS-7609:
---------------------------------

Thanks for the response, Ming! Yes, I agree that in most cases the call should be blocked by the {{OperationCategory}} check, and a {{StandbyException}} will be thrown. But it looks like we still cannot 100% rule out the scenario where this check happens after the transition? That scenario should be extremely rare, though.

> startup used too much time to load edits
> ----------------------------------------
>
>                 Key: HDFS-7609
>                 URL: https://issues.apache.org/jira/browse/HDFS-7609
>             Project: Hadoop HDFS
>          Issue Type: Improvement
>          Components: namenode
>    Affects Versions: 2.2.0
>            Reporter: Carrey Zhan
>            Assignee: Ming Ma
>              Labels: BB2015-05-RFC
>         Attachments: HDFS-7609-CreateEditsLogWithRPCIDs.patch, HDFS-7609.patch, recovery_do_not_use_retrycache.patch
>
>
> One day my NameNode crashed because two journal nodes timed out at the same time under very high load, leaving behind about 100 million transactions in the edits log. (I still have no idea why they were not rolled into the fsimage.)
> I tried to restart the NameNode, but it showed that almost 20 hours would be needed to finish, and it was loading fsedits most of the time. I also tried to restart the NameNode in recovery mode, but the loading speed was no different.
> I looked into the stack trace and judged that the slowness was caused by the retry cache, so I set dfs.namenode.enable.retrycache to false, and the restart process finished in half an hour.
> I think the retry cache is useless during startup, at least during the recovery process.
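For reference, the workaround mentioned in the description corresponds to the {{dfs.namenode.enable.retrycache}} property. A minimal hdfs-site.xml sketch of that workaround follows (assuming the standard Hadoop XML configuration format; this is not a recommended permanent setting, since the retry cache protects non-idempotent RPCs from being re-applied on retry across restarts and failovers):

{code:xml}
<!-- hdfs-site.xml: temporary workaround described in the issue above. -->
<!-- Disables the NameNode retry cache, which the reporter found was   -->
<!-- dominating edits-log loading time during startup/recovery.        -->
<property>
  <name>dfs.namenode.enable.retrycache</name>
  <value>false</value>
</property>
{code}

The attached patches presumably aim to avoid the same cost while loading edits without requiring the cache to be disabled cluster-wide.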