[
https://issues.apache.org/jira/browse/HADOOP-5729?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=12702637#action_12702637
]
Wang Xu commented on HADOOP-5729:
---------------------------------
> > AFAIK processIOError() intends to deal with errors of editStreams and
> > remove bad ones
>
> Correct. It also shuts down the node if no streams remain;
> it does this in the first line.
Thus we should call processIOError() with a trivial argument at the end of
open() if no editStream is opened correctly. Is this OK?
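To illustrate the proposal, here is a minimal, self-contained sketch (not the attached patch): a stand-in for FSEditLog.open() that, after trying each directory, checks whether any stream opened and, if none did, reports the error so the node halts instead of running with zero edit streams. The class and helper names below are hypothetical stand-ins for the real FSEditLog internals.

```java
import java.util.ArrayList;
import java.util.List;

// Hypothetical sketch of the proposed check; editStreams and
// processIOError() mirror FSEditLog, but this is not the actual patch.
public class FSEditLogSketch {
    private final List<String> editStreams = new ArrayList<String>();

    // Simplified stand-in for FSEditLog.open(): try each directory,
    // keep only the streams that opened successfully.
    public void open(List<String> dirs) {
        for (String dir : dirs) {
            try {
                editStreams.add(openStream(dir)); // may throw
            } catch (Exception e) {
                // current code only logs a WARN here and keeps going
            }
        }
        // Proposed fix: if nothing opened, invoke the error handler
        // instead of returning with an empty editStreams list.
        if (editStreams.isEmpty()) {
            processIOError();
        }
    }

    // Hypothetical helper: fails for directories marked "bad".
    private String openStream(String dir) throws Exception {
        if (dir.startsWith("bad")) {
            throw new Exception("cannot open " + dir);
        }
        return dir + "/edits";
    }

    // Stand-in for processIOError(): with no streams left, shut down.
    private void processIOError() {
        throw new IllegalStateException(
            "No edit directories available; shutting down");
    }

    public int streamCount() {
        return editStreams.size();
    }
}
```

With at least one good directory, open() proceeds normally; with none, the node fails fast at open() time rather than later during a sync.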
> FSEditLog.open should stop going on if cannot open any directory
> ----------------------------------------------------------------
>
> Key: HADOOP-5729
> URL: https://issues.apache.org/jira/browse/HADOOP-5729
> Project: Hadoop Core
> Issue Type: Bug
> Components: dfs
> Affects Versions: 0.19.1
> Environment: CentOS 5.2, jdk 1.6, hadoop 0.19.1
> Reporter: Wang Xu
> Assignee: Wang Xu
> Fix For: 0.19.2
>
> Attachments: fseditlog-open.patch
>
> Original Estimate: 1h
> Remaining Estimate: 1h
>
> FSEditLog.open is invoked when SecondaryNameNode does a checkpoint.
> If no directory is opened successfully, it only prints some WARN messages
> in the log and keeps running.
> However, this leaves editStreams empty, so edits cannot be synced.
> And if editStreams drops to 0 when exceptions occur during logSync,
> the NameNode prints a FATAL log message and halts itself. Hence, we
> think it should also stop itself in this case.
--
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.