[ https://issues.apache.org/jira/browse/STORM-969?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14908607#comment-14908607 ]
ASF GitHub Bot commented on STORM-969:
--------------------------------------
Github user dossett commented on the pull request:
https://github.com/apache/storm/pull/664#issuecomment-143341056
@harshach (or other committers), do you have feedback on this PR?
Anecdotally, this has been very useful to us in production. We had an HDFS
restart, which created the exact situation I tested (failed writes while HDFS
was in safe mode), but the bolt recovered without a topology restart.
> HDFS Bolt can end up in an unrecoverable state
> ----------------------------------------------
>
> Key: STORM-969
> URL: https://issues.apache.org/jira/browse/STORM-969
> Project: Apache Storm
> Issue Type: Improvement
> Components: storm-hdfs
> Reporter: Aaron Dossett
> Assignee: Aaron Dossett
>
> The body of the HDFSBolt.execute() method is essentially one try-catch block.
> The catch block reports the error and fails the current tuple. In some
> cases the bolt's FSDataOutputStream object (named 'out') is left in an
> unrecoverable state, and no subsequent call to execute() can succeed.
> To reproduce this scenario:
> - process some tuples through HDFS bolt
> - put the underlying HDFS system into safemode
> - process some more tuples and observe the expected ClosedChannelException
> - take the underlying HDFS system out of safemode
> - subsequent tuples continue to fail with the same exception
> The three fundamental operations that execute() performs (writing, syncing,
> rotating) need to be isolated so that errors from each are handled
> specifically, as sketched below.
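For illustration, the isolation described above might look roughly like the
sketch below. Field and method names (out, format, syncPolicy, rotationPolicy,
collector, rotateOutputFile()) follow the storm-hdfs AbstractHdfsBolt, and
recreateOutputStream() is a hypothetical recovery helper; this is a sketch of
the idea, not the actual change made in PR #664.

    // Sketch only: each of the three operations gets its own try-catch,
    // so a failure in one step is handled specifically instead of being
    // swallowed by a single catch-all around the whole method.
    public void execute(Tuple tuple) {
        byte[] bytes = this.format.format(tuple);

        // 1. Write: on failure, fail the tuple and rebuild the stream so
        //    later calls are not stuck with a dead FSDataOutputStream.
        try {
            this.out.write(bytes);
            this.offset += bytes.length;
        } catch (IOException e) {
            this.collector.reportError(e);
            this.collector.fail(tuple);
            recreateOutputStream(); // hypothetical helper that reopens 'out'
            return;
        }

        // 2. Sync: a failed hsync() fails the tuple but keeps the stream.
        try {
            if (this.syncPolicy.mark(tuple, this.offset)) {
                this.out.hsync();
                this.syncPolicy.reset();
            }
        } catch (IOException e) {
            this.collector.reportError(e);
            this.collector.fail(tuple);
            return;
        }

        this.collector.ack(tuple);

        // 3. Rotate: the tuple is already acked, so a rotation failure is
        //    reported but does not fail the tuple.
        try {
            if (this.rotationPolicy.mark(tuple, this.offset)) {
                rotateOutputFile();
                this.offset = 0;
                this.rotationPolicy.reset();
            }
        } catch (IOException e) {
            this.collector.reportError(e);
        }
    }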
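As a note on the reproduction steps, safe mode on the underlying cluster can
be toggled with the standard admin commands hdfs dfsadmin -safemode enter and
hdfs dfsadmin -safemode leave; while safe mode is on, the namenode rejects
writes, which is what drives the output stream into the failed state.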