[ https://issues.apache.org/jira/browse/HIVE-1651?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=12910786#action_12910786 ]
Joydeep Sen Sarma commented on HIVE-1651:
-----------------------------------------
If a Hadoop task has failed, how is it that any side-effect files created
by Hive code running in that task are getting promoted to the final output?
I think the forwarding is a red herring. We should not commit output files from
a failed task.
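
For context, the promotion questioned above is normally gated by Hadoop's task-commit protocol (FileOutputCommitter in the real code). Below is a minimal sketch of that idea only; all class and method names are illustrative, not actual Hadoop API:

{code:java}
import java.io.File;

// Minimal sketch of the "commit only on success" protocol referred to above.
// All names here are illustrative; the real mechanism in Hadoop is
// FileOutputCommitter, which promotes a task attempt's temporary directory
// on commit and deletes it on abort.
class TaskCommitSketch {
  // Each task attempt writes its side-effect files under a private
  // temporary directory inside the final output directory.
  static File attemptDir(File outputDir, String attemptId) {
    return new File(outputDir, "_temporary/" + attemptId);
  }

  // Called only for a successful attempt: promote its files.
  static void commitTask(File outputDir, String attemptId) {
    File tmp = attemptDir(outputDir, attemptId);
    File[] files = tmp.listFiles();
    if (files != null) {
      for (File f : files) {
        f.renameTo(new File(outputDir, f.getName()));
      }
    }
    tmp.delete();
  }

  // Called for a failed attempt: discard everything; nothing is promoted.
  static void abortTask(File outputDir, String attemptId) {
    File tmp = attemptDir(outputDir, attemptId);
    File[] files = tmp.listFiles();
    if (files != null) {
      for (File f : files) {
        f.delete();
      }
    }
    tmp.delete();
  }
}
{code}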
> ScriptOperator should not forward any output to downstream operators if an
> exception has occurred
> ------------------------------------------------------------------------------------------------
>
> Key: HIVE-1651
> URL: https://issues.apache.org/jira/browse/HIVE-1651
> Project: Hadoop Hive
> Issue Type: Bug
> Reporter: Ning Zhang
> Assignee: Ning Zhang
> Attachments: HIVE-1651.patch
>
>
> ScriptOperator spawns two threads to consume the stdout and stderr of the
> script, and forwards the output from stdout to downstream operators. If the
> script hits any exception (e.g., it got killed), the ScriptOperator gets an
> exception and throws it to upstream operators until MapOperator gets it and
> calls close(abort). Until ScriptOperator.close() is called, the script
> output stream can still forward output to downstream operators. We should
> terminate it immediately.
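
A minimal sketch of the proposed fix, assuming a volatile abort flag checked by the stdout-reader thread; all names here (ScriptOutputReader, forwardRow) are hypothetical and do not match Hive's actual ScriptOperator internals:

{code:java}
import java.io.BufferedReader;
import java.io.IOException;

// Hypothetical sketch: stop forwarding script output as soon as the operator
// aborts. Names are illustrative, not Hive's real ScriptOperator code.
class ScriptOutputReader implements Runnable {
  // Set from ScriptOperator.close(abort) / on any script exception.
  private volatile boolean aborted = false;

  private final BufferedReader stdout;

  ScriptOutputReader(BufferedReader stdout) {
    this.stdout = stdout;
  }

  // Called by the operator when the script fails; rows read from stdout
  // after this point are dropped instead of forwarded downstream.
  void abort() {
    aborted = true;
  }

  @Override
  public void run() {
    try {
      String line;
      while (!aborted && (line = stdout.readLine()) != null) {
        forwardRow(line);
      }
    } catch (IOException e) {
      // A killed script typically surfaces here; stop forwarding either way.
      aborted = true;
    }
  }

  private void forwardRow(String row) {
    // Placeholder: in Hive this would deserialize the row and call
    // Operator.forward() on the downstream operators.
  }
}
{code}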