[
https://issues.apache.org/jira/browse/HIVE-1651?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=12910834#action_12910834
]
Ning Zhang commented on HIVE-1651:
----------------------------------
@joydeep, the output file will not be committed if an exception occurs and
close(abort=true) is called. This bug happens in the short time window after
the exception occurs and before close(abort) is called. Although the file
gets deleted, the dynamic partition insert has already created a directory,
which is later treated as an empty partition.
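
To make that window concrete, here is a rough, self-contained sketch of the
symptom, assuming Hadoop's FileSystem API. It is not FileSinkOperator code;
the class name EmptyPartitionSketch, the paths, and the file names are made
up for illustration only.

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

public class EmptyPartitionSketch {
  public static void main(String[] args) throws Exception {
    FileSystem fs = FileSystem.getLocal(new Configuration());

    // Dynamic partition insert: the partition directory is created as soon
    // as the first row for that partition shows up.
    Path partDir = new Path("/tmp/hive-sketch/ds=2010-09-17");
    fs.mkdirs(partDir);

    // The task then starts writing its (uncommitted) output file there.
    Path outFile = new Path(partDir, "task_000000_0");
    fs.create(outFile).close();

    // close(abort=true) removes the uncommitted file ...
    fs.delete(outFile, false);

    // ... but the directory created above is left behind, so a later load
    // step sees it as an empty partition.
    System.out.println("partition dir still exists: " + fs.exists(partDir));
  }
}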
> ScriptOperator should not forward any output to downstream operators if an
> exception has happened
> ------------------------------------------------------------------------------------------------
>
> Key: HIVE-1651
> URL: https://issues.apache.org/jira/browse/HIVE-1651
> Project: Hadoop Hive
> Issue Type: Bug
> Reporter: Ning Zhang
> Assignee: Ning Zhang
> Attachments: HIVE-1651.patch
>
>
> ScriptOperator spawns 2 threads for reading the stdout and stderr of the
> script and then forwards the output from stdout to downstream operators. If
> the script fails (e.g., it gets killed), the ScriptOperator gets an
> exception and throws it up to the upstream operators until MapOperator
> catches it and calls close(abort). Before ScriptOperator.close() is called,
> the script's output stream can still forward output to downstream
> operators. We should terminate it immediately.
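
A minimal sketch of the kind of guard the description asks for is below: a
volatile error flag that the stdout-reader thread checks before forwarding
anything downstream. This is illustrative only and is not the attached
HIVE-1651.patch; the names ScriptOutputReaderSketch, scriptError, and
forwardToDownstream are invented stand-ins for the real Operator.forward()
path.

import java.io.BufferedReader;
import java.io.InputStreamReader;
import java.nio.charset.StandardCharsets;

public class ScriptOutputReaderSketch {
  // Set by the operator as soon as the script dies or any exception occurs.
  private volatile boolean scriptError = false;

  public void setScriptError() {
    scriptError = true;
  }

  // Runs in the thread that drains the script's stdout.
  void drainStdout(Process scriptProcess) {
    try (BufferedReader reader = new BufferedReader(
        new InputStreamReader(scriptProcess.getInputStream(),
            StandardCharsets.UTF_8))) {
      String line;
      while ((line = reader.readLine()) != null) {
        if (scriptError) {
          // An exception has already happened; stop forwarding to
          // downstream operators immediately.
          break;
        }
        forwardToDownstream(line);
      }
    } catch (Exception e) {
      scriptError = true;
    }
  }

  // Stand-in for forwarding a row to the downstream operators.
  private void forwardToDownstream(String row) {
    System.out.println(row);
  }
}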
--
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.