[
https://issues.apache.org/jira/browse/HIVE-8956?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14225763#comment-14225763
]
Rui Li commented on HIVE-8956:
------------------------------
Yeah I created HIVE-8972 as a follow-up.
As for retrying a failed job, I think it'd be better if we can find out what caused
the failure. E.g. in the case of non-deserializable failures, it doesn't make sense
to retry.
[~vanzin] do you know if there's any way we can catch such failures (beyond job
execution) and send them back to the client?
> Hive hangs while some error/exception happens beyond job execution [Spark
> Branch]
> ---------------------------------------------------------------------------------
>
> Key: HIVE-8956
> URL: https://issues.apache.org/jira/browse/HIVE-8956
> Project: Hive
> Issue Type: Sub-task
> Components: Spark
> Reporter: Chengxiang Li
> Assignee: Rui Li
> Labels: Spark-M3
> Fix For: spark-branch
>
> Attachments: HIVE-8956.1-spark.patch
>
>
> Remote spark client communicates with the remote spark context asynchronously. If
> an error/exception is thrown during job execution in the remote spark context, it
> is wrapped and sent back to the remote spark client. But if an error/exception is
> thrown beyond job execution, such as a job serialization failure, the remote
> spark client would never know what's going on in the remote spark context, and it
> would hang there.
> Setting a timeout on the remote spark client side may not be a great idea, as we
> are not sure how long the query will run on the spark cluster. We need to find a
> way to check whether a job has failed (whole life cycle) in the remote spark context.
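One way such a failure could be surfaced instead of leaving the client hanging, as a
minimal sketch; JobSubmitter, RemoteJob and sendToRemoteContext below are hypothetical
names, not the actual remote spark client classes:
{code:java}
import java.io.ByteArrayOutputStream;
import java.io.ObjectOutputStream;
import java.io.Serializable;
import java.util.concurrent.CompletableFuture;

class JobSubmitter {

  /** Hypothetical job type; the real client has its own Job interface. */
  interface RemoteJob<T> extends Serializable {}

  <T> CompletableFuture<T> submit(RemoteJob<T> job) {
    CompletableFuture<T> result = new CompletableFuture<>();
    try {
      byte[] payload = serialize(job);      // may fail before the job ever runs
      sendToRemoteContext(payload, result); // normal async path
    } catch (Exception e) {
      // Failure "beyond job execution": surface it instead of hanging the client.
      result.completeExceptionally(e);
    }
    return result;
  }

  private byte[] serialize(Object job) throws Exception {
    ByteArrayOutputStream bytes = new ByteArrayOutputStream();
    try (ObjectOutputStream out = new ObjectOutputStream(bytes)) {
      out.writeObject(job); // throws NotSerializableException for bad jobs
    }
    return bytes.toByteArray();
  }

  private <T> void sendToRemoteContext(byte[] payload, CompletableFuture<T> result) {
    // Placeholder for the async RPC to the remote spark context.
  }
}
{code}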