[ https://issues.apache.org/jira/browse/SPARK-20217?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Apache Spark reassigned SPARK-20217:
------------------------------------

    Assignee: Apache Spark

> Executor should not fail stage if killed task throws non-interrupted exception
> ------------------------------------------------------------------------------
>
>                 Key: SPARK-20217
>                 URL: https://issues.apache.org/jira/browse/SPARK-20217
>             Project: Spark
>          Issue Type: Bug
>          Components: Spark Core
>    Affects Versions: 2.2.0
>            Reporter: Eric Liang
>            Assignee: Apache Spark
>
> This is reproducible as follows: run the job below, then use 
> SparkContext.killTaskAttempt to kill one of its tasks. The entire stage fails, 
> because the task threw a RuntimeException rather than an InterruptedException.
> We should probably return TaskKilled rather than TaskFailed unconditionally 
> whenever the task was killed by the driver, regardless of the actual exception 
> thrown.
> {code}
> // Each task sleeps for a long time so there is a window in which to kill it.
> spark.range(100).repartition(100).foreach { i =>
>   try {
>     Thread.sleep(10000000)
>   } catch {
>     // Rethrow the interrupt as a RuntimeException: the executor then
>     // reports TaskFailed rather than TaskKilled, failing the whole stage.
>     case t: InterruptedException =>
>       throw new RuntimeException(t)
>   }
> }
> {code}
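> While the job above is running, a task attempt can be killed from the driver, 
> e.g. from another thread in the same spark-shell. A minimal sketch (the 
> attempt id here is a hypothetical value; in practice it comes from the Spark 
> UI or a SparkListener, and the {{reason}} parameter assumes the 2.2 signature 
> of SparkContext.killTaskAttempt):
> {code}
> // Hypothetical task attempt id, e.g. read off the Spark UI's task table.
> val taskId = 42L
> // interruptThread = true delivers Thread.interrupt() to the running task,
> // which raises the InterruptedException caught in the job above.
> sc.killTaskAttempt(taskId, interruptThread = true,
>   reason = "reproducing SPARK-20217")
> {code}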
> Based on the code in TaskSetManager, I think this also affects kills of 
> speculative tasks. However, speculative tasks are few in number, and a task 
> usually has to fail several times before the stage is cancelled, so it is 
> likely that no one has hit this in production.
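> One plausible shape for the fix, sketched against the exception handling in 
> Executor.TaskRunner.run() (hedged: {{task.reasonIfKilled}} and 
> {{TaskKilled(reason)}} exist in the 2.2 codebase, but the placement and catch 
> ordering here are assumptions, not the final patch):
> {code}
> // In TaskRunner.run()'s catch block: if the driver marked this task as
> // killed, report TaskKilled no matter what the user code actually threw.
> case t: Throwable if task != null && task.reasonIfKilled.isDefined =>
>   val killReason = task.reasonIfKilled.get
>   logInfo(s"Executor killed $taskName (TID $taskId), reason: $killReason")
>   setTaskFinishedAndClearInterruptStatus()
>   execBackend.statusUpdate(taskId, TaskState.KILLED,
>     ser.serialize(TaskKilled(killReason)))
> {code}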



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)
