[ https://issues.apache.org/jira/browse/SPARK-1582?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Patrick Wendell resolved SPARK-1582.
------------------------------------

       Resolution: Fixed
    Fix Version/s: 1.0.0

> Job cancellation does not interrupt threads
> -------------------------------------------
>
>                 Key: SPARK-1582
>                 URL: https://issues.apache.org/jira/browse/SPARK-1582
>             Project: Spark
>          Issue Type: Bug
>          Components: Spark Core
>    Affects Versions: 1.0.0, 0.9.1
>            Reporter: Aaron Davidson
>            Assignee: Aaron Davidson
>             Fix For: 1.0.0
>
>
> Cancelling Spark jobs is limited because blocked executor threads are not 
> interrupted. In effect, the cancellation succeeds and the job is no longer 
> "running", but executor threads may remain tied up with the cancelled job, 
> unable to do further work until it completes. This is particularly 
> problematic in the case of deadlock or unlimited/long timeouts.
> It would be useful if cancelling a job also called Thread.interrupt() on its 
> executor threads, which interrupts blocking in most situations, such as 
> waiting on Object monitors or IO. The one caveat is 
> [HDFS-1208|https://issues.apache.org/jira/browse/HDFS-1208], where HDFS's 
> DFSClient will not only swallow an InterruptedException but may reinterpret 
> it as an IOException, causing HDFS to mark a node as permanently 
> failed. Thus, this feature must be optional and probably off by default.
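A minimal JVM sketch of the mechanism the issue proposes (not Spark code; the class and variable names here are illustrative, not from the Spark source): a worker thread blocked in a long wait is unblocked promptly when another thread calls Thread.interrupt() on it, with the cancellation surfacing as an InterruptedException.

```java
// Demonstrates Thread.interrupt() unblocking a thread stuck in a
// blocking call -- the behavior job cancellation would rely on.
public class InterruptDemo {
    public static void main(String[] args) throws Exception {
        final boolean[] sawInterrupt = {false};

        Thread worker = new Thread(() -> {
            try {
                // Simulates an executor thread blocked indefinitely
                // (e.g. on a monitor or a long timeout).
                Thread.sleep(60_000);
            } catch (InterruptedException e) {
                // Cancellation surfaces here instead of waiting a minute.
                sawInterrupt[0] = true;
            }
        });

        worker.start();
        Thread.sleep(100);   // give the worker time to block
        worker.interrupt();  // what cancelling the job would do
        worker.join(5_000);

        System.out.println("worker interrupted: " + sawInterrupt[0]);
    }
}
```

The HDFS caveat above is exactly why this must be opt-in: code in the interrupted thread (such as DFSClient) may mishandle the InterruptedException rather than treating it as a clean cancellation.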



--
This message was sent by Atlassian JIRA
(v6.2#6252)
