[ https://issues.apache.org/jira/browse/SPARK-1235?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Matei Zaharia updated SPARK-1235:
---------------------------------

    Affects Version/s:     (was: 1.0.0)

> DAGScheduler ignores exceptions thrown in handleTaskCompletion
> --------------------------------------------------------------
>
>                 Key: SPARK-1235
>                 URL: https://issues.apache.org/jira/browse/SPARK-1235
>             Project: Spark
>          Issue Type: Bug
>    Affects Versions: 0.9.0, 0.9.1
>            Reporter: Kay Ousterhout
>            Assignee: Nan Zhu
>            Priority: Blocker
>             Fix For: 1.0.0
>
>
> If an exception gets thrown in the handleTaskCompletion method, the method 
> exits, but the exception is caught somewhere (it is not clear where) and the 
> DAGScheduler keeps running.  Jobs hang as a result, because not all of the 
> task completion code gets run.
> This was first reported by Brad Miller on the mailing list: 
> http://apache-spark-user-list.1001560.n3.nabble.com/Fwd-pyspark-crash-on-mesos-td2256.html
> This behavior seems to have changed since 0.8 (when, based on Brad's 
> description, it sounds like an exception in handleTaskCompletion would cause 
> the DAGScheduler to crash), suggesting that this may be related to the 
> upgrade to Scala 2.10.3.
> To reproduce this problem, add "throw new Exception("foo")" anywhere in 
> handleTaskCompletion and run any job locally.  The job will hang and you can 
> see the exception get printed in the logs.
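The failure mode described above can be illustrated with a minimal, self-contained sketch. This is not actual Spark code: it is a hypothetical event loop whose blanket catch swallows an exception thrown by a handler, so the loop keeps running but the completion signal the caller is waiting on is never delivered, and the "job" hangs (here bounded by a timeout for demonstration).

```java
import java.util.concurrent.CountDownLatch;
import java.util.concurrent.LinkedBlockingQueue;
import java.util.concurrent.TimeUnit;

public class SwallowedExceptionDemo {
    static final LinkedBlockingQueue<String> events = new LinkedBlockingQueue<>();
    static final CountDownLatch jobDone = new CountDownLatch(1);

    // Stand-in for a handleTaskCompletion-style handler with the injected failure.
    static void handleTaskCompletion(String event) {
        if (true) throw new RuntimeException("foo"); // simulated "throw new Exception(\"foo\")"
        jobDone.countDown(); // never reached: the job is never marked finished
    }

    // Returns whether the job completed within the timeout (it will not).
    static boolean runDemo() throws Exception {
        Thread loop = new Thread(() -> {
            while (true) {
                try {
                    handleTaskCompletion(events.take());
                } catch (Throwable t) {
                    // Blanket catch: the exception is logged and silently dropped,
                    // and the event loop keeps running.
                    System.out.println("swallowed: " + t.getMessage());
                }
            }
        });
        loop.setDaemon(true);
        loop.start();

        events.put("task-0 finished");
        // The caller waits for a completion signal that never arrives.
        return jobDone.await(1, TimeUnit.SECONDS);
    }

    public static void main(String[] args) throws Exception {
        System.out.println("job finished: " + runDemo());
    }
}
```

Running this prints "swallowed: foo" followed by "job finished: false": the loop survives the exception, but the waiter hangs until its timeout, mirroring the hung jobs reported here.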



--
This message was sent by Atlassian JIRA
(v6.2#6252)