[ https://issues.apache.org/jira/browse/SPARK-1498?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Xiangrui Meng updated SPARK-1498:
---------------------------------

    Target Version/s: 0.9.3

> Spark can hang if pyspark tasks fail
> ------------------------------------
>
>                 Key: SPARK-1498
>                 URL: https://issues.apache.org/jira/browse/SPARK-1498
>             Project: Spark
>          Issue Type: Bug
>    Affects Versions: 0.9.0
>            Reporter: Kay Ousterhout
>
> In pyspark, when certain kinds of jobs fail, Spark hangs rather than returning 
> an error.  This is partially a scheduler problem -- the scheduler sometimes 
> marks failed tasks as successful even though they produced a stack trace and 
> an exception.
> You can reproduce this problem with:
> ardd = sc.parallelize([(1,2,3), (4,5,6)])
> brdd = sc.parallelize([(1,2,6), (4,5,9)])
> ardd.join(brdd).count()
> The last line runs forever (the problem in this code is that the RDD entries 
> have 3 values instead of the 2-element (key, value) pairs that join expects).  
> I haven't verified whether this affects 1.0 as well as 0.9.
> Thanks to Shivaram for helping diagnose this issue!
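
For reference, a minimal sketch of the well-formed version of the repro above (assuming the intent was to key each record on its first element; the SparkContext setup is illustrative, not part of the original report):

from pyspark import SparkContext

sc = SparkContext("local", "spark-1498-repro")

# join() operates on key-value pairs, i.e. 2-tuples (key, value)
ardd = sc.parallelize([(1, 2), (4, 5)])
brdd = sc.parallelize([(1, 6), (4, 9)])

# Joins on the first element; yields (1, (2, 6)) and (4, (5, 9)), so count() is 2
print(ardd.join(brdd).count())

With well-formed pairs the join completes normally; the hang reported here only appears when the malformed 3-tuples make the tasks fail.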



--
This message was sent by Atlassian JIRA
(v6.2#6252)
