[ https://issues.apache.org/jira/browse/SPARK-5363?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Josh Rosen resolved SPARK-5363.
-------------------------------
       Resolution: Fixed
    Fix Version/s: 1.4.0
                   1.2.2
                   1.3.0

I've merged https://github.com/apache/spark/pull/4776, which fixes one of our 
reproductions of this issue (our job added and removed broadcast variables in a 
pattern that could trigger the bug fixed by that patch; a rough sketch of that 
pattern is below).  Therefore, I'm marking this issue as "Resolved", but please 
comment here if you still observe the issue after this latest patch.
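
For reference, a minimal sketch of the kind of broadcast add/remove churn
described above (the dataset, sizes, and names here are purely illustrative,
not our actual job):

    from pyspark import SparkContext

    sc = SparkContext(appName="broadcast-churn-sketch")  # illustrative app name
    data = sc.parallelize(range(10000), 8)

    for i in range(200):
        # add a large broadcast variable for this iteration
        lookup = sc.broadcast({k: k * i for k in range(1000000)})

        # use the broadcast inside the map, then collect on the driver
        result = data.map(lambda x: x + lookup.value.get(x, 0)).collect()

        # remove the broadcast again before the next iteration
        lookup.unpersist()

    sc.stop()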

> Spark 1.2 freeze without error notification
> -------------------------------------------
>
>                 Key: SPARK-5363
>                 URL: https://issues.apache.org/jira/browse/SPARK-5363
>             Project: Spark
>          Issue Type: Bug
>          Components: PySpark
>    Affects Versions: 1.2.0, 1.3.0, 1.2.1
>            Reporter: Tassilo Klein
>            Assignee: Davies Liu
>            Priority: Blocker
>             Fix For: 1.3.0, 1.2.2, 1.4.0
>
>
> After a number of calls to a map().collect() statement, Spark freezes without 
> reporting any error.  Within the map a large broadcast variable is used.
> The freeze can be avoided by setting 'spark.python.worker.reuse = false' 
> (Spark 1.2) or by using an earlier Spark version, though at the price of much lower speed. 
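
For anyone hitting this before upgrading, a minimal sketch of the workaround
mentioned in the report, i.e. disabling Python worker reuse when building the
context (the app name and the rest of the setup are illustrative; the same
property can also be passed to spark-submit via
--conf spark.python.worker.reuse=false):

    from pyspark import SparkConf, SparkContext

    # workaround: disable Python worker reuse; slower, but avoids the freeze
    conf = (SparkConf()
            .setAppName("worker-reuse-workaround")   # illustrative name
            .set("spark.python.worker.reuse", "false"))
    sc = SparkContext(conf=conf)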



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)
