[ https://issues.apache.org/jira/browse/SPARK-5191?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Josh Rosen resolved SPARK-5191.
-------------------------------
    Resolution: Not a Problem

I'm going to resolve this as Not a Problem, since the problem lies with the
user code rather than Spark itself: a.py launches a Spark job at module top
level, so importing it from b.py re-runs the job at import time. We might be
able to work around this, but we can't guarantee that invalid user programs
will work correctly.
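For reference, the usual fix on the user side is to put the job behind a main
guard so that importing a.py no longer runs it. A minimal sketch (a plain-Python
stand-in for the Spark job, since the pattern itself is not Spark-specific):

```python
# a.py -- sketch of the reporter's script with a __main__ guard added, so that
# "from a import *" in b.py no longer launches the job at import time.

def run_job():
    # Stand-in for the Spark job; with PySpark this is where you would create
    # the SparkContext, parallelize range(1, 10), and call count() on the RDD.
    return len(range(1, 10))

if __name__ == "__main__":
    # Runs only under "python a.py" (or spark-submit a.py), not on import.
    print(run_job())
```

With the guard in place, importing a.py only pulls in the definitions, while
running it as a script still prints the count (9).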

> Pyspark: scheduler hangs when importing a standalone pyspark app
> ----------------------------------------------------------------
>
>                 Key: SPARK-5191
>                 URL: https://issues.apache.org/jira/browse/SPARK-5191
>             Project: Spark
>          Issue Type: Bug
>          Components: PySpark, Scheduler
>    Affects Versions: 1.0.2, 1.1.1, 1.3.0, 1.2.1
>            Reporter: Daniel Liu
>
> In a.py:
> {code}
> from pyspark import SparkContext
> sc = SparkContext("local", "test spark")
> rdd = sc.parallelize(range(1, 10))
> print rdd.count()
> {code}
> In b.py:
> {code}
> from a import *
> {code}
> {{python a.py}} runs fine
> {{python b.py}} will hang at {{TaskSchedulerImpl: Removed TaskSet 0.0, whose 
> tasks have all completed, from pool}}
> {{./bin/spark-submit --py-files a.py b.py}} has the same problem



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)
