[jira] [Commented] (SPARK-25921) Python worker reuse causes Barrier tasks to run without BarrierTaskContext

2018-11-07 - Apache Spark (JIRA)


[ https://issues.apache.org/jira/browse/SPARK-25921?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16677959#comment-16677959 ]

Apache Spark commented on SPARK-25921:
--------------------------------------

User 'xuanyuanking' has created a pull request for this issue:
https://github.com/apache/spark/pull/22962

> Python worker reuse causes Barrier tasks to run without BarrierTaskContext
> ---------------------------------------------------------------------------
>
> Key: SPARK-25921
> URL: https://issues.apache.org/jira/browse/SPARK-25921
> Project: Spark
> Issue Type: Bug
> Components: PySpark, Spark Core
> Affects Versions: 2.4.0
> Reporter: Bago Amirbekian
> Priority: Critical
>
> Running a barrier job after a normal Spark job causes the barrier job to run 
> without a BarrierTaskContext. Here is some code to reproduce the issue.
>  
> {code:python}
> def task(*args):
>     from pyspark import BarrierTaskContext
>     context = BarrierTaskContext.get()
>     context.barrier()
>     print("in barrier phase")
>     context.barrier()
>     return []
>
> # 'sc' is the SparkContext of the PySpark shell. The normal job runs first,
> # so its Python workers can be reused by the barrier job below.
> a = sc.parallelize(list(range(4))).map(lambda x: x ** 2).collect()
> assert a == [0, 1, 4, 9]
> b = sc.parallelize(list(range(4)), 4).barrier().mapPartitions(task).collect()
> {code}
>  
> Here is part of the stack trace:
> {code}
> Py4JJavaError: An error occurred while calling 
> z:org.apache.spark.api.python.PythonRDD.collectAndServe.
> : org.apache.spark.SparkException: Job aborted due to stage failure: Could 
> not recover from a failed barrier ResultStage. Most recent failure reason: 
> Stage failed because barrier task ResultTask(14, 0) finished unsuccessfully.
> org.apache.spark.api.python.PythonException: Traceback (most recent call 
> last):
>   File "/databricks/spark/python/pyspark/worker.py", line 372, in main
> process()
>   File "/databricks/spark/python/pyspark/worker.py", line 367, in process
> serializer.dump_stream(func(split_index, iterator), outfile)
>   File "/databricks/spark/python/pyspark/rdd.py", line 2482, in func
> return f(iterator)
>   File "", line 4, in task
> AttributeError: 'TaskContext' object has no attribute 'barrier'
> {code}
>  
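> The AttributeError fires inside the user function, so one way to fail fast on
> an affected cluster (a hedged sketch using only the public context API, not a
> fix) is to check the context type before the first barrier() call:
> {code:python}
> def task(*args):
>     from pyspark import BarrierTaskContext
>
>     context = BarrierTaskContext.get()
>     # On affected versions, a reused Python worker can hand back the plain
>     # TaskContext cached from an earlier non-barrier job.
>     if not hasattr(context, "barrier"):
>         raise RuntimeError("expected BarrierTaskContext, got %s"
>                            % type(context).__name__)
>     context.barrier()
>     return []
> {code}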



[jira] [Commented] (SPARK-25921) Python worker reuse causes Barrier tasks to run without BarrierTaskContext

2018-11-01 - Xiangrui Meng (JIRA)


[ https://issues.apache.org/jira/browse/SPARK-25921?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16672350#comment-16672350 ]

Xiangrui Meng commented on SPARK-25921:
---------------------------------------

Tested offline. This was caused by PySpark worker reuse. A workaround is to 
disable worker reuse (set spark.python.worker.reuse=false). This is not a 
blocker for 2.4, but we should list it as a known issue and fix it in 2.4.1. 
cc: [~cloud_fan] [~smilegator]
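
For reference, a minimal sketch of that workaround (spark.python.worker.reuse 
defaults to true; this assumes you create the SparkContext yourself rather 
than use a preconfigured shell):

{code:python}
from pyspark import SparkConf, SparkContext

# Disable Python worker reuse so a barrier task never inherits the plain
# TaskContext cached by a worker that previously ran a non-barrier job.
conf = SparkConf().set("spark.python.worker.reuse", "false")
sc = SparkContext(conf=conf)
{code}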




[jira] [Commented] (SPARK-25921) Python worker reuse causes Barrier tasks to run without BarrierTaskContext

2018-11-01 - Bago Amirbekian (JIRA)


[ https://issues.apache.org/jira/browse/SPARK-25921?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16672293#comment-16672293 ]

Bago Amirbekian commented on SPARK-25921:
-----------------------------------------

[~mengxr] [~jiangxb1987] Could you have a look?
