[ https://issues.apache.org/jira/browse/SPARK-13691?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15183438#comment-15183438 ]
Bryan Cutler commented on SPARK-13691:
--------------------------------------

The reason for this is that PySpark serializes the closure (including the variables it depends on) into a command and then uses that command to construct a {{PythonRDD}}, which sends it to a Python worker on {{RDD.compute}}. Once the closure has been serialized, the captured values are fixed; later changes to driver-side variables are not seen.

> Scala and Python generate inconsistent results
> ----------------------------------------------
>
>                 Key: SPARK-13691
>                 URL: https://issues.apache.org/jira/browse/SPARK-13691
>             Project: Spark
>          Issue Type: Bug
>          Components: PySpark
>    Affects Versions: 1.4.1, 1.5.2, 1.6.0
>            Reporter: Shixiong Zhu
>
> Here is an example where Scala and Python generate different results:
> {code}
> Scala:
>
> scala> var i = 0
> i: Int = 0
>
> scala> val rdd = sc.parallelize(1 to 10).map(_ + i)
>
> scala> rdd.collect()
> res0: Array[Int] = Array(1, 2, 3, 4, 5, 6, 7, 8, 9, 10)
>
> scala> i += 1
>
> scala> rdd.collect()
> res2: Array[Int] = Array(2, 3, 4, 5, 6, 7, 8, 9, 10, 11)
>
> Python:
>
> >>> i = 0
> >>> rdd = sc.parallelize(range(1, 10)).map(lambda x: x + i)
> >>> rdd.collect()
> [1, 2, 3, 4, 5, 6, 7, 8, 9]
> >>> i += 1
> >>> rdd.collect()
> [1, 2, 3, 4, 5, 6, 7, 8, 9]
> {code}
> The difference is that Scala captures the variables' current values every time a job runs, whereas Python captures them once and reuses those values for all subsequent jobs.
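To illustrate the serialization point in the comment above, here is a minimal sketch, not PySpark's actual internals: the standalone {{cloudpickle}} package stands in for the copy bundled with PySpark, and the names {{f}}, {{g}}, and {{command}} are illustrative only. It shows that serializing a closure freezes the captured value of {{i}}, matching the Python behavior in the report.

{code}
# A minimal sketch, not PySpark's internals: the standalone cloudpickle
# package stands in for the copy bundled with PySpark. It shows that
# serializing a closure freezes the captured value of i.
import pickle
import cloudpickle

i = 0
f = lambda x: x + i

# Serialize the closure, roughly as PySpark does when it builds the
# command for a PythonRDD. The current value of i (0) is baked in.
command = cloudpickle.dumps(f)

i += 1

g = pickle.loads(command)
print(f(1))  # 2: the live closure sees the updated i
print(g(1))  # 1: the deserialized closure still sees i == 0
{code}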
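Given that behavior, one workaround sketch based on the report's own example: if Scala-like results are wanted, re-apply the transformation after changing the variable so the closure is pickled again with the new value. This assumes {{sc}} is an active SparkContext.

{code}
# Workaround sketch, assuming sc is an active SparkContext: rebuild the
# mapped RDD after changing the driver-side variable, forcing the
# closure to be re-serialized with the new value.
i = 0
base = sc.parallelize(range(1, 10))

rdd = base.map(lambda x: x + i)
print(rdd.collect())  # [1, 2, 3, 4, 5, 6, 7, 8, 9]

i += 1
rdd = base.map(lambda x: x + i)  # re-pickles the closure with i == 1
print(rdd.collect())  # [2, 3, 4, 5, 6, 7, 8, 9, 10]
{code}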