[ https://issues.apache.org/jira/browse/SPARK-13333?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15149053#comment-15149053 ]

Joseph K. Bradley commented on SPARK-13333:
-------------------------------------------

I now have a much more complex example that does not use unionAll, but it 
still involves randn, so I suspect it's the same bug.  If needed, I can 
push a branch, though it's a mess of code.
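
For illustration, a minimal sketch of the shape that repro takes, with a 
self-join standing in for unionAll (hypothetical, not the actual branch):

{code}
import org.apache.spark.sql.functions.{col, randn}

val df0 = sqlContext.createDataFrame(Seq(0, 1).map(Tuple1(_))).toDF("id")
val df1 = df0.filter(col("id") === 0).withColumn("b", randn(12345))
// The self-join plays the role unionAll played in the original repro:
// both sides reference the same filtered, seeded randn column.
val df2 = df1.select(col("id").as("id2"), col("b").as("b2"))
df1.join(df2, col("id") === col("id2")).show()
// If it's the same bug, b and b2 need not match even though both come
// from the same randn(12345) expression.
{code}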

[~smilegator] Thanks for taking a look.  I'll keep watching the JIRA!

> DataFrame filter + randn + unionAll has bad interaction
> -------------------------------------------------------
>
>                 Key: SPARK-13333
>                 URL: https://issues.apache.org/jira/browse/SPARK-13333
>             Project: Spark
>          Issue Type: Bug
>          Components: SQL
>    Affects Versions: 1.4.2, 1.6.1, 2.0.0
>            Reporter: Joseph K. Bradley
>
> Buggy workflow
> * Create a DataFrame df0
> * Filter df0
> * Add a randn column
> * Create a copy of the DataFrame
> * unionAll the two DataFrames
> This fails: randn produces the same results on the original DataFrame 
> and the copy before unionAll, but different results after unionAll. 
> Removing the filter fixes the problem.
> The bug can be reproduced on master:
> {code}
> import org.apache.spark.sql.functions.{col, randn}
> val df0 = sqlContext.createDataFrame(Seq(0, 1).map(Tuple1(_))).toDF("id")
> // Removing the following filter() call makes this give the expected result.
> val df1 = df0.filter(col("id") === 0).withColumn("b", randn(12345))
> println("DF1")
> df1.show()
> val df2 = df1.select("id", "b")
> println("DF2")
> df2.show()  // same as df1.show(), as expected
> val df3 = df1.unionAll(df2)
> println("DF3")
> df3.show()  // NOT two copies of df1, which is unexpected
> {code}
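> A possible workaround sketch (untested; assumes both branches of the 
> union then read the materialized cache instead of recomputing randn):
> {code}
> // Materialize df1 once so the random column is computed a single time
> // and both sides of the union reuse the cached rows.
> val df1Cached = df1.cache()
> df1Cached.count()  // force materialization
> df1Cached.unionAll(df1Cached.select("id", "b")).show()
> {code}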


