[ https://issues.apache.org/jira/browse/SPARK-13333?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15151720#comment-15151720 ]

Liang-Chi Hsieh commented on SPARK-13333:
-----------------------------------------

Yes. I agree that when the user provides a specific seed, the result should be 
deterministic. The problem is that when you have many data partitions, each 
partition ends up with the same random number sequence.

E.g., you have a table with 1000 rows evenly distributed across 5 partitions. If 
the computation for each row involves a random number, rows 201 ~ 400 will be 
processed with the same random numbers as the first 200 rows. The computation is 
then not as random as intended, because every partition shares the same pattern.
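A minimal sketch of that effect (plain RDD code, not the Spark SQL internals; the 
seed value and partition count are just illustrative):

{code}
import scala.util.Random

// 1000 rows spread over 5 partitions
val data = sc.parallelize(1 to 1000, 5)

// If every partition seeds its RNG with the same value, all partitions
// produce the identical sequence of random numbers.
val repeated = data.mapPartitions { iter =>
  val rng = new Random(12345L)             // same seed in every partition
  iter.map(x => (x, rng.nextGaussian()))   // rows 201-400 repeat rows 1-200's values
}

// Mixing the partition index into the seed keeps the result deterministic
// for a given seed while giving each partition a distinct sequence.
val distinct = data.mapPartitionsWithIndex { (idx, iter) =>
  val rng = new Random(12345L + idx)       // per-partition seed
  iter.map(x => (x, rng.nextGaussian()))
}
{code}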

> DataFrame filter + randn + unionAll has bad interaction
> -------------------------------------------------------
>
>                 Key: SPARK-13333
>                 URL: https://issues.apache.org/jira/browse/SPARK-13333
>             Project: Spark
>          Issue Type: Bug
>          Components: SQL
>    Affects Versions: 1.4.2, 1.6.1, 2.0.0
>            Reporter: Joseph K. Bradley
>
> Buggy workflow
> * Create a DataFrame df0
> * Filter df0
> * Add a randn column
> * Create a copy of the DataFrame
> * unionAll the two DataFrames
> This fails: randn produces the same results on the original DataFrame and the 
> copy before unionAll, but fails to do so after unionAll.  Removing the filter 
> fixes the problem.
> The bug can be reproduced on master:
> {code}
> import org.apache.spark.sql.functions.{col, randn}
> val df0 = sqlContext.createDataFrame(Seq(0, 1).map(Tuple1(_))).toDF("id")
> // Removing the following filter() call makes this give the expected result.
> val df1 = df0.filter(col("id") === 0).withColumn("b", randn(12345))
> println("DF1")
> df1.show()
> val df2 = df1.select("id", "b")
> println("DF2")
> df2.show()  // same as df1.show(), as expected
> val df3 = df1.unionAll(df2)
> println("DF3")
> df3.show()  // NOT two copies of df1, which is unexpected
> {code}


