[ https://issues.apache.org/jira/browse/SPARK-20246?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]
Weiluo Ren updated SPARK-20246:
-------------------------------
    Description: 
`import org.apache.spark.sql.functions._`
`spark.range(1,1000).distinct.withColumn("random", rand()).filter(col("random") > 0.3).orderBy("random").show`

gives a wrong result. The optimized logical plan shows that the filter with the non-deterministic predicate is pushed beneath the aggregate operator, which should not happen.

cc [~lian cheng]

  was:
`import org.apache.spark.sql.functions._`
`spark.range(1,1000).distinct.withColumn("random", rand()).filter(col("random") > 0.3).orderBy("random").show`

gives a wrong result. The optimized logical plan shows that the filter with the non-deterministic predicate is pushed beneath the aggregate operator, which should not happen.

cc [~lian cheng]

> Should check determinism when pushing predicates down through aggregation
> -------------------------------------------------------------------------
>
>                 Key: SPARK-20246
>                 URL: https://issues.apache.org/jira/browse/SPARK-20246
>             Project: Spark
>          Issue Type: Bug
>          Components: SQL
>    Affects Versions: 2.1.0
>            Reporter: Weiluo Ren
>
> `import org.apache.spark.sql.functions._`
> `spark.range(1,1000).distinct.withColumn("random", rand()).filter(col("random") > 0.3).orderBy("random").show`
>
> gives a wrong result.
>
> The optimized logical plan shows that the filter with the
> non-deterministic predicate is pushed beneath the aggregate operator, which
> should not happen.
>
> cc [~lian cheng]

--
This message was sent by Atlassian JIRA
(v6.3.15#6346)
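The unsoundness can be illustrated without Spark. The sketch below is a plain-Python simulation of the two query plans (the function names and seeds are invented for illustration, and a seeded `random.Random` stands in for `rand()`): in the correct plan, `rand()` is evaluated once per row above the aggregate, so every surviving row satisfies the predicate; in the pushed-down plan, the predicate consumes one draw of the generator below the aggregate while the "random" column the user sees is re-evaluated afterwards with fresh draws, so rows violating `random > 0.3` leak into the output.

```python
import random

def correct_plan(rows, seed):
    """distinct first, then evaluate rand() once per row and filter on it."""
    rng = random.Random(seed)
    kept = []
    for x in sorted(set(rows)):          # the aggregate (distinct)
        r = rng.random()                 # rand() evaluated once, above it
        if r > 0.3:                      # filter sees the same value as the output
            kept.append((x, r))
    return kept

def pushed_down_plan(rows, seed_below, seed_above):
    """the buggy plan: the non-deterministic predicate runs below the
    aggregate, and the 'random' column is re-evaluated above it."""
    rng_below = random.Random(seed_below)
    prefiltered = [x for x in rows if rng_below.random() > 0.3]  # pushed-down filter
    rng_above = random.Random(seed_above)                        # second evaluation
    return [(x, rng_above.random()) for x in sorted(set(prefiltered))]

rows = list(range(1, 1000))

# Every row of the correct plan satisfies the predicate.
assert all(r > 0.3 for _, r in correct_plan(rows, seed=42))

# The pushed-down plan emits rows whose 'random' column violates the filter.
bad = pushed_down_plan(rows, seed_below=42, seed_above=7)
violations = sum(1 for _, r in bad if r <= 0.3)
print("rows violating the filter in the pushed-down plan:", violations)
```

This is why the optimizer must check `deterministic` before pushing a predicate through an aggregation: re-evaluating a non-deterministic expression on either side of the aggregate yields different values.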