[ https://issues.apache.org/jira/browse/SPARK-42346?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17685466#comment-17685466 ]
Ritika Maheshwari commented on SPARK-42346:
-------------------------------------------

Hello, I added three rows to input_table. Still no error. I do have DPP enabled.

*********************************************************************

Using Scala version 2.12.15 (Java HotSpot(TM) 64-Bit Server VM, Java 12.0.2)
Type in expressions to have them evaluated.
Type :help for more information.

scala> val df = Seq(("a","b"),("c","d"),("e","f")).toDF("surname","first_name")
df: org.apache.spark.sql.DataFrame = [surname: string, first_name: string]

scala> df.createOrReplaceTempView("input_table")

scala> spark.sql("select (Select Count(Distinct first_name) from input_table) As distinct_value_count from input_table Union all select (select count(Distinct surname) from input_table) as distinct_value_count from input_table").show()
+--------------------+
|distinct_value_count|
+--------------------+
|                   3|
|                   3|
|                   3|
|                   3|
|                   3|
|                   3|
+--------------------+

**************************************************************

AdaptiveSparkPlan isFinalPlan=false
+- Union
   :- Project [cast(Subquery subquery#145, [id=#571] as string) AS distinct_value_count#161]
   :  :  +- Subquery subquery#145, [id=#571]
   :  :     +- AdaptiveSparkPlan isFinalPlan=false
   :  :        +- HashAggregate(keys=[], functions=[count(distinct first_name#8)], output=[count(DISTINCT first_name)#152L])
   :  :           +- Exchange SinglePartition, ENSURE_REQUIREMENTS, [id=#569]
   :  :              +- HashAggregate(keys=[], functions=[partial_count(distinct first_name#8)], output=[count#167L])
   :  :                 +- HashAggregate(keys=[first_name#8], functions=[], output=[first_name#8])
   :  :                    +- Exchange hashpartitioning(first_name#8, 200), ENSURE_REQUIREMENTS, [id=#565]
   :  :                       +- HashAggregate(keys=[first_name#8], functions=[], output=[first_name#8])
   :  :                          +- LocalTableScan [first_name#8]
   :  +- LocalTableScan [_1#2, _2#3]
   +- Project [cast(Subquery subquery#147, [id=#590] as string) AS distinct_value_count#163]
      :  +- Subquery subquery#147, [id=#590]
      :     +- AdaptiveSparkPlan isFinalPlan=false
      :        +- HashAggregate(keys=[], functions=[count(distinct surname#7)], output=[count(DISTINCT surname)#154L])
      :           +- Exchange SinglePartition, ENSURE_REQUIREMENTS, [id=#588]
      :              +- HashAggregate(keys=[], functions=[partial_count(distinct surname#7)], output=[count#170L])
      :                 +- HashAggregate(keys=[surname#7], functions=[], output=[surname#7])
      :                    +- Exchange hashpartitioning(surname#7, 200), ENSURE_REQUIREMENTS, [id=#584]
      :                       +- HashAggregate(keys=[surname#7], functions=[], output=[surname#7])
      :                          +- LocalTableScan [surname#7]
      +- LocalTableScan [_1#149, _2#150]
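
For anyone else trying to reproduce this: a minimal sanity-check sketch for the session settings the repro depends on (assuming a plain spark-shell; both config keys below are the stock Spark 3.x ones):

// The bug affects 3.3.0 and is fixed in 3.3.2 / 3.4.0, so a session on a
// fixed build will not reproduce it; confirm the version first.
println(spark.version)

// The flags behind DPP and AQE, both of which shape the plan printed above.
println(spark.conf.get("spark.sql.optimizer.dynamicPartitionPruning.enabled"))
println(spark.conf.get("spark.sql.adaptive.enabled"))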
> distinct(count colname) with UNION ALL causes query analyzer bug
> ----------------------------------------------------------------
>
>                 Key: SPARK-42346
>                 URL: https://issues.apache.org/jira/browse/SPARK-42346
>             Project: Spark
>          Issue Type: Bug
>          Components: SQL
>    Affects Versions: 3.3.0, 3.4.0, 3.5.0
>            Reporter: Robin
>            Assignee: Peter Toth
>            Priority: Major
>             Fix For: 3.3.2, 3.4.0, 3.5.0
>
>
> If you combine a UNION ALL with a count(distinct colname), you get a query analyzer bug.
>
> This behaviour was introduced in 3.3.0; the bug was not present in 3.2.1.
>
> Here is a reprex in PySpark:
>
> import pandas as pd
>
> df_pd = pd.DataFrame([
>     {'surname': 'a', 'first_name': 'b'}
> ])
> df_spark = spark.createDataFrame(df_pd)
> df_spark.createOrReplaceTempView("input_table")
> sql = """
> SELECT
>   (SELECT Count(DISTINCT first_name) FROM input_table)
>   AS distinct_value_count
> FROM input_table
> UNION ALL
> SELECT
>   (SELECT Count(DISTINCT surname) FROM input_table)
>   AS distinct_value_count
> FROM input_table """
> spark.sql(sql).toPandas()
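
Until an upgrade to one of the fix versions (3.3.2 / 3.4.0) is possible, a possible workaround (a sketch only, not taken from this ticket) is to avoid the scalar-subquery-under-UNION-ALL shape that triggers the bug, by computing each distinct count in a derived table and attaching it with a CROSS JOIN:

// Hypothetical workaround sketch: each single-row derived table is paired
// with every row of input_table, so this returns the same rows as the
// reprex above without putting a scalar subquery under the UNION ALL.
val workaround = spark.sql("""
  SELECT c.distinct_value_count
  FROM input_table
  CROSS JOIN (SELECT COUNT(DISTINCT first_name) AS distinct_value_count
              FROM input_table) c
  UNION ALL
  SELECT c.distinct_value_count
  FROM input_table
  CROSS JOIN (SELECT COUNT(DISTINCT surname) AS distinct_value_count
              FROM input_table) c
""")
workaround.show()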