[jira] [Assigned] (SPARK-36867) Misleading Error Message with Invalid Column and Group By

2021-10-12 Thread Wenchen Fan (Jira)


 [ https://issues.apache.org/jira/browse/SPARK-36867?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Wenchen Fan reassigned SPARK-36867:
---

Assignee: Wenchen Fan

> Misleading Error Message with Invalid Column and Group By
> -
>
> Key: SPARK-36867
> URL: https://issues.apache.org/jira/browse/SPARK-36867
> Project: Spark
> Issue Type: Bug
> Components: SQL
> Affects Versions: 3.1.2
> Reporter: Alan Jackoway
> Assignee: Wenchen Fan
> Priority: Major
> Fix For: 3.3.0
>
>
> When you run a query that references an invalid column and also groups by a
> constructed column, the error message you get back complains about a missing
> column for the group by rather than about the invalid column.
> You can reproduce this in PySpark 3.1.2 with the following code:
> {code:python}
> from pyspark.sql import SparkSession
>
> spark = SparkSession.builder.appName("Group By Issue").getOrCreate()
> data = spark.createDataFrame(
>     [("2021-09-15", 1), ("2021-09-16", 2), ("2021-09-17", 10), ("2021-09-18", 25),
>      ("2021-09-19", 500), ("2021-09-20", 50), ("2021-09-21", 100)],
>     schema=["d", "v"]
> )
> data.createOrReplaceTempView("data")
>
> # This is valid
> spark.sql("select sum(v) as value, date(date_trunc('week', d)) as week from data group by week").show()
>
> # This is invalid because val is the wrong column name
> spark.sql("select sum(val) as value, date(date_trunc('week', d)) as week from data group by week").show()
> {code}
> The error message for the second spark.sql line is
> {quote}
> pyspark.sql.utils.AnalysisException: cannot resolve '`week`' given input columns: [data.d, data.v]; line 1 pos 81;
> 'Aggregate ['week], ['sum('val) AS value#21, cast(date_trunc(week, cast(d#0 as timestamp), Some(America/New_York)) as date) AS week#22]
> +- SubqueryAlias data
>    +- LogicalRDD [d#0, v#1L], false
> {quote}
> but the actual problem is that I used the wrong column name ({{val}}) in a different
> part of the query. Nothing is wrong with {{week}} in this case.
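> As a quick sanity check (a sketch added here, assuming the same {{data}} view as above, not verified output from the original report): dropping the {{group by}} clause leaves {{val}} as the only name that fails to resolve, so the analyzer error should point at it directly:
> {code:python}
> # Sketch: the same invalid SELECT without the GROUP BY clause.
> # With no grouping, `val` is the only unresolvable name, so the
> # AnalysisException should call out `val` rather than `week`.
> spark.sql("select sum(val) as value, date(date_trunc('week', d)) as week from data").show()
> {code}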



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: issues-unsubscr...@spark.apache.org
For additional commands, e-mail: issues-h...@spark.apache.org



[jira] [Assigned] (SPARK-36867) Misleading Error Message with Invalid Column and Group By

2021-10-11 Thread Apache Spark (Jira)


 [ https://issues.apache.org/jira/browse/SPARK-36867?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Apache Spark reassigned SPARK-36867:


Assignee: (was: Apache Spark)



[jira] [Assigned] (SPARK-36867) Misleading Error Message with Invalid Column and Group By

2021-10-11 Thread Apache Spark (Jira)


 [ https://issues.apache.org/jira/browse/SPARK-36867?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Apache Spark reassigned SPARK-36867:


Assignee: Apache Spark
