[jira] [Updated] (SPARK-39376) Do not output duplicated columns in star expansion of subquery alias of NATURAL/USING JOIN

2022-06-03 Thread Karen Feng (Jira)


 [ 
https://issues.apache.org/jira/browse/SPARK-39376?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Karen Feng updated SPARK-39376:
---
Description: 
A bug was introduced in https://issues.apache.org/jira/browse/SPARK-34527 such 
that the duplicated columns within a NATURAL/USING JOIN are output from the 
qualified star expansion of a subquery alias. For example:

{code:java}
val df1 = Seq((3, 8)).toDF("a", "b") 
val df2 = Seq((8, 7)).toDF("b", "d") 
val joinDF = df1.join(df2, "b")
joinDF.alias("r").select("r.*")
{code}

This outputs two duplicate `b` columns instead of just one.
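As a minimal sketch of the expected behavior (illustrative Python, not Spark's actual resolution code): for a USING/NATURAL join, star expansion of a subquery alias should list each join key once, followed by the remaining columns of each side.

```python
# Illustrative sketch of star-expansion column ordering for a USING join.
def star_expansion(left_cols, right_cols, using_keys):
    # Join keys appear once, then non-key columns from the left, then the right.
    keys = [c for c in left_cols if c in using_keys]
    rest_left = [c for c in left_cols if c not in using_keys]
    rest_right = [c for c in right_cols if c not in using_keys]
    return keys + rest_left + rest_right

# df1(a, b) JOIN df2(b, d) USING (b) -> expected: b, a, d (no duplicate b)
print(star_expansion(["a", "b"], ["b", "d"], {"b"}))  # ['b', 'a', 'd']
```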

  was:
A bug was introduced in https://issues.apache.org/jira/browse/SPARK-34527 such 
that the duplicated columns within a NATURAL/USING JOIN were output from the 
qualified star of a subquery alias. For example:

```
val df1 = Seq((3, 8)).toDF("a", "b") 
val df2 = Seq((8, 7)).toDF("b", "d") 
val joinDF = df1.join(df2, "b")
joinDF.alias("r").select("r.*")
```

Output two duplicate `b` columns.


> Do not output duplicated columns in star expansion of subquery alias of 
> NATURAL/USING JOIN
> --
>
> Key: SPARK-39376
> URL: https://issues.apache.org/jira/browse/SPARK-39376
> Project: Spark
>  Issue Type: Bug
>  Components: SQL
>Affects Versions: 3.2.0
>Reporter: Karen Feng
>Priority: Major
>
> A bug was introduced in https://issues.apache.org/jira/browse/SPARK-34527 
> such that the duplicated columns within a NATURAL/USING JOIN were output from 
> the qualified star of a subquery alias. For example:
> {code:java}
> val df1 = Seq((3, 8)).toDF("a", "b") 
> val df2 = Seq((8, 7)).toDF("b", "d") 
> val joinDF = df1.join(df2, "b")
> joinDF.alias("r").select("r.*")
> {code}
> Outputs two duplicate `b` columns, instead of just one.



--
This message was sent by Atlassian Jira
(v8.20.7#820007)

-
To unsubscribe, e-mail: issues-unsubscr...@spark.apache.org
For additional commands, e-mail: issues-h...@spark.apache.org



[jira] [Created] (SPARK-39376) Do not output duplicated columns in star expansion of subquery alias of NATURAL/USING JOIN

2022-06-03 Thread Karen Feng (Jira)
Karen Feng created SPARK-39376:
--

 Summary: Do not output duplicated columns in star expansion of 
subquery alias of NATURAL/USING JOIN
 Key: SPARK-39376
 URL: https://issues.apache.org/jira/browse/SPARK-39376
 Project: Spark
  Issue Type: Bug
  Components: SQL
Affects Versions: 3.2.0
Reporter: Karen Feng


A bug was introduced in https://issues.apache.org/jira/browse/SPARK-34527 such 
that the duplicated columns within a NATURAL/USING JOIN were output from the 
qualified star of a subquery alias. For example:

```
val df1 = Seq((3, 8)).toDF("a", "b") 
val df2 = Seq((8, 7)).toDF("b", "d") 
val joinDF = df1.join(df2, "b")
joinDF.alias("r").select("r.*")
```

Output two duplicate `b` columns.






[jira] [Created] (SPARK-39261) Improve newline formatting for error messages

2022-05-23 Thread Karen Feng (Jira)
Karen Feng created SPARK-39261:
--

 Summary: Improve newline formatting for error messages
 Key: SPARK-39261
 URL: https://issues.apache.org/jira/browse/SPARK-39261
 Project: Spark
  Issue Type: Improvement
  Components: Spark Core
Affects Versions: 3.3.0
Reporter: Karen Feng


Error messages in the JSON file should not contain literal newline characters; 
multi-line messages are represented as separate elements of the message array. 
We should verify that no literal newlines exist and improve the formatting of 
the file.
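A hedged sketch of such a check (the JSON shape is assumed from the description, not Spark's actual test): each message is an array of lines, and no element may contain a literal newline.

```python
import json

# Hypothetical error-class JSON: multi-line messages are arrays of lines.
error_classes = json.loads("""
{
  "GOOD_CLASS": {"message": ["Line one.", "Line two."]},
  "BAD_CLASS": {"message": ["line one\\nline two"]}
}
""")

def classes_without_newlines(classes):
    # Keep only classes whose message elements contain no literal newline.
    return {name for name, info in classes.items()
            if all("\n" not in line for line in info["message"])}

print(classes_without_newlines(error_classes))  # {'GOOD_CLASS'}
```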






[jira] [Created] (SPARK-37859) SQL tables created with JDBC with Spark 3.1 are not readable with 3.2

2022-01-10 Thread Karen Feng (Jira)
Karen Feng created SPARK-37859:
--

 Summary: SQL tables created with JDBC with Spark 3.1 are not 
readable with 3.2
 Key: SPARK-37859
 URL: https://issues.apache.org/jira/browse/SPARK-37859
 Project: Spark
  Issue Type: Bug
  Components: SQL
Affects Versions: 3.2.0
Reporter: Karen Feng


In 
https://github.com/apache/spark/blob/bd24b4884b804fc85a083f82b864823851d5980c/sql/core/src/main/scala/org/apache/spark/sql/execution/datasources/jdbc/JdbcUtils.scala#L312,
 a new metadata field is added during reading. Because we do a full comparison 
of the user-provided schema and the actual schema in 
https://github.com/apache/spark/blob/bd24b4884b804fc85a083f82b864823851d5980c/sql/core/src/main/scala/org/apache/spark/sql/execution/datasources/DataSource.scala#L356,
 resolution fails when a table created with Spark 3.1 is read with Spark 3.2.
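An illustrative sketch of the failure mode (field and metadata names here are assumed for illustration, not Spark's actual ones): a strict equality check over schema fields breaks as soon as one side carries an extra metadata entry added at read time.

```python
# Sketch: comparing schemas with and without metadata taken into account.
def schemas_match(expected, actual, ignore_metadata=True):
    if len(expected) != len(actual):
        return False
    for e, a in zip(expected, actual):
        # Name and type must always agree.
        if (e["name"], e["type"]) != (a["name"], a["type"]):
            return False
        # A strict comparison also requires identical metadata.
        if not ignore_metadata and e.get("metadata", {}) != a.get("metadata", {}):
            return False
    return True

written = [{"name": "id", "type": "bigint", "metadata": {}}]
# Hypothetical metadata key injected by the reader in the newer version.
read_back = [{"name": "id", "type": "bigint", "metadata": {"readerFlag": True}}]

print(schemas_match(written, read_back))                         # True
print(schemas_match(written, read_back, ignore_metadata=False))  # False
```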






[jira] [Created] (SPARK-37092) Add Spark error classes to error message and enforce test coverage

2021-10-21 Thread Karen Feng (Jira)
Karen Feng created SPARK-37092:
--

 Summary: Add Spark error classes to error message and enforce test 
coverage
 Key: SPARK-37092
 URL: https://issues.apache.org/jira/browse/SPARK-37092
 Project: Spark
  Issue Type: Improvement
  Components: Spark Core
Affects Versions: 3.3.0
Reporter: Karen Feng


We should add Spark error classes to the error message and ensure that all 
error classes are covered by tests. This will help us understand the error 
cases, remove dead code, and improve the error messages and classes as we 
refactor them.






[jira] [Created] (SPARK-36943) Improve error message for missing column

2021-10-06 Thread Karen Feng (Jira)
Karen Feng created SPARK-36943:
--

 Summary: Improve error message for missing column
 Key: SPARK-36943
 URL: https://issues.apache.org/jira/browse/SPARK-36943
 Project: Spark
  Issue Type: Improvement
  Components: Spark Core, SQL
Affects Versions: 3.3.0
Reporter: Karen Feng


Improve the error message shown when a user references a column that does not 
exist.
Today, the message is "cannot resolve 'foo' given input columns [bar, baz, 
froo]".
We should sort the suggestion list by similarity to the missing column name and 
improve the grammar to remove jargon like "resolve."
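A sketch of similarity-based suggestion ordering, using `difflib` from the Python standard library as a stand-in for whatever distance metric Spark ultimately chooses:

```python
import difflib

# Rank candidate columns by string similarity to the missing name.
def suggestions(missing, candidates):
    return sorted(
        candidates,
        key=lambda c: difflib.SequenceMatcher(None, missing, c).ratio(),
        reverse=True,
    )

# 'froo' is the closest match to 'foo', so it should be suggested first.
print(suggestions("foo", ["bar", "baz", "froo"]))
```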






[jira] [Created] (SPARK-36870) Introduce INTERNAL_ERROR error class

2021-09-27 Thread Karen Feng (Jira)
Karen Feng created SPARK-36870:
--

 Summary: Introduce INTERNAL_ERROR error class
 Key: SPARK-36870
 URL: https://issues.apache.org/jira/browse/SPARK-36870
 Project: Spark
  Issue Type: Improvement
  Components: Spark Core
Affects Versions: 3.3.0
Reporter: Karen Feng


Introduce the INTERNAL_ERROR error class. It will be used to determine whether 
an exception is an internal error, helping both end users and developers decide 
whether an issue should be reported.
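A minimal sketch of how such a class could be consumed (class and field names here are assumed, not Spark's actual API): with a dedicated INTERNAL_ERROR class, callers can programmatically decide whether an exception indicates a Spark bug worth reporting.

```python
# Hypothetical throwable carrying an error class, mirroring SparkThrowable.
class SparkThrowableSketch(Exception):
    def __init__(self, error_class, message):
        super().__init__(message)
        self.error_class = error_class

def is_internal_error(exc):
    # Only exceptions tagged INTERNAL_ERROR indicate a bug in Spark itself.
    return getattr(exc, "error_class", None) == "INTERNAL_ERROR"

print(is_internal_error(SparkThrowableSketch("INTERNAL_ERROR", "bad state")))  # True
print(is_internal_error(ValueError("bad user input")))                         # False
```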






[jira] [Updated] (SPARK-36687) Rename error classes with _ERROR suffix

2021-09-07 Thread Karen Feng (Jira)


 [ 
https://issues.apache.org/jira/browse/SPARK-36687?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Karen Feng updated SPARK-36687:
---
Parent: SPARK-36094
Issue Type: Sub-task  (was: Task)

> Rename error classes with _ERROR suffix
> ---
>
> Key: SPARK-36687
> URL: https://issues.apache.org/jira/browse/SPARK-36687
> Project: Spark
>  Issue Type: Sub-task
>  Components: Spark Core, SQL
>Affects Versions: 3.2.1
>Reporter: Karen Feng
>Priority: Trivial
>
> Clean up error classes with the redundant _ERROR suffix to reduce clutter, 
> such as 
> [CONCURRENT_QUERY_ERROR|https://github.com/apache/spark/blob/f78d8394dcf19891141e353ea3b6a76020faf844/core/src/main/resources/error/error-classes.json#L6].






[jira] [Created] (SPARK-36687) Rename error classes with _ERROR suffix

2021-09-07 Thread Karen Feng (Jira)
Karen Feng created SPARK-36687:
--

 Summary: Rename error classes with _ERROR suffix
 Key: SPARK-36687
 URL: https://issues.apache.org/jira/browse/SPARK-36687
 Project: Spark
  Issue Type: Task
  Components: Spark Core, SQL
Affects Versions: 3.2.1
Reporter: Karen Feng


Clean up error classes with the redundant _ERROR suffix to reduce clutter, such 
as 
[CONCURRENT_QUERY_ERROR|https://github.com/apache/spark/blob/f78d8394dcf19891141e353ea3b6a76020faf844/core/src/main/resources/error/error-classes.json#L6].
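A purely illustrative sketch of the renaming rule: strip the redundant `_ERROR` suffix while leaving other class names untouched.

```python
# Drop a trailing _ERROR suffix from an error class name, if present.
def strip_error_suffix(name):
    return name[: -len("_ERROR")] if name.endswith("_ERROR") else name

print(strip_error_suffix("CONCURRENT_QUERY_ERROR"))  # CONCURRENT_QUERY
print(strip_error_suffix("DIVIDE_BY_ZERO"))          # DIVIDE_BY_ZERO
```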






[jira] [Created] (SPARK-36405) Check that error class SQLSTATEs are valid

2021-08-03 Thread Karen Feng (Jira)
Karen Feng created SPARK-36405:
--

 Summary: Check that error class SQLSTATEs are valid
 Key: SPARK-36405
 URL: https://issues.apache.org/jira/browse/SPARK-36405
 Project: Spark
  Issue Type: Improvement
  Components: Spark Core
Affects Versions: 3.2.0
Reporter: Karen Feng


Using the SQLSTATEs in the error class README as the source of truth, we should 
validate the SQLSTATEs in the error class JSON.
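A hedged sketch of such a validation (the set of valid SQLSTATEs and the JSON shape are assumed samples, not the real README contents): flag any error class whose SQLSTATE is absent from the source of truth, while allowing an empty SQLSTATE field.

```python
# Assumed sample of SQLSTATEs taken from the README source of truth.
valid_sqlstates = {"22012", "42000", "0A000"}

# Hypothetical error-class JSON entries.
error_classes = {
    "DIVIDE_BY_ZERO": {"sqlState": "22012"},
    "MADE_UP": {"sqlState": "99999"},
    "NO_STATE": {},  # an empty SQLSTATE field is allowed
}

def invalid_sqlstates(classes, valid):
    # Report classes that declare a SQLSTATE not present in the README.
    return [name for name, info in classes.items()
            if "sqlState" in info and info["sqlState"] not in valid]

print(invalid_sqlstates(error_classes, valid_sqlstates))  # ['MADE_UP']
```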






[jira] [Updated] (SPARK-36094) Group SQL component error messages in Spark error class JSON file

2021-07-28 Thread Karen Feng (Jira)


 [ 
https://issues.apache.org/jira/browse/SPARK-36094?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Karen Feng updated SPARK-36094:
---
Description: 
To improve auditing, reduce duplication, and improve quality of error messages 
thrown from Spark, we should group them in a single JSON file (as discussed in 
the [mailing 
list|http://apache-spark-developers-list.1001551.n3.nabble.com/DISCUSS-Add-error-IDs-td31126.html]
 and introduced in 
[SPARK-34920|#diff-d41e24da75af19647fadd76ad0b63ecb22b08c0004b07091e4603a30ec0fe013]).
 In this file, the error messages should be labeled according to a consistent 
error class and with a SQLSTATE.

We will start with the SQL component.
 As a starting point, we can build off the exception grouping done in 
SPARK-33539. In total, there are ~1000 error messages to group split across 
three files (QueryCompilationErrors, QueryExecutionErrors, and 
QueryParsingErrors). In this ticket, each of these files is split into chunks 
of ~20 errors for refactoring.

Here is an example PR that groups a few error messages in the 
QueryCompilationErrors class: [PR 
33309|https://github.com/apache/spark/pull/33309].

[Guidelines|https://github.com/apache/spark/blob/master/core/src/main/resources/error/README.md]:
 - Error classes should be unique and sorted in alphabetical order.
 - Error classes should be unified as much as possible to improve auditing. If 
error messages are similar, group them into a single error class and add 
parameters to the error message.
 - SQLSTATE should match the ANSI/ISO standard, without introducing new classes 
or subclasses. See the error 
[guidelines|https://github.com/apache/spark/blob/master/core/src/main/resources/error/README.md];
 if none of them match, the SQLSTATE field should be empty.
 - The Throwable should extend 
[SparkThrowable|https://github.com/apache/spark/blob/master/core/src/main/java/org/apache/spark/SparkThrowable.java];
 see 
[SparkArithmeticException|https://github.com/apache/spark/blob/f90eb6a5db0778fd18b0b544f93eac3103bbf03b/core/src/main/scala/org/apache/spark/SparkException.scala#L75]
 as an example of how to mix SparkThrowable into a base Exception type.

We will improve error message quality as a follow-up.
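The first guideline above can be checked mechanically. A minimal sketch (assuming the error classes are a flat list of names extracted from the JSON):

```python
# Verify the guideline that error classes are unique and alphabetically sorted.
def check_sorted_and_unique(names):
    return names == sorted(names) and len(names) == len(set(names))

print(check_sorted_and_unique(["AMBIGUOUS_FIELD_NAME", "DIVIDE_BY_ZERO"]))  # True
print(check_sorted_and_unique(["DIVIDE_BY_ZERO", "AMBIGUOUS_FIELD_NAME"]))  # False
```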

  was:
To improve auditing, reduce duplication, and improve quality of error messages 
thrown from Spark, we should group them in a single JSON file (as discussed in 
the [mailing 
list|http://apache-spark-developers-list.1001551.n3.nabble.com/DISCUSS-Add-error-IDs-td31126.html]
 and introduced in 
[SPARK-34920|#diff-d41e24da75af19647fadd76ad0b63ecb22b08c0004b07091e4603a30ec0fe013]).
 In this file, the error messages should be labeled according to a consistent 
error class and with a SQLSTATE.

We will start with the SQL component first.
 As a starting point, we can build off the exception grouping done in 
SPARK-33539. In total, there are ~1000 error messages to group split across 
three files (QueryCompilationErrors, QueryExecutionErrors, and 
QueryParsingErrors). In this ticket, each of these files is split into chunks 
of ~20 errors for refactoring.

Here is an example PR that groups a few error messages in the 
QueryCompilationErrors class: [PR 
33309|https://github.com/apache/spark/pull/33309].

[Guidelines|https://github.com/apache/spark/blob/master/core/src/main/resources/error/README.md]:
 - Error classes should be unique and sorted in alphabetical order.
 - Error classes should be unified as much as possible to improve auditing. If 
error messages are similar, group them into a single error class and add 
parameters to the error message.
 - SQLSTATE should match the ANSI/ISO standard, without introducing new classes 
or subclasses.
 - The Throwable should extend 
[SparkThrowable|https://github.com/apache/spark/blob/master/core/src/main/java/org/apache/spark/SparkThrowable.java];
 see 
[SparkArithmeticException|https://github.com/apache/spark/blob/f90eb6a5db0778fd18b0b544f93eac3103bbf03b/core/src/main/scala/org/apache/spark/SparkException.scala#L75]
 as an example of how to mix SparkThrowable into a base Exception type.

We will improve error message quality as a follow-up.


> Group SQL component error messages in Spark error class JSON file
> -
>
> Key: SPARK-36094
> URL: https://issues.apache.org/jira/browse/SPARK-36094
> Project: Spark
>  Issue Type: Improvement
>  Components: Spark Core, SQL
>Affects Versions: 3.2.0
>Reporter: Karen Feng
>Priority: Major
>
> To improve auditing, reduce duplication, and improve quality of error 
> messages thrown from Spark, we should group them in a single JSON file (as 
> discussed in the [mailing 
> list|http://apache-spark-developers-list.1001551.n3.nabble.com/DISCUSS-Add-error-IDs-td31126.html]
>  and introduced in 
> 

[jira] [Created] (SPARK-36331) Add SQLSTATE guideline

2021-07-28 Thread Karen Feng (Jira)
Karen Feng created SPARK-36331:
--

 Summary: Add SQLSTATE guideline
 Key: SPARK-36331
 URL: https://issues.apache.org/jira/browse/SPARK-36331
 Project: Spark
  Issue Type: Improvement
  Components: Spark Core
Affects Versions: 3.2.0
Reporter: Karen Feng


Add SQLSTATE guideline to the error guidelines.






[jira] [Updated] (SPARK-36094) Group SQL component error messages in Spark error class JSON file

2021-07-27 Thread Karen Feng (Jira)


 [ 
https://issues.apache.org/jira/browse/SPARK-36094?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Karen Feng updated SPARK-36094:
---
Description: 
To improve auditing, reduce duplication, and improve quality of error messages 
thrown from Spark, we should group them in a single JSON file (as discussed in 
the [mailing 
list|http://apache-spark-developers-list.1001551.n3.nabble.com/DISCUSS-Add-error-IDs-td31126.html]
 and introduced in 
[SPARK-34920|#diff-d41e24da75af19647fadd76ad0b63ecb22b08c0004b07091e4603a30ec0fe013]).
 In this file, the error messages should be labeled according to a consistent 
error class and with a SQLSTATE.

We will start with the SQL component first.
 As a starting point, we can build off the exception grouping done in 
SPARK-33539. In total, there are ~1000 error messages to group split across 
three files (QueryCompilationErrors, QueryExecutionErrors, and 
QueryParsingErrors). In this ticket, each of these files is split into chunks 
of ~20 errors for refactoring.

Here is an example PR that groups a few error messages in the 
QueryCompilationErrors class: [PR 
33309|https://github.com/apache/spark/pull/33309].

[Guidelines|https://github.com/apache/spark/blob/master/core/src/main/resources/error/README.md]:
 - Error classes should be unique and sorted in alphabetical order.
 - Error classes should be unified as much as possible to improve auditing. If 
error messages are similar, group them into a single error class and add 
parameters to the error message.
 - SQLSTATE should match the ANSI/ISO standard, without introducing new classes 
or subclasses.
 - The Throwable should extend 
[SparkThrowable|https://github.com/apache/spark/blob/master/core/src/main/java/org/apache/spark/SparkThrowable.java];
 see 
[SparkArithmeticException|https://github.com/apache/spark/blob/f90eb6a5db0778fd18b0b544f93eac3103bbf03b/core/src/main/scala/org/apache/spark/SparkException.scala#L75]
 as an example of how to mix SparkThrowable into a base Exception type.

We will improve error message quality as a follow-up.

  was:
To improve auditing, reduce duplication, and improve quality of error messages 
thrown from Spark, we should group them in a single JSON file (as discussed in 
the [mailing 
list|http://apache-spark-developers-list.1001551.n3.nabble.com/DISCUSS-Add-error-IDs-td31126.html]
 and introduced in 
[SPARK-34920|#diff-d41e24da75af19647fadd76ad0b63ecb22b08c0004b07091e4603a30ec0fe013]).
 In this file, the error messages should be labeled according to a consistent 
error class and with a SQLSTATE.

We will start with the SQL component first.
As a starting point, we can build off the exception grouping done in 
[SPARK-33539|https://issues.apache.org/jira/browse/SPARK-33539]. In total, 
there are ~1000 error messages to group split across three files 
(QueryCompilationErrors, QueryExecutionErrors, and QueryParsingErrors). In this 
ticket, each of these files is split into chunks of ~20 errors for refactoring.

Here is an example PR that groups a few error messages in the 
QueryCompilationErrors class: [PR 
33309|https://github.com/apache/spark/pull/33309].

[Guidelines|https://github.com/apache/spark/blob/master/core/src/main/resources/error/README.md]:

- Error classes should be de-duplicated as much as possible to improve 
auditing. If error messages are similar, group them into a single error class 
and add parameters to the error message.
- SQLSTATE should match the ANSI/ISO standard, without introducing new classes 
or subclasses.
- The Throwable should extend 
[SparkThrowable|https://github.com/apache/spark/blob/master/core/src/main/java/org/apache/spark/SparkThrowable.java];
 see 
[SparkArithmeticException|https://github.com/apache/spark/blob/f90eb6a5db0778fd18b0b544f93eac3103bbf03b/core/src/main/scala/org/apache/spark/SparkException.scala#L75]
 as an example of how to mix SparkThrowable into a base Exception type.

We will improve error message quality as a follow-up.


> Group SQL component error messages in Spark error class JSON file
> -
>
> Key: SPARK-36094
> URL: https://issues.apache.org/jira/browse/SPARK-36094
> Project: Spark
>  Issue Type: Improvement
>  Components: Spark Core, SQL
>Affects Versions: 3.2.0
>Reporter: Karen Feng
>Priority: Major
>
> To improve auditing, reduce duplication, and improve quality of error 
> messages thrown from Spark, we should group them in a single JSON file (as 
> discussed in the [mailing 
> list|http://apache-spark-developers-list.1001551.n3.nabble.com/DISCUSS-Add-error-IDs-td31126.html]
>  and introduced in 
> [SPARK-34920|#diff-d41e24da75af19647fadd76ad0b63ecb22b08c0004b07091e4603a30ec0fe013]).
>  In this file, the error messages should be labeled according to a consistent 
> error class and with a SQLSTATE.
> We will start with 

[jira] [Updated] (SPARK-36094) Group SQL component error messages in Spark error class JSON file

2021-07-27 Thread Karen Feng (Jira)


 [ 
https://issues.apache.org/jira/browse/SPARK-36094?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Karen Feng updated SPARK-36094:
---
Description: 
To improve auditing, reduce duplication, and improve quality of error messages 
thrown from Spark, we should group them in a single JSON file (as discussed in 
the [mailing 
list|http://apache-spark-developers-list.1001551.n3.nabble.com/DISCUSS-Add-error-IDs-td31126.html]
 and introduced in 
[SPARK-34920|#diff-d41e24da75af19647fadd76ad0b63ecb22b08c0004b07091e4603a30ec0fe013]).
 In this file, the error messages should be labeled according to a consistent 
error class and with a SQLSTATE.

We will start with the SQL component first.
As a starting point, we can build off the exception grouping done in 
[SPARK-33539|https://issues.apache.org/jira/browse/SPARK-33539]. In total, 
there are ~1000 error messages to group split across three files 
(QueryCompilationErrors, QueryExecutionErrors, and QueryParsingErrors). In this 
ticket, each of these files is split into chunks of ~20 errors for refactoring.

Here is an example PR that groups a few error messages in the 
QueryCompilationErrors class: [PR 
33309|https://github.com/apache/spark/pull/33309].

[Guidelines|https://github.com/apache/spark/blob/master/core/src/main/resources/error/README.md]:

- Error classes should be de-duplicated as much as possible to improve 
auditing. If error messages are similar, group them into a single error class 
and add parameters to the error message.
- SQLSTATE should match the ANSI/ISO standard, without introducing new classes 
or subclasses.
- The Throwable should extend 
[SparkThrowable|https://github.com/apache/spark/blob/master/core/src/main/java/org/apache/spark/SparkThrowable.java];
 see 
[SparkArithmeticException|https://github.com/apache/spark/blob/f90eb6a5db0778fd18b0b544f93eac3103bbf03b/core/src/main/scala/org/apache/spark/SparkException.scala#L75]
 as an example of how to mix SparkThrowable into a base Exception type.

We will improve error message quality as a follow-up.

  was:
To improve auditing, reduce duplication, and improve quality of error messages 
thrown from Spark, we should group them in a single JSON file (as discussed in 
the [mailing 
list|http://apache-spark-developers-list.1001551.n3.nabble.com/DISCUSS-Add-error-IDs-td31126.html]
 and introduced in 
[SPARK-34920|#diff-d41e24da75af19647fadd76ad0b63ecb22b08c0004b07091e4603a30ec0fe013]).
 In this file, the error messages should be labeled according to a consistent 
error class and with a SQLSTATE.

We will start with the SQL component first.
As a starting point, we can build off the exception grouping done in 
[SPARK-33539|https://issues.apache.org/jira/browse/SPARK-33539]. In total, 
there are ~1000 error messages to group split across three files 
(QueryCompilationErrors, QueryExecutionErrors, and QueryParsingErrors). In this 
ticket, each of these files is split into chunks of ~20 errors for refactoring.

As a guideline, the error classes should be de-duplicated as much as possible 
to improve auditing.
We will improve error message quality as a follow-up.

Here is an example PR that groups a few error messages in the 
QueryCompilationErrors class: [PR 
33309|https://github.com/apache/spark/pull/33309].


> Group SQL component error messages in Spark error class JSON file
> -
>
> Key: SPARK-36094
> URL: https://issues.apache.org/jira/browse/SPARK-36094
> Project: Spark
>  Issue Type: Improvement
>  Components: Spark Core, SQL
>Affects Versions: 3.2.0
>Reporter: Karen Feng
>Priority: Major
>
> To improve auditing, reduce duplication, and improve quality of error 
> messages thrown from Spark, we should group them in a single JSON file (as 
> discussed in the [mailing 
> list|http://apache-spark-developers-list.1001551.n3.nabble.com/DISCUSS-Add-error-IDs-td31126.html]
>  and introduced in 
> [SPARK-34920|#diff-d41e24da75af19647fadd76ad0b63ecb22b08c0004b07091e4603a30ec0fe013]).
>  In this file, the error messages should be labeled according to a consistent 
> error class and with a SQLSTATE.
> We will start with the SQL component first.
> As a starting point, we can build off the exception grouping done in 
> [SPARK-33539|https://issues.apache.org/jira/browse/SPARK-33539]. In total, 
> there are ~1000 error messages to group split across three files 
> (QueryCompilationErrors, QueryExecutionErrors, and QueryParsingErrors). In 
> this ticket, each of these files is split into chunks of ~20 errors for 
> refactoring.
> Here is an example PR that groups a few error messages in the 
> QueryCompilationErrors class: [PR 
> 33309|https://github.com/apache/spark/pull/33309].
> [Guidelines|https://github.com/apache/spark/blob/master/core/src/main/resources/error/README.md]:
> - Error classes should be 

[jira] [Updated] (SPARK-36094) Group SQL component error messages in Spark error class JSON file

2021-07-26 Thread Karen Feng (Jira)


 [ 
https://issues.apache.org/jira/browse/SPARK-36094?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Karen Feng updated SPARK-36094:
---
Description: 
To improve auditing, reduce duplication, and improve quality of error messages 
thrown from Spark, we should group them in a single JSON file (as discussed in 
the [mailing 
list|http://apache-spark-developers-list.1001551.n3.nabble.com/DISCUSS-Add-error-IDs-td31126.html]
 and introduced in 
[SPARK-34920|#diff-d41e24da75af19647fadd76ad0b63ecb22b08c0004b07091e4603a30ec0fe013]).
 In this file, the error messages should be labeled according to a consistent 
error class and with a SQLSTATE.

We will start with the SQL component first.
As a starting point, we can build off the exception grouping done in 
[SPARK-33539|https://issues.apache.org/jira/browse/SPARK-33539]. In total, 
there are ~1000 error messages to group split across three files 
(QueryCompilationErrors, QueryExecutionErrors, and QueryParsingErrors). In this 
ticket, each of these files is split into chunks of ~20 errors for refactoring.

As a guideline, the error classes should be de-duplicated as much as possible 
to improve auditing.
We will improve error message quality as a follow-up.

Here is an example PR that groups a few error messages in the 
QueryCompilationErrors class: [PR 
33309|https://github.com/apache/spark/pull/33309].

  was:
To improve auditing, reduce duplication, and improve quality of error messages 
thrown from Spark, we should group them in a single JSON file (as discussed in 
the [mailing 
list|http://apache-spark-developers-list.1001551.n3.nabble.com/DISCUSS-Add-error-IDs-td31126.html]
 and introduced in 
[SPARK-34920|#diff-d41e24da75af19647fadd76ad0b63ecb22b08c0004b07091e4603a30ec0fe013]).
 In this file, the error messages should be labeled according to a consistent 
error class and with a SQLSTATE.

We will start with the SQL component first.
As a starting point, we can build off the exception grouping done in 
[SPARK-33539|https://issues.apache.org/jira/browse/SPARK-33539]. In total, 
there are ~1000 error messages to group split across three files 
(QueryCompilationErrors, QueryExecutionErrors, and QueryParsingErrors). If you 
work on this ticket, please create a subtask to improve ease of reviewing.

As a guideline, the error classes should be de-duplicated as much as possible 
to improve auditing.
We will improve error message quality as a follow-up.

Here is an example PR that groups a few error messages in the 
QueryCompilationErrors class: [PR 
33309|https://github.com/apache/spark/pull/33309].


> Group SQL component error messages in Spark error class JSON file
> -
>
> Key: SPARK-36094
> URL: https://issues.apache.org/jira/browse/SPARK-36094
> Project: Spark
>  Issue Type: Improvement
>  Components: Spark Core, SQL
>Affects Versions: 3.2.0
>Reporter: Karen Feng
>Priority: Major
>
> To improve auditing, reduce duplication, and improve quality of error 
> messages thrown from Spark, we should group them in a single JSON file (as 
> discussed in the [mailing 
> list|http://apache-spark-developers-list.1001551.n3.nabble.com/DISCUSS-Add-error-IDs-td31126.html]
>  and introduced in 
> [SPARK-34920|#diff-d41e24da75af19647fadd76ad0b63ecb22b08c0004b07091e4603a30ec0fe013]).
>  In this file, the error messages should be labeled according to a consistent 
> error class and with a SQLSTATE.
> We will start with the SQL component first.
> As a starting point, we can build off the exception grouping done in 
> [SPARK-33539|https://issues.apache.org/jira/browse/SPARK-33539]. In total, 
> there are ~1000 error messages to group split across three files 
> (QueryCompilationErrors, QueryExecutionErrors, and QueryParsingErrors). In 
> this ticket, each of these files is split into chunks of ~20 errors for 
> refactoring.
> As a guideline, the error classes should be de-duplicated as much as possible 
> to improve auditing.
> We will improve error message quality as a follow-up.
> Here is an example PR that groups a few error messages in the 
> QueryCompilationErrors class: [PR 
> 33309|https://github.com/apache/spark/pull/33309].






[jira] [Updated] (SPARK-36309) Refactor fourth set of 20 query parsing errors to use error classes

2021-07-26 Thread Karen Feng (Jira)


 [ 
https://issues.apache.org/jira/browse/SPARK-36309?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Karen Feng updated SPARK-36309:
---
Description: 
Refactor some exceptions in 
[QueryParsingErrors|https://github.com/apache/spark/blob/master/sql/catalyst/src/main/scala/org/apache/spark/sql/errors/QueryParsingErrors.scala]
 to use error classes.

There are currently ~100 exceptions in this file, so this PR focuses only on 
the fourth set of 20.

{code}
showFunctionsUnsupportedError
duplicateCteDefinitionNamesError
sqlStatementUnsupportedError
unquotedIdentifierError
duplicateClausesError
duplicateKeysError
unexpectedFomatForSetConfigurationError
invalidPropertyKeyForSetQuotedConfigurationError
invalidPropertyValueForSetQuotedConfigurationError
unexpectedFormatForResetConfigurationError
intervalValueOutOfRangeError
invalidTimeZoneDisplacementValueError
createTempTableNotSpecifyProviderError
rowFormatNotUsedWithStoredAsError
useDefinedRecordReaderOrWriterClassesError
directoryPathAndOptionsPathBothSpecifiedError
unsupportedLocalFileSchemeError
invalidGroupingSetError
{code}

For more detail, see the parent ticket SPARK-36094.

  was:
Refactor some exceptions in 
[QueryParsingErrors|https://github.com/apache/spark/blob/master/sql/catalyst/src/main/scala/org/apache/spark/sql/errors/QueryParsingErrors.scala]
 to use error classes.

There are currently ~100 exceptions in this file; so this PR only focuses on 
the third set of 20.

{code}
fromToIntervalUnsupportedError
mixedIntervalUnitsError
dataTypeUnsupportedError
partitionTransformNotExpectedError
tooManyArgumentsForTransformError
notEnoughArgumentsForTransformError
invalidBucketsNumberError
invalidTransformArgumentError
cannotCleanReservedNamespacePropertyError
propertiesAndDbPropertiesBothSpecifiedError
fromOrInNotAllowedInShowDatabasesError
cannotCleanReservedTablePropertyError
duplicatedTablePathsFoundError
storedAsAndStoredByBothSpecifiedError
operationInHiveStyleCommandUnsupportedError
operationNotAllowedError
descColumnForPartitionUnsupportedError
incompletePartitionSpecificationError
computeStatisticsNotExpectedError
addCatalogInCacheTableAsSelectNotAllowedError
{code}

For more detail, see the parent ticket SPARK-36094.


> Refactor fourth set of 20 query parsing errors to use error classes
> ---
>
> Key: SPARK-36309
> URL: https://issues.apache.org/jira/browse/SPARK-36309
> Project: Spark
>  Issue Type: Sub-task
>  Components: Spark Core, SQL
>Affects Versions: 3.2.0
>Reporter: Karen Feng
>Priority: Major
>
> Refactor some exceptions in 
> [QueryParsingErrors|https://github.com/apache/spark/blob/master/sql/catalyst/src/main/scala/org/apache/spark/sql/errors/QueryParsingErrors.scala]
>  to use error classes.
> There are currently ~100 exceptions in this file, so this PR focuses on only 
> the fourth set of 20.
> {code}
> showFunctionsUnsupportedError
> duplicateCteDefinitionNamesError
> sqlStatementUnsupportedError
> unquotedIdentifierError
> duplicateClausesError
> duplicateKeysError
> unexpectedFomatForSetConfigurationError
> invalidPropertyKeyForSetQuotedConfigurationError
> invalidPropertyValueForSetQuotedConfigurationError
> unexpectedFormatForResetConfigurationError
> intervalValueOutOfRangeError
> invalidTimeZoneDisplacementValueError
> createTempTableNotSpecifyProviderError
> rowFormatNotUsedWithStoredAsError
> useDefinedRecordReaderOrWriterClassesError
> directoryPathAndOptionsPathBothSpecifiedError
> unsupportedLocalFileSchemeError
> invalidGroupingSetError
> {code}
> For more detail, see the parent ticket SPARK-36094.
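
For readers unfamiliar with the migration these subtasks track, the shape of the change can be sketched as below. This is a simplified, self-contained illustration, not Spark's actual implementation: the real framework keeps its templates in error-classes.json, and the registry, class, and error-class names here are invented for the example.

```scala
// Simplified sketch of the error-class refactor pattern (illustrative names).

// Before: each factory method in QueryParsingErrors hard-codes its message.
def duplicateKeysErrorBefore(key: String): Exception =
  new IllegalArgumentException(s"Found duplicate keys '$key'.")

// After: the exception carries a machine-readable error class plus the raw
// message parameters; the human-readable text comes from a central template
// registry (in Spark, a JSON file of error classes).
val errorTemplates: Map[String, String] = Map(
  "DUPLICATE_KEY" -> "Found duplicate keys '%s'."
)

class ParseException(
    val errorClass: String,
    val messageParameters: Array[String])
  extends Exception(errorTemplates(errorClass).format(messageParameters: _*))

def duplicateKeysError(key: String): ParseException =
  new ParseException("DUPLICATE_KEY", Array(key))
```

With this shape, callers can match on the stable {{errorClass}} identifier instead of parsing message strings, while the rendered message stays the same.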



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: issues-unsubscr...@spark.apache.org
For additional commands, e-mail: issues-h...@spark.apache.org



[jira] [Updated] (SPARK-36107) Refactor first set of 20 query execution errors to use error classes

2021-07-26 Thread Karen Feng (Jira)


 [ 
https://issues.apache.org/jira/browse/SPARK-36107?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Karen Feng updated SPARK-36107:
---
Description: 
Refactor some exceptions in 
[QueryExecutionErrors|https://github.com/apache/spark/blob/master/sql/catalyst/src/main/scala/org/apache/spark/sql/errors/QueryExecutionErrors.scala]
 to use error classes.

There are currently ~350 exceptions in this file, so this PR focuses on only 
the second set of 20.
{code:java}
columnChangeUnsupportedError
logicalHintOperatorNotRemovedDuringAnalysisError
cannotEvaluateExpressionError
cannotGenerateCodeForExpressionError
cannotTerminateGeneratorError
castingCauseOverflowError
cannotChangeDecimalPrecisionError
invalidInputSyntaxForNumericError
cannotCastFromNullTypeError
cannotCastError
cannotParseDecimalError
simpleStringWithNodeIdUnsupportedError
evaluateUnevaluableAggregateUnsupportedError
dataTypeUnsupportedError
dataTypeUnsupportedError
failedExecuteUserDefinedFunctionError
divideByZeroError
invalidArrayIndexError
mapKeyNotExistError
rowFromCSVParserNotExpectedError
{code}
For more detail, see the parent ticket SPARK-36094.

  was:
Refactor some exceptions in 
[QueryExecutionErrors|https://github.com/apache/spark/blob/master/sql/catalyst/src/main/scala/org/apache/spark/sql/errors/QueryExecutionErrors.scala]
 to use error classes.

There are currently ~350 exceptions in this file; so this PR only focuses on 
the first set of 20.
{code:java}
columnChangeUnsupportedError
logicalHintOperatorNotRemovedDuringAnalysisError
cannotEvaluateExpressionError
cannotGenerateCodeForExpressionError
cannotTerminateGeneratorError
castingCauseOverflowError
cannotChangeDecimalPrecisionError
invalidInputSyntaxForNumericError
cannotCastFromNullTypeError
cannotCastError
cannotParseDecimalError
simpleStringWithNodeIdUnsupportedError
evaluateUnevaluableAggregateUnsupportedError
dataTypeUnsupportedError
dataTypeUnsupportedError
failedExecuteUserDefinedFunctionError
divideByZeroError
invalidArrayIndexError
mapKeyNotExistError
rowFromCSVParserNotExpectedError
{code}
For more detail, see the parent ticket SPARK-36094.


> Refactor first set of 20 query execution errors to use error classes
> 
>
> Key: SPARK-36107
> URL: https://issues.apache.org/jira/browse/SPARK-36107
> Project: Spark
>  Issue Type: Sub-task
>  Components: Spark Core, SQL
>Affects Versions: 3.2.0
>Reporter: Karen Feng
>Priority: Major
>
> Refactor some exceptions in 
> [QueryExecutionErrors|https://github.com/apache/spark/blob/master/sql/catalyst/src/main/scala/org/apache/spark/sql/errors/QueryExecutionErrors.scala]
>  to use error classes.
> There are currently ~350 exceptions in this file, so this PR focuses on only 
> the second set of 20.
> {code:java}
> columnChangeUnsupportedError
> logicalHintOperatorNotRemovedDuringAnalysisError
> cannotEvaluateExpressionError
> cannotGenerateCodeForExpressionError
> cannotTerminateGeneratorError
> castingCauseOverflowError
> cannotChangeDecimalPrecisionError
> invalidInputSyntaxForNumericError
> cannotCastFromNullTypeError
> cannotCastError
> cannotParseDecimalError
> simpleStringWithNodeIdUnsupportedError
> evaluateUnevaluableAggregateUnsupportedError
> dataTypeUnsupportedError
> dataTypeUnsupportedError
> failedExecuteUserDefinedFunctionError
> divideByZeroError
> invalidArrayIndexError
> mapKeyNotExistError
> rowFromCSVParserNotExpectedError
> {code}
> For more detail, see the parent ticket SPARK-36094.
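
The execution-side refactors follow the same error-class pattern, with the added constraint that many of these factories must keep returning the specific JVM exception type that callers already catch (for example, an {{ArithmeticException}} from {{divideByZeroError}}). A hedged sketch, using invented stand-in names rather than Spark's actual classes:

```scala
// Illustrative only: Spark's real exception subclasses and error-class names
// may differ from these simplified stand-ins.
val executionTemplates: Map[String, String] = Map(
  "DIVIDE_BY_ZERO" -> "divide by zero"
)

// The refactored exception still IS-A ArithmeticException, so existing catch
// blocks keep working while gaining a stable errorClass field.
class SparkArithmeticException(val errorClass: String)
  extends ArithmeticException(executionTemplates(errorClass))

def divideByZeroError(): SparkArithmeticException =
  new SparkArithmeticException("DIVIDE_BY_ZERO")
```

Subclassing the original exception type is what lets a refactor like this land in 20-method batches without breaking existing error-handling code.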






[jira] [Created] (SPARK-36309) Refactor fourth set of 20 query parsing errors to use error classes

2021-07-26 Thread Karen Feng (Jira)
Karen Feng created SPARK-36309:
--

 Summary: Refactor fourth set of 20 query parsing errors to use 
error classes
 Key: SPARK-36309
 URL: https://issues.apache.org/jira/browse/SPARK-36309
 Project: Spark
  Issue Type: Sub-task
  Components: Spark Core, SQL
Affects Versions: 3.2.0
Reporter: Karen Feng


Refactor some exceptions in 
[QueryParsingErrors|https://github.com/apache/spark/blob/master/sql/catalyst/src/main/scala/org/apache/spark/sql/errors/QueryParsingErrors.scala]
 to use error classes.

There are currently ~100 exceptions in this file, so this PR focuses on only 
the third set of 20.

{code}
fromToIntervalUnsupportedError
mixedIntervalUnitsError
dataTypeUnsupportedError
partitionTransformNotExpectedError
tooManyArgumentsForTransformError
notEnoughArgumentsForTransformError
invalidBucketsNumberError
invalidTransformArgumentError
cannotCleanReservedNamespacePropertyError
propertiesAndDbPropertiesBothSpecifiedError
fromOrInNotAllowedInShowDatabasesError
cannotCleanReservedTablePropertyError
duplicatedTablePathsFoundError
storedAsAndStoredByBothSpecifiedError
operationInHiveStyleCommandUnsupportedError
operationNotAllowedError
descColumnForPartitionUnsupportedError
incompletePartitionSpecificationError
computeStatisticsNotExpectedError
addCatalogInCacheTableAsSelectNotAllowedError
{code}

For more detail, see the parent ticket SPARK-36094.






[jira] [Updated] (SPARK-36308) Refactor third set of 20 query parsing errors to use error classes

2021-07-26 Thread Karen Feng (Jira)


 [ 
https://issues.apache.org/jira/browse/SPARK-36308?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Karen Feng updated SPARK-36308:
---
Description: 
Refactor some exceptions in 
[QueryParsingErrors|https://github.com/apache/spark/blob/master/sql/catalyst/src/main/scala/org/apache/spark/sql/errors/QueryParsingErrors.scala]
 to use error classes.

There are currently ~100 exceptions in this file, so this PR focuses on only 
the third set of 20.

{code}
fromToIntervalUnsupportedError
mixedIntervalUnitsError
dataTypeUnsupportedError
partitionTransformNotExpectedError
tooManyArgumentsForTransformError
notEnoughArgumentsForTransformError
invalidBucketsNumberError
invalidTransformArgumentError
cannotCleanReservedNamespacePropertyError
propertiesAndDbPropertiesBothSpecifiedError
fromOrInNotAllowedInShowDatabasesError
cannotCleanReservedTablePropertyError
duplicatedTablePathsFoundError
storedAsAndStoredByBothSpecifiedError
operationInHiveStyleCommandUnsupportedError
operationNotAllowedError
descColumnForPartitionUnsupportedError
incompletePartitionSpecificationError
computeStatisticsNotExpectedError
addCatalogInCacheTableAsSelectNotAllowedError
{code}

For more detail, see the parent ticket SPARK-36094.

  was:
Refactor some exceptions in 
[QueryParsingErrors|https://github.com/apache/spark/blob/master/sql/catalyst/src/main/scala/org/apache/spark/sql/errors/QueryParsingErrors.scala]
 to use error classes.

There are currently ~100 exceptions in this file; so this PR only focuses on 
the first set of 20.

{code}
repetitiveWindowDefinitionError
invalidWindowReferenceError
cannotResolveWindowReferenceError
joinCriteriaUnimplementedError
naturalCrossJoinUnsupportedError
emptyInputForTableSampleError
tableSampleByBytesUnsupportedError
invalidByteLengthLiteralError
invalidEscapeStringError
trimOptionUnsupportedError
functionNameUnsupportedError
cannotParseValueTypeError
cannotParseIntervalValueError
literalValueTypeUnsupportedError
parsingValueTypeError
invalidNumericLiteralRangeError
moreThanOneFromToUnitInIntervalLiteralError
invalidIntervalLiteralError
invalidIntervalFormError
invalidFromToUnitValueError
{code}

For more detail, see the parent ticket SPARK-36094.


> Refactor third set of 20 query parsing errors to use error classes
> --
>
> Key: SPARK-36308
> URL: https://issues.apache.org/jira/browse/SPARK-36308
> Project: Spark
>  Issue Type: Sub-task
>  Components: Spark Core, SQL
>Affects Versions: 3.2.0
>Reporter: Karen Feng
>Priority: Major
>
> Refactor some exceptions in 
> [QueryParsingErrors|https://github.com/apache/spark/blob/master/sql/catalyst/src/main/scala/org/apache/spark/sql/errors/QueryParsingErrors.scala]
>  to use error classes.
> There are currently ~100 exceptions in this file, so this PR focuses on only 
> the third set of 20.
> {code}
> fromToIntervalUnsupportedError
> mixedIntervalUnitsError
> dataTypeUnsupportedError
> partitionTransformNotExpectedError
> tooManyArgumentsForTransformError
> notEnoughArgumentsForTransformError
> invalidBucketsNumberError
> invalidTransformArgumentError
> cannotCleanReservedNamespacePropertyError
> propertiesAndDbPropertiesBothSpecifiedError
> fromOrInNotAllowedInShowDatabasesError
> cannotCleanReservedTablePropertyError
> duplicatedTablePathsFoundError
> storedAsAndStoredByBothSpecifiedError
> operationInHiveStyleCommandUnsupportedError
> operationNotAllowedError
> descColumnForPartitionUnsupportedError
> incompletePartitionSpecificationError
> computeStatisticsNotExpectedError
> addCatalogInCacheTableAsSelectNotAllowedError
> {code}
> For more detail, see the parent ticket SPARK-36094.






[jira] [Created] (SPARK-36308) Refactor third set of 20 query parsing errors to use error classes

2021-07-26 Thread Karen Feng (Jira)
Karen Feng created SPARK-36308:
--

 Summary: Refactor third set of 20 query parsing errors to use 
error classes
 Key: SPARK-36308
 URL: https://issues.apache.org/jira/browse/SPARK-36308
 Project: Spark
  Issue Type: Sub-task
  Components: Spark Core, SQL
Affects Versions: 3.2.0
Reporter: Karen Feng


Refactor some exceptions in 
[QueryParsingErrors|https://github.com/apache/spark/blob/master/sql/catalyst/src/main/scala/org/apache/spark/sql/errors/QueryParsingErrors.scala]
 to use error classes.

There are currently ~100 exceptions in this file, so this PR focuses on only 
the first set of 20.

{code}
repetitiveWindowDefinitionError
invalidWindowReferenceError
cannotResolveWindowReferenceError
joinCriteriaUnimplementedError
naturalCrossJoinUnsupportedError
emptyInputForTableSampleError
tableSampleByBytesUnsupportedError
invalidByteLengthLiteralError
invalidEscapeStringError
trimOptionUnsupportedError
functionNameUnsupportedError
cannotParseValueTypeError
cannotParseIntervalValueError
literalValueTypeUnsupportedError
parsingValueTypeError
invalidNumericLiteralRangeError
moreThanOneFromToUnitInIntervalLiteralError
invalidIntervalLiteralError
invalidIntervalFormError
invalidFromToUnitValueError
{code}

For more detail, see the parent ticket SPARK-36094.






[jira] [Updated] (SPARK-36307) Refactor second set of 20 query parsing errors to use error classes

2021-07-26 Thread Karen Feng (Jira)


 [ 
https://issues.apache.org/jira/browse/SPARK-36307?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Karen Feng updated SPARK-36307:
---
Description: 
Refactor some exceptions in 
[QueryParsingErrors|https://github.com/apache/spark/blob/master/sql/catalyst/src/main/scala/org/apache/spark/sql/errors/QueryParsingErrors.scala]
 to use error classes.

There are currently ~100 exceptions in this file, so this PR focuses on only 
the first set of 20.

{code}
repetitiveWindowDefinitionError
invalidWindowReferenceError
cannotResolveWindowReferenceError
joinCriteriaUnimplementedError
naturalCrossJoinUnsupportedError
emptyInputForTableSampleError
tableSampleByBytesUnsupportedError
invalidByteLengthLiteralError
invalidEscapeStringError
trimOptionUnsupportedError
functionNameUnsupportedError
cannotParseValueTypeError
cannotParseIntervalValueError
literalValueTypeUnsupportedError
parsingValueTypeError
invalidNumericLiteralRangeError
moreThanOneFromToUnitInIntervalLiteralError
invalidIntervalLiteralError
invalidIntervalFormError
invalidFromToUnitValueError
{code}

For more detail, see the parent ticket SPARK-36094.

  was:
Refactor some exceptions in 
[QueryParsingErrors|https://github.com/apache/spark/blob/master/sql/catalyst/src/main/scala/org/apache/spark/sql/errors/QueryParsingErrors.scala]
 to use error classes.

There are currently ~100 exceptions in this file; so this PR only focuses on 
the first set of 20.

{code}
invalidInsertIntoError
insertOverwriteDirectoryUnsupportedError
columnAliasInOperationNotAllowedError
emptySourceForMergeError
unrecognizedMatchedActionError
insertedValueNumberNotMatchFieldNumberError
unrecognizedNotMatchedActionError
mergeStatementWithoutWhenClauseError
nonLastMatchedClauseOmitConditionError
nonLastNotMatchedClauseOmitConditionError
emptyPartitionKeyError
combinationQueryResultClausesUnsupportedError
distributeByUnsupportedError
transformNotSupportQuantifierError
transformWithSerdeUnsupportedError
lateralWithPivotInFromClauseNotAllowedError
lateralJoinWithNaturalJoinUnsupportedError
lateralJoinWithUsingJoinUnsupportedError
unsupportedLateralJoinTypeError
invalidLateralJoinRelationError
{code}

For more detail, see the parent ticket SPARK-36094.


> Refactor second set of 20 query parsing errors to use error classes
> ---
>
> Key: SPARK-36307
> URL: https://issues.apache.org/jira/browse/SPARK-36307
> Project: Spark
>  Issue Type: Sub-task
>  Components: Spark Core, SQL
>Affects Versions: 3.2.0
>Reporter: Karen Feng
>Priority: Major
>
> Refactor some exceptions in 
> [QueryParsingErrors|https://github.com/apache/spark/blob/master/sql/catalyst/src/main/scala/org/apache/spark/sql/errors/QueryParsingErrors.scala]
>  to use error classes.
> There are currently ~100 exceptions in this file, so this PR focuses on only 
> the first set of 20.
> {code}
> repetitiveWindowDefinitionError
> invalidWindowReferenceError
> cannotResolveWindowReferenceError
> joinCriteriaUnimplementedError
> naturalCrossJoinUnsupportedError
> emptyInputForTableSampleError
> tableSampleByBytesUnsupportedError
> invalidByteLengthLiteralError
> invalidEscapeStringError
> trimOptionUnsupportedError
> functionNameUnsupportedError
> cannotParseValueTypeError
> cannotParseIntervalValueError
> literalValueTypeUnsupportedError
> parsingValueTypeError
> invalidNumericLiteralRangeError
> moreThanOneFromToUnitInIntervalLiteralError
> invalidIntervalLiteralError
> invalidIntervalFormError
> invalidFromToUnitValueError
> {code}
> For more detail, see the parent ticket SPARK-36094.






[jira] [Created] (SPARK-36307) Refactor second set of 20 query parsing errors to use error classes

2021-07-26 Thread Karen Feng (Jira)
Karen Feng created SPARK-36307:
--

 Summary: Refactor second set of 20 query parsing errors to use 
error classes
 Key: SPARK-36307
 URL: https://issues.apache.org/jira/browse/SPARK-36307
 Project: Spark
  Issue Type: Sub-task
  Components: Spark Core, SQL
Affects Versions: 3.2.0
Reporter: Karen Feng


Refactor some exceptions in 
[QueryParsingErrors|https://github.com/apache/spark/blob/master/sql/catalyst/src/main/scala/org/apache/spark/sql/errors/QueryParsingErrors.scala]
 to use error classes.

There are currently ~100 exceptions in this file, so this PR focuses on only 
the first set of 20.

{code}
invalidInsertIntoError
insertOverwriteDirectoryUnsupportedError
columnAliasInOperationNotAllowedError
emptySourceForMergeError
unrecognizedMatchedActionError
insertedValueNumberNotMatchFieldNumberError
unrecognizedNotMatchedActionError
mergeStatementWithoutWhenClauseError
nonLastMatchedClauseOmitConditionError
nonLastNotMatchedClauseOmitConditionError
emptyPartitionKeyError
combinationQueryResultClausesUnsupportedError
distributeByUnsupportedError
transformNotSupportQuantifierError
transformWithSerdeUnsupportedError
lateralWithPivotInFromClauseNotAllowedError
lateralJoinWithNaturalJoinUnsupportedError
lateralJoinWithUsingJoinUnsupportedError
unsupportedLateralJoinTypeError
invalidLateralJoinRelationError
{code}

For more detail, see the parent ticket SPARK-36094.






[jira] [Updated] (SPARK-36108) Refactor first set of 20 query parsing errors to use error classes

2021-07-26 Thread Karen Feng (Jira)


 [ 
https://issues.apache.org/jira/browse/SPARK-36108?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Karen Feng updated SPARK-36108:
---
Description: 
Refactor some exceptions in 
[QueryParsingErrors|https://github.com/apache/spark/blob/master/sql/catalyst/src/main/scala/org/apache/spark/sql/errors/QueryParsingErrors.scala]
 to use error classes.

There are currently ~100 exceptions in this file, so this PR focuses on only 
the first set of 20.

{code}
invalidInsertIntoError
insertOverwriteDirectoryUnsupportedError
columnAliasInOperationNotAllowedError
emptySourceForMergeError
unrecognizedMatchedActionError
insertedValueNumberNotMatchFieldNumberError
unrecognizedNotMatchedActionError
mergeStatementWithoutWhenClauseError
nonLastMatchedClauseOmitConditionError
nonLastNotMatchedClauseOmitConditionError
emptyPartitionKeyError
combinationQueryResultClausesUnsupportedError
distributeByUnsupportedError
transformNotSupportQuantifierError
transformWithSerdeUnsupportedError
lateralWithPivotInFromClauseNotAllowedError
lateralJoinWithNaturalJoinUnsupportedError
lateralJoinWithUsingJoinUnsupportedError
unsupportedLateralJoinTypeError
invalidLateralJoinRelationError
{code}

For more detail, see the parent ticket SPARK-36094.

  was:
Refactor some exceptions in 
[QueryParsingErrors|https://github.com/apache/spark/blob/master/sql/catalyst/src/main/scala/org/apache/spark/sql/errors/QueryParsingErrors.scala]
 to use error classes.

There are currently ~100 exceptions in this file; so this PR only focuses on a 
few.

For more detail, see the parent ticket 
[SPARK-36094|https://issues.apache.org/jira/browse/SPARK-36094].

Summary: Refactor first set of 20 query parsing errors to use error 
classes  (was: Refactor a few query parsing errors to use error classes)

> Refactor first set of 20 query parsing errors to use error classes
> --
>
> Key: SPARK-36108
> URL: https://issues.apache.org/jira/browse/SPARK-36108
> Project: Spark
>  Issue Type: Sub-task
>  Components: Spark Core, SQL
>Affects Versions: 3.2.0
>Reporter: Karen Feng
>Priority: Major
>
> Refactor some exceptions in 
> [QueryParsingErrors|https://github.com/apache/spark/blob/master/sql/catalyst/src/main/scala/org/apache/spark/sql/errors/QueryParsingErrors.scala]
>  to use error classes.
> There are currently ~100 exceptions in this file, so this PR focuses on only 
> the first set of 20.
> {code}
> invalidInsertIntoError
> insertOverwriteDirectoryUnsupportedError
> columnAliasInOperationNotAllowedError
> emptySourceForMergeError
> unrecognizedMatchedActionError
> insertedValueNumberNotMatchFieldNumberError
> unrecognizedNotMatchedActionError
> mergeStatementWithoutWhenClauseError
> nonLastMatchedClauseOmitConditionError
> nonLastNotMatchedClauseOmitConditionError
> emptyPartitionKeyError
> combinationQueryResultClausesUnsupportedError
> distributeByUnsupportedError
> transformNotSupportQuantifierError
> transformWithSerdeUnsupportedError
> lateralWithPivotInFromClauseNotAllowedError
> lateralJoinWithNaturalJoinUnsupportedError
> lateralJoinWithUsingJoinUnsupportedError
> unsupportedLateralJoinTypeError
> invalidLateralJoinRelationError
> {code}
> For more detail, see the parent ticket SPARK-36094.






[jira] [Updated] (SPARK-36306) Refactor seventeenth set of 20 query execution errors to use error classes

2021-07-26 Thread Karen Feng (Jira)


 [ 
https://issues.apache.org/jira/browse/SPARK-36306?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Karen Feng updated SPARK-36306:
---
Description: 
Refactor some exceptions in 
[QueryExecutionErrors|https://github.com/apache/spark/blob/master/sql/catalyst/src/main/scala/org/apache/spark/sql/errors/QueryExecutionErrors.scala]
 to use error classes.

There are currently ~350 exceptions in this file, so this PR focuses on only 
the seventeenth set of 20.
{code:java}
legacyCheckpointDirectoryExistsError
subprocessExitedError
outputDataTypeUnsupportedByNodeWithoutSerdeError
invalidStartIndexError
concurrentModificationOnExternalAppendOnlyUnsafeRowArrayError
doExecuteBroadcastNotImplementedError
databaseNameConflictWithSystemPreservedDatabaseError
commentOnTableUnsupportedError
unsupportedUpdateColumnNullabilityError
renameColumnUnsupportedForOlderMySQLError
failedToExecuteQueryError
nestedFieldUnsupportedError
transformationsAndActionsNotInvokedByDriverError
repeatedPivotsUnsupportedError
pivotNotAfterGroupByUnsupportedError
{code}
For more detail, see the parent ticket SPARK-36094.

  was:
Refactor some exceptions in 
[QueryExecutionErrors|https://github.com/apache/spark/blob/master/sql/catalyst/src/main/scala/org/apache/spark/sql/errors/QueryExecutionErrors.scala]
 to use error classes.

There are currently ~350 exceptions in this file; so this PR only focuses on 
the sixteenth set of 20.
{code:java}
cannotDropMultiPartitionsOnNonatomicPartitionTableError
truncateMultiPartitionUnsupportedError
overwriteTableByUnsupportedExpressionError
dynamicPartitionOverwriteUnsupportedByTableError
failedMergingSchemaError
cannotBroadcastTableOverMaxTableRowsError
cannotBroadcastTableOverMaxTableBytesError
notEnoughMemoryToBuildAndBroadcastTableError
executeCodePathUnsupportedError
cannotMergeClassWithOtherClassError
continuousProcessingUnsupportedByDataSourceError
failedToReadDataError
failedToGenerateEpochMarkerError
foreachWriterAbortedDueToTaskFailureError
integerOverflowError
failedToReadDeltaFileError
failedToReadSnapshotFileError
cannotPurgeAsBreakInternalStateError
cleanUpSourceFilesUnsupportedError
latestOffsetNotCalledError
{code}
For more detail, see the parent ticket SPARK-36094.


> Refactor seventeenth set of 20 query execution errors to use error classes
> --
>
> Key: SPARK-36306
> URL: https://issues.apache.org/jira/browse/SPARK-36306
> Project: Spark
>  Issue Type: Sub-task
>  Components: Spark Core, SQL
>Affects Versions: 3.2.0
>Reporter: Karen Feng
>Priority: Major
>
> Refactor some exceptions in 
> [QueryExecutionErrors|https://github.com/apache/spark/blob/master/sql/catalyst/src/main/scala/org/apache/spark/sql/errors/QueryExecutionErrors.scala]
>  to use error classes.
> There are currently ~350 exceptions in this file, so this PR focuses on only 
> the seventeenth set of 20.
> {code:java}
> legacyCheckpointDirectoryExistsError
> subprocessExitedError
> outputDataTypeUnsupportedByNodeWithoutSerdeError
> invalidStartIndexError
> concurrentModificationOnExternalAppendOnlyUnsafeRowArrayError
> doExecuteBroadcastNotImplementedError
> databaseNameConflictWithSystemPreservedDatabaseError
> commentOnTableUnsupportedError
> unsupportedUpdateColumnNullabilityError
> renameColumnUnsupportedForOlderMySQLError
> failedToExecuteQueryError
> nestedFieldUnsupportedError
> transformationsAndActionsNotInvokedByDriverError
> repeatedPivotsUnsupportedError
> pivotNotAfterGroupByUnsupportedError
> {code}
> For more detail, see the parent ticket SPARK-36094.






[jira] [Updated] (SPARK-36304) Refactor fifteenth set of 20 query execution errors to use error classes

2021-07-26 Thread Karen Feng (Jira)


 [ 
https://issues.apache.org/jira/browse/SPARK-36304?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Karen Feng updated SPARK-36304:
---
Description: 
Refactor some exceptions in 
[QueryExecutionErrors|https://github.com/apache/spark/blob/master/sql/catalyst/src/main/scala/org/apache/spark/sql/errors/QueryExecutionErrors.scala]
 to use error classes.

There are currently ~350 exceptions in this file, so this PR focuses on only 
the fifteenth set of 20.
{code:java}
unsupportedOperationExceptionError
nullLiteralsCannotBeCastedError
notUserDefinedTypeError
cannotLoadUserDefinedTypeError
timeZoneIdNotSpecifiedForTimestampTypeError
notPublicClassError
primitiveTypesNotSupportedError
fieldIndexOnRowWithoutSchemaError
valueIsNullError
onlySupportDataSourcesProvidingFileFormatError
failToSetOriginalPermissionBackError
failToSetOriginalACLBackError
multiFailuresInStageMaterializationError
unrecognizedCompressionSchemaTypeIDError
getParentLoggerNotImplementedError
cannotCreateParquetConverterForTypeError
cannotCreateParquetConverterForDecimalTypeError
cannotCreateParquetConverterForDataTypeError
cannotAddMultiPartitionsOnNonatomicPartitionTableError
userSpecifiedSchemaUnsupportedByDataSourceError
{code}
For more detail, see the parent ticket SPARK-36094.

  was:
Refactor some exceptions in 
[QueryExecutionErrors|https://github.com/apache/spark/blob/master/sql/catalyst/src/main/scala/org/apache/spark/sql/errors/QueryExecutionErrors.scala]
 to use error classes.

There are currently ~350 exceptions in this file; so this PR only focuses on 
the fourteenth set of 20.
{code:java}
cannotGetEventTimeWatermarkError
cannotSetTimeoutTimestampError
batchMetadataFileNotFoundError
multiStreamingQueriesUsingPathConcurrentlyError
addFilesWithAbsolutePathUnsupportedError
microBatchUnsupportedByDataSourceError
cannotExecuteStreamingRelationExecError
invalidStreamingOutputModeError
catalogPluginClassNotFoundError
catalogPluginClassNotImplementedError
catalogPluginClassNotFoundForCatalogError
catalogFailToFindPublicNoArgConstructorError
catalogFailToCallPublicNoArgConstructorError
cannotInstantiateAbstractCatalogPluginClassError
failedToInstantiateConstructorForCatalogError
noSuchElementExceptionError
noSuchElementExceptionError
cannotMutateReadOnlySQLConfError
cannotCloneOrCopyReadOnlySQLConfError
cannotGetSQLConfInSchedulerEventLoopThreadError
{code}
For more detail, see the parent ticket SPARK-36094.


> Refactor fifteenth set of 20 query execution errors to use error classes
> 
>
> Key: SPARK-36304
> URL: https://issues.apache.org/jira/browse/SPARK-36304
> Project: Spark
>  Issue Type: Sub-task
>  Components: Spark Core, SQL
>Affects Versions: 3.2.0
>Reporter: Karen Feng
>Priority: Major
>
> Refactor some exceptions in 
> [QueryExecutionErrors|https://github.com/apache/spark/blob/master/sql/catalyst/src/main/scala/org/apache/spark/sql/errors/QueryExecutionErrors.scala]
>  to use error classes.
> There are currently ~350 exceptions in this file, so this PR focuses on only 
> the fifteenth set of 20.
> {code:java}
> unsupportedOperationExceptionError
> nullLiteralsCannotBeCastedError
> notUserDefinedTypeError
> cannotLoadUserDefinedTypeError
> timeZoneIdNotSpecifiedForTimestampTypeError
> notPublicClassError
> primitiveTypesNotSupportedError
> fieldIndexOnRowWithoutSchemaError
> valueIsNullError
> onlySupportDataSourcesProvidingFileFormatError
> failToSetOriginalPermissionBackError
> failToSetOriginalACLBackError
> multiFailuresInStageMaterializationError
> unrecognizedCompressionSchemaTypeIDError
> getParentLoggerNotImplementedError
> cannotCreateParquetConverterForTypeError
> cannotCreateParquetConverterForDecimalTypeError
> cannotCreateParquetConverterForDataTypeError
> cannotAddMultiPartitionsOnNonatomicPartitionTableError
> userSpecifiedSchemaUnsupportedByDataSourceError
> {code}
> For more detail, see the parent ticket SPARK-36094.






[jira] [Created] (SPARK-36306) Refactor seventeenth set of 20 query execution errors to use error classes

2021-07-26 Thread Karen Feng (Jira)
Karen Feng created SPARK-36306:
--

 Summary: Refactor seventeenth set of 20 query execution errors to 
use error classes
 Key: SPARK-36306
 URL: https://issues.apache.org/jira/browse/SPARK-36306
 Project: Spark
  Issue Type: Sub-task
  Components: Spark Core, SQL
Affects Versions: 3.2.0
Reporter: Karen Feng


Refactor some exceptions in 
[QueryExecutionErrors|https://github.com/apache/spark/blob/master/sql/catalyst/src/main/scala/org/apache/spark/sql/errors/QueryExecutionErrors.scala]
 to use error classes.

There are currently ~350 exceptions in this file, so this PR focuses on only 
the sixteenth set of 20.
{code:java}
cannotDropMultiPartitionsOnNonatomicPartitionTableError
truncateMultiPartitionUnsupportedError
overwriteTableByUnsupportedExpressionError
dynamicPartitionOverwriteUnsupportedByTableError
failedMergingSchemaError
cannotBroadcastTableOverMaxTableRowsError
cannotBroadcastTableOverMaxTableBytesError
notEnoughMemoryToBuildAndBroadcastTableError
executeCodePathUnsupportedError
cannotMergeClassWithOtherClassError
continuousProcessingUnsupportedByDataSourceError
failedToReadDataError
failedToGenerateEpochMarkerError
foreachWriterAbortedDueToTaskFailureError
integerOverflowError
failedToReadDeltaFileError
failedToReadSnapshotFileError
cannotPurgeAsBreakInternalStateError
cleanUpSourceFilesUnsupportedError
latestOffsetNotCalledError
{code}
For more detail, see the parent ticket SPARK-36094.






[jira] [Updated] (SPARK-36305) Refactor sixteenth set of 20 query execution errors to use error classes

2021-07-26 Thread Karen Feng (Jira)


 [ 
https://issues.apache.org/jira/browse/SPARK-36305?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Karen Feng updated SPARK-36305:
---
Description: 
Refactor some exceptions in 
[QueryExecutionErrors|https://github.com/apache/spark/blob/master/sql/catalyst/src/main/scala/org/apache/spark/sql/errors/QueryExecutionErrors.scala]
 to use error classes.

There are currently ~350 exceptions in this file; so this PR only focuses on 
the sixteenth set of 20.
{code:java}
cannotDropMultiPartitionsOnNonatomicPartitionTableError
truncateMultiPartitionUnsupportedError
overwriteTableByUnsupportedExpressionError
dynamicPartitionOverwriteUnsupportedByTableError
failedMergingSchemaError
cannotBroadcastTableOverMaxTableRowsError
cannotBroadcastTableOverMaxTableBytesError
notEnoughMemoryToBuildAndBroadcastTableError
executeCodePathUnsupportedError
cannotMergeClassWithOtherClassError
continuousProcessingUnsupportedByDataSourceError
failedToReadDataError
failedToGenerateEpochMarkerError
foreachWriterAbortedDueToTaskFailureError
integerOverflowError
failedToReadDeltaFileError
failedToReadSnapshotFileError
cannotPurgeAsBreakInternalStateError
cleanUpSourceFilesUnsupportedError
latestOffsetNotCalledError
{code}
For more detail, see the parent ticket SPARK-36094.

  was:
Refactor some exceptions in 
[QueryExecutionErrors|https://github.com/apache/spark/blob/master/sql/catalyst/src/main/scala/org/apache/spark/sql/errors/QueryExecutionErrors.scala]
 to use error classes.

There are currently ~350 exceptions in this file; so this PR only focuses on 
the fifteenth set of 20.
{code:java}
unsupportedOperationExceptionError
nullLiteralsCannotBeCastedError
notUserDefinedTypeError
cannotLoadUserDefinedTypeError
timeZoneIdNotSpecifiedForTimestampTypeError
notPublicClassError
primitiveTypesNotSupportedError
fieldIndexOnRowWithoutSchemaError
valueIsNullError
onlySupportDataSourcesProvidingFileFormatError
failToSetOriginalPermissionBackError
failToSetOriginalACLBackError
multiFailuresInStageMaterializationError
unrecognizedCompressionSchemaTypeIDError
getParentLoggerNotImplementedError
cannotCreateParquetConverterForTypeError
cannotCreateParquetConverterForDecimalTypeError
cannotCreateParquetConverterForDataTypeError
cannotAddMultiPartitionsOnNonatomicPartitionTableError
userSpecifiedSchemaUnsupportedByDataSourceError
{code}
For more detail, see the parent ticket SPARK-36094.


> Refactor sixteenth set of 20 query execution errors to use error classes
> 
>
> Key: SPARK-36305
> URL: https://issues.apache.org/jira/browse/SPARK-36305
> Project: Spark
>  Issue Type: Sub-task
>  Components: Spark Core, SQL
>Affects Versions: 3.2.0
>Reporter: Karen Feng
>Priority: Major
>
> Refactor some exceptions in 
> [QueryExecutionErrors|https://github.com/apache/spark/blob/master/sql/catalyst/src/main/scala/org/apache/spark/sql/errors/QueryExecutionErrors.scala]
>  to use error classes.
> There are currently ~350 exceptions in this file; so this PR only focuses on 
> the sixteenth set of 20.
> {code:java}
> cannotDropMultiPartitionsOnNonatomicPartitionTableError
> truncateMultiPartitionUnsupportedError
> overwriteTableByUnsupportedExpressionError
> dynamicPartitionOverwriteUnsupportedByTableError
> failedMergingSchemaError
> cannotBroadcastTableOverMaxTableRowsError
> cannotBroadcastTableOverMaxTableBytesError
> notEnoughMemoryToBuildAndBroadcastTableError
> executeCodePathUnsupportedError
> cannotMergeClassWithOtherClassError
> continuousProcessingUnsupportedByDataSourceError
> failedToReadDataError
> failedToGenerateEpochMarkerError
> foreachWriterAbortedDueToTaskFailureError
> integerOverflowError
> failedToReadDeltaFileError
> failedToReadSnapshotFileError
> cannotPurgeAsBreakInternalStateError
> cleanUpSourceFilesUnsupportedError
> latestOffsetNotCalledError
> {code}
> For more detail, see the parent ticket SPARK-36094.






[jira] [Created] (SPARK-36305) Refactor sixteenth set of 20 query execution errors to use error classes

2021-07-26 Thread Karen Feng (Jira)
Karen Feng created SPARK-36305:
--

 Summary: Refactor sixteenth set of 20 query execution errors to 
use error classes
 Key: SPARK-36305
 URL: https://issues.apache.org/jira/browse/SPARK-36305
 Project: Spark
  Issue Type: Sub-task
  Components: Spark Core, SQL
Affects Versions: 3.2.0
Reporter: Karen Feng


Refactor some exceptions in 
[QueryExecutionErrors|https://github.com/apache/spark/blob/master/sql/catalyst/src/main/scala/org/apache/spark/sql/errors/QueryExecutionErrors.scala]
 to use error classes.

There are currently ~350 exceptions in this file; so this PR only focuses on 
the fifteenth set of 20.
{code:java}
unsupportedOperationExceptionError
nullLiteralsCannotBeCastedError
notUserDefinedTypeError
cannotLoadUserDefinedTypeError
timeZoneIdNotSpecifiedForTimestampTypeError
notPublicClassError
primitiveTypesNotSupportedError
fieldIndexOnRowWithoutSchemaError
valueIsNullError
onlySupportDataSourcesProvidingFileFormatError
failToSetOriginalPermissionBackError
failToSetOriginalACLBackError
multiFailuresInStageMaterializationError
unrecognizedCompressionSchemaTypeIDError
getParentLoggerNotImplementedError
cannotCreateParquetConverterForTypeError
cannotCreateParquetConverterForDecimalTypeError
cannotCreateParquetConverterForDataTypeError
cannotAddMultiPartitionsOnNonatomicPartitionTableError
userSpecifiedSchemaUnsupportedByDataSourceError
{code}
For more detail, see the parent ticket SPARK-36094.






[jira] [Created] (SPARK-36303) Refactor fourteenth set of 20 query execution errors to use error classes

2021-07-26 Thread Karen Feng (Jira)
Karen Feng created SPARK-36303:
--

 Summary: Refactor fourteenth set of 20 query execution errors to 
use error classes
 Key: SPARK-36303
 URL: https://issues.apache.org/jira/browse/SPARK-36303
 Project: Spark
  Issue Type: Sub-task
  Components: Spark Core, SQL
Affects Versions: 3.2.0
Reporter: Karen Feng


Refactor some exceptions in 
[QueryExecutionErrors|https://github.com/apache/spark/blob/master/sql/catalyst/src/main/scala/org/apache/spark/sql/errors/QueryExecutionErrors.scala]
 to use error classes.

There are currently ~350 exceptions in this file; so this PR only focuses on 
the thirteenth set of 20.
{code:java}
serDeInterfaceNotFoundError
convertHiveTableToCatalogTableError
cannotRecognizeHiveTypeError
getTablesByTypeUnsupportedByHiveVersionError
dropTableWithPurgeUnsupportedError
alterTableWithDropPartitionAndPurgeUnsupportedError
invalidPartitionFilterError
getPartitionMetadataByFilterError
unsupportedHiveMetastoreVersionError
loadHiveClientCausesNoClassDefFoundError
cannotFetchTablesOfDatabaseError
illegalLocationClauseForViewPartitionError
renamePathAsExistsPathError
renameAsExistsPathError
renameSrcPathNotFoundError
failedRenameTempFileError
legacyMetadataPathExistsError
partitionColumnNotFoundInSchemaError
stateNotDefinedOrAlreadyRemovedError
cannotSetTimeoutDurationError
{code}
For more detail, see the parent ticket SPARK-36094.
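Concretely, the refactor moves each hard-coded message into a named entry in a JSON registry of error classes. The fragment below is only an illustrative sketch for a helper like cannotRecognizeHiveTypeError; the entry name, message text, and `sqlState` value are assumptions, not the exact Spark schema:

```json
{
  "CANNOT_RECOGNIZE_HIVE_TYPE": {
    "message": ["Cannot recognize hive type string: %s"],
    "sqlState": "42000"
  }
}
```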






[jira] [Updated] (SPARK-36303) Refactor fourteenth set of 20 query execution errors to use error classes

2021-07-26 Thread Karen Feng (Jira)


 [ 
https://issues.apache.org/jira/browse/SPARK-36303?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Karen Feng updated SPARK-36303:
---
Description: 
Refactor some exceptions in 
[QueryExecutionErrors|https://github.com/apache/spark/blob/master/sql/catalyst/src/main/scala/org/apache/spark/sql/errors/QueryExecutionErrors.scala]
 to use error classes.

There are currently ~350 exceptions in this file; so this PR only focuses on 
the fourteenth set of 20.
{code:java}
cannotGetEventTimeWatermarkError
cannotSetTimeoutTimestampError
batchMetadataFileNotFoundError
multiStreamingQueriesUsingPathConcurrentlyError
addFilesWithAbsolutePathUnsupportedError
microBatchUnsupportedByDataSourceError
cannotExecuteStreamingRelationExecError
invalidStreamingOutputModeError
catalogPluginClassNotFoundError
catalogPluginClassNotImplementedError
catalogPluginClassNotFoundForCatalogError
catalogFailToFindPublicNoArgConstructorError
catalogFailToCallPublicNoArgConstructorError
cannotInstantiateAbstractCatalogPluginClassError
failedToInstantiateConstructorForCatalogError
noSuchElementExceptionError
noSuchElementExceptionError
cannotMutateReadOnlySQLConfError
cannotCloneOrCopyReadOnlySQLConfError
cannotGetSQLConfInSchedulerEventLoopThreadError
{code}
For more detail, see the parent ticket SPARK-36094.

  was:
Refactor some exceptions in 
[QueryExecutionErrors|https://github.com/apache/spark/blob/master/sql/catalyst/src/main/scala/org/apache/spark/sql/errors/QueryExecutionErrors.scala]
 to use error classes.

There are currently ~350 exceptions in this file; so this PR only focuses on 
the thirteenth set of 20.
{code:java}
serDeInterfaceNotFoundError
convertHiveTableToCatalogTableError
cannotRecognizeHiveTypeError
getTablesByTypeUnsupportedByHiveVersionError
dropTableWithPurgeUnsupportedError
alterTableWithDropPartitionAndPurgeUnsupportedError
invalidPartitionFilterError
getPartitionMetadataByFilterError
unsupportedHiveMetastoreVersionError
loadHiveClientCausesNoClassDefFoundError
cannotFetchTablesOfDatabaseError
illegalLocationClauseForViewPartitionError
renamePathAsExistsPathError
renameAsExistsPathError
renameSrcPathNotFoundError
failedRenameTempFileError
legacyMetadataPathExistsError
partitionColumnNotFoundInSchemaError
stateNotDefinedOrAlreadyRemovedError
cannotSetTimeoutDurationError
{code}
For more detail, see the parent ticket SPARK-36094.


> Refactor fourteenth set of 20 query execution errors to use error classes
> -
>
> Key: SPARK-36303
> URL: https://issues.apache.org/jira/browse/SPARK-36303
> Project: Spark
>  Issue Type: Sub-task
>  Components: Spark Core, SQL
>Affects Versions: 3.2.0
>Reporter: Karen Feng
>Priority: Major
>
> Refactor some exceptions in 
> [QueryExecutionErrors|https://github.com/apache/spark/blob/master/sql/catalyst/src/main/scala/org/apache/spark/sql/errors/QueryExecutionErrors.scala]
>  to use error classes.
> There are currently ~350 exceptions in this file; so this PR only focuses on 
> the fourteenth set of 20.
> {code:java}
> cannotGetEventTimeWatermarkError
> cannotSetTimeoutTimestampError
> batchMetadataFileNotFoundError
> multiStreamingQueriesUsingPathConcurrentlyError
> addFilesWithAbsolutePathUnsupportedError
> microBatchUnsupportedByDataSourceError
> cannotExecuteStreamingRelationExecError
> invalidStreamingOutputModeError
> catalogPluginClassNotFoundError
> catalogPluginClassNotImplementedError
> catalogPluginClassNotFoundForCatalogError
> catalogFailToFindPublicNoArgConstructorError
> catalogFailToCallPublicNoArgConstructorError
> cannotInstantiateAbstractCatalogPluginClassError
> failedToInstantiateConstructorForCatalogError
> noSuchElementExceptionError
> noSuchElementExceptionError
> cannotMutateReadOnlySQLConfError
> cannotCloneOrCopyReadOnlySQLConfError
> cannotGetSQLConfInSchedulerEventLoopThreadError
> {code}
> For more detail, see the parent ticket SPARK-36094.






[jira] [Created] (SPARK-36304) Refactor fifteenth set of 20 query execution errors to use error classes

2021-07-26 Thread Karen Feng (Jira)
Karen Feng created SPARK-36304:
--

 Summary: Refactor fifteenth set of 20 query execution errors to 
use error classes
 Key: SPARK-36304
 URL: https://issues.apache.org/jira/browse/SPARK-36304
 Project: Spark
  Issue Type: Sub-task
  Components: Spark Core, SQL
Affects Versions: 3.2.0
Reporter: Karen Feng


Refactor some exceptions in 
[QueryExecutionErrors|https://github.com/apache/spark/blob/master/sql/catalyst/src/main/scala/org/apache/spark/sql/errors/QueryExecutionErrors.scala]
 to use error classes.

There are currently ~350 exceptions in this file; so this PR only focuses on 
the fourteenth set of 20.
{code:java}
cannotGetEventTimeWatermarkError
cannotSetTimeoutTimestampError
batchMetadataFileNotFoundError
multiStreamingQueriesUsingPathConcurrentlyError
addFilesWithAbsolutePathUnsupportedError
microBatchUnsupportedByDataSourceError
cannotExecuteStreamingRelationExecError
invalidStreamingOutputModeError
catalogPluginClassNotFoundError
catalogPluginClassNotImplementedError
catalogPluginClassNotFoundForCatalogError
catalogFailToFindPublicNoArgConstructorError
catalogFailToCallPublicNoArgConstructorError
cannotInstantiateAbstractCatalogPluginClassError
failedToInstantiateConstructorForCatalogError
noSuchElementExceptionError
noSuchElementExceptionError
cannotMutateReadOnlySQLConfError
cannotCloneOrCopyReadOnlySQLConfError
cannotGetSQLConfInSchedulerEventLoopThreadError
{code}
For more detail, see the parent ticket SPARK-36094.






[jira] [Updated] (SPARK-36301) Refactor twelfth set of 20 query execution errors to use error classes

2021-07-26 Thread Karen Feng (Jira)


 [ 
https://issues.apache.org/jira/browse/SPARK-36301?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Karen Feng updated SPARK-36301:
---
Description: 
Refactor some exceptions in 
[QueryExecutionErrors|https://github.com/apache/spark/blob/master/sql/catalyst/src/main/scala/org/apache/spark/sql/errors/QueryExecutionErrors.scala]
 to use error classes.

There are currently ~350 exceptions in this file; so this PR only focuses on 
the twelfth set of 20.
{code:java}
cannotRewriteDomainJoinWithConditionsError
decorrelateInnerQueryThroughPlanUnsupportedError
methodCalledInAnalyzerNotAllowedError
cannotSafelyMergeSerdePropertiesError
pairUnsupportedAtFunctionError
onceStrategyIdempotenceIsBrokenForBatchError[TreeType
structuralIntegrityOfInputPlanIsBrokenInClassError
structuralIntegrityIsBrokenAfterApplyingRuleError
ruleIdNotFoundForRuleError
cannotCreateArrayWithElementsExceedLimitError
indexOutOfBoundsOfArrayDataError
malformedRecordsDetectedInRecordParsingError
remoteOperationsUnsupportedError
invalidKerberosConfigForHiveServer2Error
parentSparkUIToAttachTabNotFoundError
inferSchemaUnsupportedForHiveError
requestedPartitionsMismatchTablePartitionsError
dynamicPartitionKeyNotAmongWrittenPartitionPathsError
cannotRemovePartitionDirError
cannotCreateStagingDirError
{code}
For more detail, see the parent ticket SPARK-36094.

  was:
Refactor some exceptions in 
[QueryExecutionErrors|https://github.com/apache/spark/blob/master/sql/catalyst/src/main/scala/org/apache/spark/sql/errors/QueryExecutionErrors.scala]
 to use error classes.

There are currently ~350 exceptions in this file; so this PR only focuses on 
the eleventh set of 20.
{code:java}
expressionDecodingError
expressionEncodingError
classHasUnexpectedSerializerError
cannotGetOuterPointerForInnerClassError
userDefinedTypeNotAnnotatedAndRegisteredError
invalidInputSyntaxForBooleanError
unsupportedOperandTypeForSizeFunctionError
unexpectedValueForStartInFunctionError
unexpectedValueForLengthInFunctionError
sqlArrayIndexNotStartAtOneError
concatArraysWithElementsExceedLimitError
flattenArraysWithElementsExceedLimitError
createArrayWithElementsExceedLimitError
unionArrayWithElementsExceedLimitError
initialTypeNotTargetDataTypeError
initialTypeNotTargetDataTypesError
cannotConvertColumnToJSONError
malformedRecordsDetectedInSchemaInferenceError
malformedJSONError
malformedRecordsDetectedInSchemaInferenceError
{code}
For more detail, see the parent ticket SPARK-36094.


> Refactor twelfth set of 20 query execution errors to use error classes
> --
>
> Key: SPARK-36301
> URL: https://issues.apache.org/jira/browse/SPARK-36301
> Project: Spark
>  Issue Type: Sub-task
>  Components: Spark Core, SQL
>Affects Versions: 3.2.0
>Reporter: Karen Feng
>Priority: Major
>
> Refactor some exceptions in 
> [QueryExecutionErrors|https://github.com/apache/spark/blob/master/sql/catalyst/src/main/scala/org/apache/spark/sql/errors/QueryExecutionErrors.scala]
>  to use error classes.
> There are currently ~350 exceptions in this file; so this PR only focuses on 
> the twelfth set of 20.
> {code:java}
> cannotRewriteDomainJoinWithConditionsError
> decorrelateInnerQueryThroughPlanUnsupportedError
> methodCalledInAnalyzerNotAllowedError
> cannotSafelyMergeSerdePropertiesError
> pairUnsupportedAtFunctionError
> onceStrategyIdempotenceIsBrokenForBatchError[TreeType
> structuralIntegrityOfInputPlanIsBrokenInClassError
> structuralIntegrityIsBrokenAfterApplyingRuleError
> ruleIdNotFoundForRuleError
> cannotCreateArrayWithElementsExceedLimitError
> indexOutOfBoundsOfArrayDataError
> malformedRecordsDetectedInRecordParsingError
> remoteOperationsUnsupportedError
> invalidKerberosConfigForHiveServer2Error
> parentSparkUIToAttachTabNotFoundError
> inferSchemaUnsupportedForHiveError
> requestedPartitionsMismatchTablePartitionsError
> dynamicPartitionKeyNotAmongWrittenPartitionPathsError
> cannotRemovePartitionDirError
> cannotCreateStagingDirError
> {code}
> For more detail, see the parent ticket SPARK-36094.






[jira] [Updated] (SPARK-36302) Refactor thirteenth set of 20 query execution errors to use error classes

2021-07-26 Thread Karen Feng (Jira)


 [ 
https://issues.apache.org/jira/browse/SPARK-36302?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Karen Feng updated SPARK-36302:
---
Description: 
Refactor some exceptions in 
[QueryExecutionErrors|https://github.com/apache/spark/blob/master/sql/catalyst/src/main/scala/org/apache/spark/sql/errors/QueryExecutionErrors.scala]
 to use error classes.

There are currently ~350 exceptions in this file; so this PR only focuses on 
the thirteenth set of 20.
{code:java}
serDeInterfaceNotFoundError
convertHiveTableToCatalogTableError
cannotRecognizeHiveTypeError
getTablesByTypeUnsupportedByHiveVersionError
dropTableWithPurgeUnsupportedError
alterTableWithDropPartitionAndPurgeUnsupportedError
invalidPartitionFilterError
getPartitionMetadataByFilterError
unsupportedHiveMetastoreVersionError
loadHiveClientCausesNoClassDefFoundError
cannotFetchTablesOfDatabaseError
illegalLocationClauseForViewPartitionError
renamePathAsExistsPathError
renameAsExistsPathError
renameSrcPathNotFoundError
failedRenameTempFileError
legacyMetadataPathExistsError
partitionColumnNotFoundInSchemaError
stateNotDefinedOrAlreadyRemovedError
cannotSetTimeoutDurationError
{code}
For more detail, see the parent ticket SPARK-36094.

  was:
Refactor some exceptions in 
[QueryExecutionErrors|https://github.com/apache/spark/blob/master/sql/catalyst/src/main/scala/org/apache/spark/sql/errors/QueryExecutionErrors.scala]
 to use error classes.

There are currently ~350 exceptions in this file; so this PR only focuses on 
the twelfth set of 20.
{code:java}
cannotRewriteDomainJoinWithConditionsError
decorrelateInnerQueryThroughPlanUnsupportedError
methodCalledInAnalyzerNotAllowedError
cannotSafelyMergeSerdePropertiesError
pairUnsupportedAtFunctionError
onceStrategyIdempotenceIsBrokenForBatchError[TreeType
structuralIntegrityOfInputPlanIsBrokenInClassError
structuralIntegrityIsBrokenAfterApplyingRuleError
ruleIdNotFoundForRuleError
cannotCreateArrayWithElementsExceedLimitError
indexOutOfBoundsOfArrayDataError
malformedRecordsDetectedInRecordParsingError
remoteOperationsUnsupportedError
invalidKerberosConfigForHiveServer2Error
parentSparkUIToAttachTabNotFoundError
inferSchemaUnsupportedForHiveError
requestedPartitionsMismatchTablePartitionsError
dynamicPartitionKeyNotAmongWrittenPartitionPathsError
cannotRemovePartitionDirError
cannotCreateStagingDirError
{code}
For more detail, see the parent ticket SPARK-36094.


> Refactor thirteenth set of 20 query execution errors to use error classes
> -
>
> Key: SPARK-36302
> URL: https://issues.apache.org/jira/browse/SPARK-36302
> Project: Spark
>  Issue Type: Sub-task
>  Components: Spark Core, SQL
>Affects Versions: 3.2.0
>Reporter: Karen Feng
>Priority: Major
>
> Refactor some exceptions in 
> [QueryExecutionErrors|https://github.com/apache/spark/blob/master/sql/catalyst/src/main/scala/org/apache/spark/sql/errors/QueryExecutionErrors.scala]
>  to use error classes.
> There are currently ~350 exceptions in this file; so this PR only focuses on 
> the thirteenth set of 20.
> {code:java}
> serDeInterfaceNotFoundError
> convertHiveTableToCatalogTableError
> cannotRecognizeHiveTypeError
> getTablesByTypeUnsupportedByHiveVersionError
> dropTableWithPurgeUnsupportedError
> alterTableWithDropPartitionAndPurgeUnsupportedError
> invalidPartitionFilterError
> getPartitionMetadataByFilterError
> unsupportedHiveMetastoreVersionError
> loadHiveClientCausesNoClassDefFoundError
> cannotFetchTablesOfDatabaseError
> illegalLocationClauseForViewPartitionError
> renamePathAsExistsPathError
> renameAsExistsPathError
> renameSrcPathNotFoundError
> failedRenameTempFileError
> legacyMetadataPathExistsError
> partitionColumnNotFoundInSchemaError
> stateNotDefinedOrAlreadyRemovedError
> cannotSetTimeoutDurationError
> {code}
> For more detail, see the parent ticket SPARK-36094.






[jira] [Created] (SPARK-36302) Refactor thirteenth set of 20 query execution errors to use error classes

2021-07-26 Thread Karen Feng (Jira)
Karen Feng created SPARK-36302:
--

 Summary: Refactor thirteenth set of 20 query execution errors to 
use error classes
 Key: SPARK-36302
 URL: https://issues.apache.org/jira/browse/SPARK-36302
 Project: Spark
  Issue Type: Sub-task
  Components: Spark Core, SQL
Affects Versions: 3.2.0
Reporter: Karen Feng


Refactor some exceptions in 
[QueryExecutionErrors|https://github.com/apache/spark/blob/master/sql/catalyst/src/main/scala/org/apache/spark/sql/errors/QueryExecutionErrors.scala]
 to use error classes.

There are currently ~350 exceptions in this file; so this PR only focuses on 
the twelfth set of 20.
{code:java}
cannotRewriteDomainJoinWithConditionsError
decorrelateInnerQueryThroughPlanUnsupportedError
methodCalledInAnalyzerNotAllowedError
cannotSafelyMergeSerdePropertiesError
pairUnsupportedAtFunctionError
onceStrategyIdempotenceIsBrokenForBatchError[TreeType
structuralIntegrityOfInputPlanIsBrokenInClassError
structuralIntegrityIsBrokenAfterApplyingRuleError
ruleIdNotFoundForRuleError
cannotCreateArrayWithElementsExceedLimitError
indexOutOfBoundsOfArrayDataError
malformedRecordsDetectedInRecordParsingError
remoteOperationsUnsupportedError
invalidKerberosConfigForHiveServer2Error
parentSparkUIToAttachTabNotFoundError
inferSchemaUnsupportedForHiveError
requestedPartitionsMismatchTablePartitionsError
dynamicPartitionKeyNotAmongWrittenPartitionPathsError
cannotRemovePartitionDirError
cannotCreateStagingDirError
{code}
For more detail, see the parent ticket SPARK-36094.






[jira] [Created] (SPARK-36301) Refactor twelfth set of 20 query execution errors to use error classes

2021-07-26 Thread Karen Feng (Jira)
Karen Feng created SPARK-36301:
--

 Summary: Refactor twelfth set of 20 query execution errors to use 
error classes
 Key: SPARK-36301
 URL: https://issues.apache.org/jira/browse/SPARK-36301
 Project: Spark
  Issue Type: Sub-task
  Components: Spark Core, SQL
Affects Versions: 3.2.0
Reporter: Karen Feng


Refactor some exceptions in 
[QueryExecutionErrors|https://github.com/apache/spark/blob/master/sql/catalyst/src/main/scala/org/apache/spark/sql/errors/QueryExecutionErrors.scala]
 to use error classes.

There are currently ~350 exceptions in this file; so this PR only focuses on 
the eleventh set of 20.
{code:java}
expressionDecodingError
expressionEncodingError
classHasUnexpectedSerializerError
cannotGetOuterPointerForInnerClassError
userDefinedTypeNotAnnotatedAndRegisteredError
invalidInputSyntaxForBooleanError
unsupportedOperandTypeForSizeFunctionError
unexpectedValueForStartInFunctionError
unexpectedValueForLengthInFunctionError
sqlArrayIndexNotStartAtOneError
concatArraysWithElementsExceedLimitError
flattenArraysWithElementsExceedLimitError
createArrayWithElementsExceedLimitError
unionArrayWithElementsExceedLimitError
initialTypeNotTargetDataTypeError
initialTypeNotTargetDataTypesError
cannotConvertColumnToJSONError
malformedRecordsDetectedInSchemaInferenceError
malformedJSONError
malformedRecordsDetectedInSchemaInferenceError
{code}
For more detail, see the parent ticket SPARK-36094.
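One motivation for this family of refactors can be sketched as follows. The names below (`ClassedException`, `CANNOT_READ_DELTA_FILE`) are hypothetical, not Spark's real API: once a helper like failedToReadDeltaFileError tags its exception with an error class, callers and tests can match on the stable class name rather than on message text.

```scala
// Hypothetical sketch: matching on an error class instead of message text.
class ClassedException(val errorClass: String, message: String)
    extends RuntimeException(message)

object ErrorClassMatchSketch {
  // Stand-in for a refactored helper such as failedToReadDeltaFileError.
  def failedToReadDeltaFileSketch(path: String): Throwable =
    new ClassedException("CANNOT_READ_DELTA_FILE", s"Failed to read delta file $path")

  // Callers branch on the stable error class, not the human-readable message.
  def describe(t: Throwable): String = t match {
    case e: ClassedException => s"error class: ${e.errorClass}"
    case other               => s"unclassified: ${other.getClass.getSimpleName}"
  }
}
```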






[jira] [Updated] (SPARK-36300) Refactor eleventh set of 20 query execution errors to use error classes

2021-07-26 Thread Karen Feng (Jira)


 [ 
https://issues.apache.org/jira/browse/SPARK-36300?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Karen Feng updated SPARK-36300:
---
Description: 
Refactor some exceptions in 
[QueryExecutionErrors|https://github.com/apache/spark/blob/master/sql/catalyst/src/main/scala/org/apache/spark/sql/errors/QueryExecutionErrors.scala]
 to use error classes.

There are currently ~350 exceptions in this file; so this PR only focuses on 
the eleventh set of 20.
{code:java}
expressionDecodingError
expressionEncodingError
classHasUnexpectedSerializerError
cannotGetOuterPointerForInnerClassError
userDefinedTypeNotAnnotatedAndRegisteredError
invalidInputSyntaxForBooleanError
unsupportedOperandTypeForSizeFunctionError
unexpectedValueForStartInFunctionError
unexpectedValueForLengthInFunctionError
sqlArrayIndexNotStartAtOneError
concatArraysWithElementsExceedLimitError
flattenArraysWithElementsExceedLimitError
createArrayWithElementsExceedLimitError
unionArrayWithElementsExceedLimitError
initialTypeNotTargetDataTypeError
initialTypeNotTargetDataTypesError
cannotConvertColumnToJSONError
malformedRecordsDetectedInSchemaInferenceError
malformedJSONError
malformedRecordsDetectedInSchemaInferenceError
{code}
For more detail, see the parent ticket SPARK-36094.

  was:
Refactor some exceptions in 
[QueryExecutionErrors|https://github.com/apache/spark/blob/master/sql/catalyst/src/main/scala/org/apache/spark/sql/errors/QueryExecutionErrors.scala]
 to use error classes.

There are currently ~350 exceptions in this file; so this PR only focuses on 
the tenth set of 20.
{code:java}
registeringStreamingQueryListenerError
concurrentQueryInstanceError
cannotParseJsonArraysAsStructsError
cannotParseStringAsDataTypeError
failToParseEmptyStringForDataTypeError
failToParseValueForDataTypeError
rootConverterReturnNullError
cannotHaveCircularReferencesInBeanClassError
cannotHaveCircularReferencesInClassError
cannotUseInvalidJavaIdentifierAsFieldNameError
cannotFindEncoderForTypeError
attributesForTypeUnsupportedError
schemaForTypeUnsupportedError
cannotFindConstructorForTypeError
paramExceedOneCharError
paramIsNotIntegerError
paramIsNotBooleanValueError
foundNullValueForNotNullableFieldError
malformedCSVRecordError
elementsOfTupleExceedLimitError
{code}
For more detail, see the parent ticket SPARK-36094.


> Refactor eleventh set of 20 query execution errors to use error classes
> ---
>
> Key: SPARK-36300
> URL: https://issues.apache.org/jira/browse/SPARK-36300
> Project: Spark
>  Issue Type: Sub-task
>  Components: Spark Core, SQL
>Affects Versions: 3.2.0
>Reporter: Karen Feng
>Priority: Major
>
> Refactor some exceptions in 
> [QueryExecutionErrors|https://github.com/apache/spark/blob/master/sql/catalyst/src/main/scala/org/apache/spark/sql/errors/QueryExecutionErrors.scala]
>  to use error classes.
> There are currently ~350 exceptions in this file; so this PR only focuses on 
> the eleventh set of 20.
> {code:java}
> expressionDecodingError
> expressionEncodingError
> classHasUnexpectedSerializerError
> cannotGetOuterPointerForInnerClassError
> userDefinedTypeNotAnnotatedAndRegisteredError
> invalidInputSyntaxForBooleanError
> unsupportedOperandTypeForSizeFunctionError
> unexpectedValueForStartInFunctionError
> unexpectedValueForLengthInFunctionError
> sqlArrayIndexNotStartAtOneError
> concatArraysWithElementsExceedLimitError
> flattenArraysWithElementsExceedLimitError
> createArrayWithElementsExceedLimitError
> unionArrayWithElementsExceedLimitError
> initialTypeNotTargetDataTypeError
> initialTypeNotTargetDataTypesError
> cannotConvertColumnToJSONError
> malformedRecordsDetectedInSchemaInferenceError
> malformedJSONError
> malformedRecordsDetectedInSchemaInferenceError
> {code}
> For more detail, see the parent ticket SPARK-36094.






[jira] [Updated] (SPARK-36298) Refactor ninth set of 20 query execution errors to use error classes

2021-07-26 Thread Karen Feng (Jira)


 [ 
https://issues.apache.org/jira/browse/SPARK-36298?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Karen Feng updated SPARK-36298:
---
Description: 
Refactor some exceptions in 
[QueryExecutionErrors|https://github.com/apache/spark/blob/master/sql/catalyst/src/main/scala/org/apache/spark/sql/errors/QueryExecutionErrors.scala]
 to use error classes.

There are currently ~350 exceptions in this file; so this PR only focuses on 
the ninth set of 20.
{code:java}
unscaledValueTooLargeForPrecisionError
decimalPrecisionExceedsMaxPrecisionError
outOfDecimalTypeRangeError
unsupportedArrayTypeError
unsupportedJavaTypeError
failedParsingStructTypeError
failedMergingFieldsError
cannotMergeDecimalTypesWithIncompatiblePrecisionAndScaleError
cannotMergeDecimalTypesWithIncompatiblePrecisionError
cannotMergeDecimalTypesWithIncompatibleScaleError
cannotMergeIncompatibleDataTypesError
exceedMapSizeLimitError
duplicateMapKeyFoundError
mapDataKeyArrayLengthDiffersFromValueArrayLengthError
fieldDiffersFromDerivedLocalDateError
failToParseDateTimeInNewParserError
failToFormatDateTimeInNewFormatterError
failToRecognizePatternAfterUpgradeError
failToRecognizePatternError
cannotCastUTF8StringToDataTypeError
{code}
For more detail, see the parent ticket SPARK-36094.

  was:
Refactor some exceptions in 
[QueryExecutionErrors|https://github.com/apache/spark/blob/master/sql/catalyst/src/main/scala/org/apache/spark/sql/errors/QueryExecutionErrors.scala]
 to use error classes.

There are currently ~350 exceptions in this file; so this PR only focuses on 
the eighth set of 20.
{code:java}
executeBroadcastTimeoutError
cannotCompareCostWithTargetCostError
unsupportedDataTypeError
notSupportTypeError
notSupportNonPrimitiveTypeError
unsupportedTypeError
useDictionaryEncodingWhenDictionaryOverflowError
endOfIteratorError
cannotAllocateMemoryToGrowBytesToBytesMapError
cannotAcquireMemoryToBuildLongHashedRelationError
cannotAcquireMemoryToBuildUnsafeHashedRelationError
rowLargerThan256MUnsupportedError
cannotBuildHashedRelationWithUniqueKeysExceededError
cannotBuildHashedRelationLargerThan8GError
failedToPushRowIntoRowQueueError
unexpectedWindowFunctionFrameError
cannotParseStatisticAsPercentileError
statisticNotRecognizedError
unknownColumnError
unexpectedAccumulableUpdateValueError
{code}
For more detail, see the parent ticket SPARK-36094.


> Refactor ninth set of 20 query execution errors to use error classes
> 
>
> Key: SPARK-36298
> URL: https://issues.apache.org/jira/browse/SPARK-36298
> Project: Spark
>  Issue Type: Sub-task
>  Components: Spark Core, SQL
>Affects Versions: 3.2.0
>Reporter: Karen Feng
>Priority: Major
>
> Refactor some exceptions in 
> [QueryExecutionErrors|https://github.com/apache/spark/blob/master/sql/catalyst/src/main/scala/org/apache/spark/sql/errors/QueryExecutionErrors.scala]
>  to use error classes.
> There are currently ~350 exceptions in this file, so this PR focuses only on 
> the ninth set of 20.
> {code:java}
> unscaledValueTooLargeForPrecisionError
> decimalPrecisionExceedsMaxPrecisionError
> outOfDecimalTypeRangeError
> unsupportedArrayTypeError
> unsupportedJavaTypeError
> failedParsingStructTypeError
> failedMergingFieldsError
> cannotMergeDecimalTypesWithIncompatiblePrecisionAndScaleError
> cannotMergeDecimalTypesWithIncompatiblePrecisionError
> cannotMergeDecimalTypesWithIncompatibleScaleError
> cannotMergeIncompatibleDataTypesError
> exceedMapSizeLimitError
> duplicateMapKeyFoundError
> mapDataKeyArrayLengthDiffersFromValueArrayLengthError
> fieldDiffersFromDerivedLocalDateError
> failToParseDateTimeInNewParserError
> failToFormatDateTimeInNewFormatterError
> failToRecognizePatternAfterUpgradeError
> failToRecognizePatternError
> cannotCastUTF8StringToDataTypeError
> {code}
> For more detail, see the parent ticket SPARK-36094.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: issues-unsubscr...@spark.apache.org
For additional commands, e-mail: issues-h...@spark.apache.org



[jira] [Updated] (SPARK-36299) Refactor tenth set of 20 query execution errors to use error classes

2021-07-26 Thread Karen Feng (Jira)


 [ 
https://issues.apache.org/jira/browse/SPARK-36299?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Karen Feng updated SPARK-36299:
---
Description: 
Refactor some exceptions in 
[QueryExecutionErrors|https://github.com/apache/spark/blob/master/sql/catalyst/src/main/scala/org/apache/spark/sql/errors/QueryExecutionErrors.scala]
 to use error classes.

There are currently ~350 exceptions in this file, so this PR focuses only on 
the tenth set of 20.
{code:java}
registeringStreamingQueryListenerError
concurrentQueryInstanceError
cannotParseJsonArraysAsStructsError
cannotParseStringAsDataTypeError
failToParseEmptyStringForDataTypeError
failToParseValueForDataTypeError
rootConverterReturnNullError
cannotHaveCircularReferencesInBeanClassError
cannotHaveCircularReferencesInClassError
cannotUseInvalidJavaIdentifierAsFieldNameError
cannotFindEncoderForTypeError
attributesForTypeUnsupportedError
schemaForTypeUnsupportedError
cannotFindConstructorForTypeError
paramExceedOneCharError
paramIsNotIntegerError
paramIsNotBooleanValueError
foundNullValueForNotNullableFieldError
malformedCSVRecordError
elementsOfTupleExceedLimitError
{code}
For more detail, see the parent ticket SPARK-36094.

  was:
Refactor some exceptions in 
[QueryExecutionErrors|https://github.com/apache/spark/blob/master/sql/catalyst/src/main/scala/org/apache/spark/sql/errors/QueryExecutionErrors.scala]
 to use error classes.

There are currently ~350 exceptions in this file, so this PR focuses only on 
the ninth set of 20.
{code:java}
unscaledValueTooLargeForPrecisionError
decimalPrecisionExceedsMaxPrecisionError
outOfDecimalTypeRangeError
unsupportedArrayTypeError
unsupportedJavaTypeError
failedParsingStructTypeError
failedMergingFieldsError
cannotMergeDecimalTypesWithIncompatiblePrecisionAndScaleError
cannotMergeDecimalTypesWithIncompatiblePrecisionError
cannotMergeDecimalTypesWithIncompatibleScaleError
cannotMergeIncompatibleDataTypesError
exceedMapSizeLimitError
duplicateMapKeyFoundError
mapDataKeyArrayLengthDiffersFromValueArrayLengthError
fieldDiffersFromDerivedLocalDateError
failToParseDateTimeInNewParserError
failToFormatDateTimeInNewFormatterError
failToRecognizePatternAfterUpgradeError
failToRecognizePatternError
cannotCastUTF8StringToDataTypeError
{code}
For more detail, see the parent ticket SPARK-36094.


> Refactor tenth set of 20 query execution errors to use error classes
> 
>
> Key: SPARK-36299
> URL: https://issues.apache.org/jira/browse/SPARK-36299
> Project: Spark
>  Issue Type: Sub-task
>  Components: Spark Core, SQL
>Affects Versions: 3.2.0
>Reporter: Karen Feng
>Priority: Major
>
> Refactor some exceptions in 
> [QueryExecutionErrors|https://github.com/apache/spark/blob/master/sql/catalyst/src/main/scala/org/apache/spark/sql/errors/QueryExecutionErrors.scala]
>  to use error classes.
> There are currently ~350 exceptions in this file, so this PR focuses only on 
> the tenth set of 20.
> {code:java}
> registeringStreamingQueryListenerError
> concurrentQueryInstanceError
> cannotParseJsonArraysAsStructsError
> cannotParseStringAsDataTypeError
> failToParseEmptyStringForDataTypeError
> failToParseValueForDataTypeError
> rootConverterReturnNullError
> cannotHaveCircularReferencesInBeanClassError
> cannotHaveCircularReferencesInClassError
> cannotUseInvalidJavaIdentifierAsFieldNameError
> cannotFindEncoderForTypeError
> attributesForTypeUnsupportedError
> schemaForTypeUnsupportedError
> cannotFindConstructorForTypeError
> paramExceedOneCharError
> paramIsNotIntegerError
> paramIsNotBooleanValueError
> foundNullValueForNotNullableFieldError
> malformedCSVRecordError
> elementsOfTupleExceedLimitError
> {code}
> For more detail, see the parent ticket SPARK-36094.






[jira] [Created] (SPARK-36299) Refactor tenth set of 20 query execution errors to use error classes

2021-07-26 Thread Karen Feng (Jira)
Karen Feng created SPARK-36299:
--

 Summary: Refactor tenth set of 20 query execution errors to use 
error classes
 Key: SPARK-36299
 URL: https://issues.apache.org/jira/browse/SPARK-36299
 Project: Spark
  Issue Type: Sub-task
  Components: Spark Core, SQL
Affects Versions: 3.2.0
Reporter: Karen Feng


Refactor some exceptions in 
[QueryExecutionErrors|https://github.com/apache/spark/blob/master/sql/catalyst/src/main/scala/org/apache/spark/sql/errors/QueryExecutionErrors.scala]
 to use error classes.

There are currently ~350 exceptions in this file, so this PR focuses only on 
the ninth set of 20.
{code:java}
unscaledValueTooLargeForPrecisionError
decimalPrecisionExceedsMaxPrecisionError
outOfDecimalTypeRangeError
unsupportedArrayTypeError
unsupportedJavaTypeError
failedParsingStructTypeError
failedMergingFieldsError
cannotMergeDecimalTypesWithIncompatiblePrecisionAndScaleError
cannotMergeDecimalTypesWithIncompatiblePrecisionError
cannotMergeDecimalTypesWithIncompatibleScaleError
cannotMergeIncompatibleDataTypesError
exceedMapSizeLimitError
duplicateMapKeyFoundError
mapDataKeyArrayLengthDiffersFromValueArrayLengthError
fieldDiffersFromDerivedLocalDateError
failToParseDateTimeInNewParserError
failToFormatDateTimeInNewFormatterError
failToRecognizePatternAfterUpgradeError
failToRecognizePatternError
cannotCastUTF8StringToDataTypeError
{code}
For more detail, see the parent ticket SPARK-36094.






[jira] [Created] (SPARK-36300) Refactor eleventh set of 20 query execution errors to use error classes

2021-07-26 Thread Karen Feng (Jira)
Karen Feng created SPARK-36300:
--

 Summary: Refactor eleventh set of 20 query execution errors to use 
error classes
 Key: SPARK-36300
 URL: https://issues.apache.org/jira/browse/SPARK-36300
 Project: Spark
  Issue Type: Sub-task
  Components: Spark Core, SQL
Affects Versions: 3.2.0
Reporter: Karen Feng


Refactor some exceptions in 
[QueryExecutionErrors|https://github.com/apache/spark/blob/master/sql/catalyst/src/main/scala/org/apache/spark/sql/errors/QueryExecutionErrors.scala]
 to use error classes.

There are currently ~350 exceptions in this file, so this PR focuses only on 
the tenth set of 20.
{code:java}
registeringStreamingQueryListenerError
concurrentQueryInstanceError
cannotParseJsonArraysAsStructsError
cannotParseStringAsDataTypeError
failToParseEmptyStringForDataTypeError
failToParseValueForDataTypeError
rootConverterReturnNullError
cannotHaveCircularReferencesInBeanClassError
cannotHaveCircularReferencesInClassError
cannotUseInvalidJavaIdentifierAsFieldNameError
cannotFindEncoderForTypeError
attributesForTypeUnsupportedError
schemaForTypeUnsupportedError
cannotFindConstructorForTypeError
paramExceedOneCharError
paramIsNotIntegerError
paramIsNotBooleanValueError
foundNullValueForNotNullableFieldError
malformedCSVRecordError
elementsOfTupleExceedLimitError
{code}
For more detail, see the parent ticket SPARK-36094.






[jira] [Updated] (SPARK-36297) Refactor eighth set of 20 query execution errors to use error classes

2021-07-26 Thread Karen Feng (Jira)


 [ 
https://issues.apache.org/jira/browse/SPARK-36297?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Karen Feng updated SPARK-36297:
---
Description: 
Refactor some exceptions in 
[QueryExecutionErrors|https://github.com/apache/spark/blob/master/sql/catalyst/src/main/scala/org/apache/spark/sql/errors/QueryExecutionErrors.scala]
 to use error classes.

There are currently ~350 exceptions in this file, so this PR focuses only on 
the eighth set of 20.
{code:java}
executeBroadcastTimeoutError
cannotCompareCostWithTargetCostError
unsupportedDataTypeError
notSupportTypeError
notSupportNonPrimitiveTypeError
unsupportedTypeError
useDictionaryEncodingWhenDictionaryOverflowError
endOfIteratorError
cannotAllocateMemoryToGrowBytesToBytesMapError
cannotAcquireMemoryToBuildLongHashedRelationError
cannotAcquireMemoryToBuildUnsafeHashedRelationError
rowLargerThan256MUnsupportedError
cannotBuildHashedRelationWithUniqueKeysExceededError
cannotBuildHashedRelationLargerThan8GError
failedToPushRowIntoRowQueueError
unexpectedWindowFunctionFrameError
cannotParseStatisticAsPercentileError
statisticNotRecognizedError
unknownColumnError
unexpectedAccumulableUpdateValueError
{code}
For more detail, see the parent ticket SPARK-36094.

  was:
Refactor some exceptions in 
[QueryExecutionErrors|https://github.com/apache/spark/blob/master/sql/catalyst/src/main/scala/org/apache/spark/sql/errors/QueryExecutionErrors.scala]
 to use error classes.

There are currently ~350 exceptions in this file, so this PR focuses only on 
the seventh set of 20.
{code:java}
missingJdbcTableNameAndQueryError
emptyOptionError
invalidJdbcTxnIsolationLevelError
cannotGetJdbcTypeError
unrecognizedSqlTypeError
unsupportedJdbcTypeError
unsupportedArrayElementTypeBasedOnBinaryError
nestedArraysUnsupportedError
cannotTranslateNonNullValueForFieldError
invalidJdbcNumPartitionsError
transactionUnsupportedByJdbcServerError
dataTypeUnsupportedYetError
unsupportedOperationForDataTypeError
inputFilterNotFullyConvertibleError
cannotReadFooterForFileError
cannotReadFooterForFileError
foundDuplicateFieldInCaseInsensitiveModeError
failedToMergeIncompatibleSchemasError
ddlUnsupportedTemporarilyError
operatingOnCanonicalizationPlanError
{code}
For more detail, see the parent ticket SPARK-36094.


> Refactor eighth set of 20 query execution errors to use error classes
> -
>
> Key: SPARK-36297
> URL: https://issues.apache.org/jira/browse/SPARK-36297
> Project: Spark
>  Issue Type: Sub-task
>  Components: Spark Core, SQL
>Affects Versions: 3.2.0
>Reporter: Karen Feng
>Priority: Major
>
> Refactor some exceptions in 
> [QueryExecutionErrors|https://github.com/apache/spark/blob/master/sql/catalyst/src/main/scala/org/apache/spark/sql/errors/QueryExecutionErrors.scala]
>  to use error classes.
> There are currently ~350 exceptions in this file, so this PR focuses only on 
> the eighth set of 20.
> {code:java}
> executeBroadcastTimeoutError
> cannotCompareCostWithTargetCostError
> unsupportedDataTypeError
> notSupportTypeError
> notSupportNonPrimitiveTypeError
> unsupportedTypeError
> useDictionaryEncodingWhenDictionaryOverflowError
> endOfIteratorError
> cannotAllocateMemoryToGrowBytesToBytesMapError
> cannotAcquireMemoryToBuildLongHashedRelationError
> cannotAcquireMemoryToBuildUnsafeHashedRelationError
> rowLargerThan256MUnsupportedError
> cannotBuildHashedRelationWithUniqueKeysExceededError
> cannotBuildHashedRelationLargerThan8GError
> failedToPushRowIntoRowQueueError
> unexpectedWindowFunctionFrameError
> cannotParseStatisticAsPercentileError
> statisticNotRecognizedError
> unknownColumnError
> unexpectedAccumulableUpdateValueError
> {code}
> For more detail, see the parent ticket SPARK-36094.






[jira] [Created] (SPARK-36298) Refactor ninth set of 20 query execution errors to use error classes

2021-07-26 Thread Karen Feng (Jira)
Karen Feng created SPARK-36298:
--

 Summary: Refactor ninth set of 20 query execution errors to use 
error classes
 Key: SPARK-36298
 URL: https://issues.apache.org/jira/browse/SPARK-36298
 Project: Spark
  Issue Type: Sub-task
  Components: Spark Core, SQL
Affects Versions: 3.2.0
Reporter: Karen Feng


Refactor some exceptions in 
[QueryExecutionErrors|https://github.com/apache/spark/blob/master/sql/catalyst/src/main/scala/org/apache/spark/sql/errors/QueryExecutionErrors.scala]
 to use error classes.

There are currently ~350 exceptions in this file, so this PR focuses only on 
the eighth set of 20.
{code:java}
executeBroadcastTimeoutError
cannotCompareCostWithTargetCostError
unsupportedDataTypeError
notSupportTypeError
notSupportNonPrimitiveTypeError
unsupportedTypeError
useDictionaryEncodingWhenDictionaryOverflowError
endOfIteratorError
cannotAllocateMemoryToGrowBytesToBytesMapError
cannotAcquireMemoryToBuildLongHashedRelationError
cannotAcquireMemoryToBuildUnsafeHashedRelationError
rowLargerThan256MUnsupportedError
cannotBuildHashedRelationWithUniqueKeysExceededError
cannotBuildHashedRelationLargerThan8GError
failedToPushRowIntoRowQueueError
unexpectedWindowFunctionFrameError
cannotParseStatisticAsPercentileError
statisticNotRecognizedError
unknownColumnError
unexpectedAccumulableUpdateValueError
{code}
For more detail, see the parent ticket SPARK-36094.






[jira] [Updated] (SPARK-36296) Refactor seventh set of 20 query execution errors to use error classes

2021-07-26 Thread Karen Feng (Jira)


 [ 
https://issues.apache.org/jira/browse/SPARK-36296?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Karen Feng updated SPARK-36296:
---
Description: 
Refactor some exceptions in 
[QueryExecutionErrors|https://github.com/apache/spark/blob/master/sql/catalyst/src/main/scala/org/apache/spark/sql/errors/QueryExecutionErrors.scala]
 to use error classes.

There are currently ~350 exceptions in this file, so this PR focuses only on 
the seventh set of 20.
{code:java}
missingJdbcTableNameAndQueryError
emptyOptionError
invalidJdbcTxnIsolationLevelError
cannotGetJdbcTypeError
unrecognizedSqlTypeError
unsupportedJdbcTypeError
unsupportedArrayElementTypeBasedOnBinaryError
nestedArraysUnsupportedError
cannotTranslateNonNullValueForFieldError
invalidJdbcNumPartitionsError
transactionUnsupportedByJdbcServerError
dataTypeUnsupportedYetError
unsupportedOperationForDataTypeError
inputFilterNotFullyConvertibleError
cannotReadFooterForFileError
cannotReadFooterForFileError
foundDuplicateFieldInCaseInsensitiveModeError
failedToMergeIncompatibleSchemasError
ddlUnsupportedTemporarilyError
operatingOnCanonicalizationPlanError
{code}
For more detail, see the parent ticket SPARK-36094.

  was:
Refactor some exceptions in 
[QueryExecutionErrors|https://github.com/apache/spark/blob/master/sql/catalyst/src/main/scala/org/apache/spark/sql/errors/QueryExecutionErrors.scala]
 to use error classes.

There are currently ~350 exceptions in this file, so this PR focuses only on 
the sixth set of 20.
{code:java}
noRecordsFromEmptyDataReaderError
fileNotFoundError
unsupportedSchemaColumnConvertError
cannotReadParquetFilesError
cannotCreateColumnarReaderError
invalidNamespaceNameError
unsupportedPartitionTransformError
missingDatabaseLocationError
cannotRemoveReservedPropertyError
namespaceNotEmptyError
writingJobFailedError
writingJobAbortedError
commitDeniedError
unsupportedTableWritesError
cannotCreateJDBCTableWithPartitionsError
unsupportedUserSpecifiedSchemaError
writeUnsupportedForBinaryFileDataSourceError
fileLengthExceedsMaxLengthError
unsupportedFieldNameError
cannotSpecifyBothJdbcTableNameAndQueryError
{code}
For more detail, see the parent ticket SPARK-36094.


> Refactor seventh set of 20 query execution errors to use error classes
> --
>
> Key: SPARK-36296
> URL: https://issues.apache.org/jira/browse/SPARK-36296
> Project: Spark
>  Issue Type: Sub-task
>  Components: Spark Core, SQL
>Affects Versions: 3.2.0
>Reporter: Karen Feng
>Priority: Major
>
> Refactor some exceptions in 
> [QueryExecutionErrors|https://github.com/apache/spark/blob/master/sql/catalyst/src/main/scala/org/apache/spark/sql/errors/QueryExecutionErrors.scala]
>  to use error classes.
> There are currently ~350 exceptions in this file, so this PR focuses only on 
> the seventh set of 20.
> {code:java}
> missingJdbcTableNameAndQueryError
> emptyOptionError
> invalidJdbcTxnIsolationLevelError
> cannotGetJdbcTypeError
> unrecognizedSqlTypeError
> unsupportedJdbcTypeError
> unsupportedArrayElementTypeBasedOnBinaryError
> nestedArraysUnsupportedError
> cannotTranslateNonNullValueForFieldError
> invalidJdbcNumPartitionsError
> transactionUnsupportedByJdbcServerError
> dataTypeUnsupportedYetError
> unsupportedOperationForDataTypeError
> inputFilterNotFullyConvertibleError
> cannotReadFooterForFileError
> cannotReadFooterForFileError
> foundDuplicateFieldInCaseInsensitiveModeError
> failedToMergeIncompatibleSchemasError
> ddlUnsupportedTemporarilyError
> operatingOnCanonicalizationPlanError
> {code}
> For more detail, see the parent ticket SPARK-36094.






[jira] [Created] (SPARK-36297) Refactor eighth set of 20 query execution errors to use error classes

2021-07-26 Thread Karen Feng (Jira)
Karen Feng created SPARK-36297:
--

 Summary: Refactor eighth set of 20 query execution errors to use 
error classes
 Key: SPARK-36297
 URL: https://issues.apache.org/jira/browse/SPARK-36297
 Project: Spark
  Issue Type: Sub-task
  Components: Spark Core, SQL
Affects Versions: 3.2.0
Reporter: Karen Feng


Refactor some exceptions in 
[QueryExecutionErrors|https://github.com/apache/spark/blob/master/sql/catalyst/src/main/scala/org/apache/spark/sql/errors/QueryExecutionErrors.scala]
 to use error classes.

There are currently ~350 exceptions in this file, so this PR focuses only on 
the seventh set of 20.
{code:java}
missingJdbcTableNameAndQueryError
emptyOptionError
invalidJdbcTxnIsolationLevelError
cannotGetJdbcTypeError
unrecognizedSqlTypeError
unsupportedJdbcTypeError
unsupportedArrayElementTypeBasedOnBinaryError
nestedArraysUnsupportedError
cannotTranslateNonNullValueForFieldError
invalidJdbcNumPartitionsError
transactionUnsupportedByJdbcServerError
dataTypeUnsupportedYetError
unsupportedOperationForDataTypeError
inputFilterNotFullyConvertibleError
cannotReadFooterForFileError
cannotReadFooterForFileError
foundDuplicateFieldInCaseInsensitiveModeError
failedToMergeIncompatibleSchemasError
ddlUnsupportedTemporarilyError
operatingOnCanonicalizationPlanError
{code}
For more detail, see the parent ticket SPARK-36094.






[jira] [Created] (SPARK-36296) Refactor seventh set of 20 query execution errors to use error classes

2021-07-26 Thread Karen Feng (Jira)
Karen Feng created SPARK-36296:
--

 Summary: Refactor seventh set of 20 query execution errors to use 
error classes
 Key: SPARK-36296
 URL: https://issues.apache.org/jira/browse/SPARK-36296
 Project: Spark
  Issue Type: Sub-task
  Components: Spark Core, SQL
Affects Versions: 3.2.0
Reporter: Karen Feng


Refactor some exceptions in 
[QueryExecutionErrors|https://github.com/apache/spark/blob/master/sql/catalyst/src/main/scala/org/apache/spark/sql/errors/QueryExecutionErrors.scala]
 to use error classes.

There are currently ~350 exceptions in this file, so this PR focuses only on 
the sixth set of 20.
{code:java}
noRecordsFromEmptyDataReaderError
fileNotFoundError
unsupportedSchemaColumnConvertError
cannotReadParquetFilesError
cannotCreateColumnarReaderError
invalidNamespaceNameError
unsupportedPartitionTransformError
missingDatabaseLocationError
cannotRemoveReservedPropertyError
namespaceNotEmptyError
writingJobFailedError
writingJobAbortedError
commitDeniedError
unsupportedTableWritesError
cannotCreateJDBCTableWithPartitionsError
unsupportedUserSpecifiedSchemaError
writeUnsupportedForBinaryFileDataSourceError
fileLengthExceedsMaxLengthError
unsupportedFieldNameError
cannotSpecifyBothJdbcTableNameAndQueryError
{code}
For more detail, see the parent ticket SPARK-36094.






[jira] [Updated] (SPARK-36295) Refactor sixth set of 20 query execution errors to use error classes

2021-07-26 Thread Karen Feng (Jira)


 [ 
https://issues.apache.org/jira/browse/SPARK-36295?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Karen Feng updated SPARK-36295:
---
Description: 
Refactor some exceptions in 
[QueryExecutionErrors|https://github.com/apache/spark/blob/master/sql/catalyst/src/main/scala/org/apache/spark/sql/errors/QueryExecutionErrors.scala]
 to use error classes.

There are currently ~350 exceptions in this file, so this PR focuses only on 
the sixth set of 20.
{code:java}
noRecordsFromEmptyDataReaderError
fileNotFoundError
unsupportedSchemaColumnConvertError
cannotReadParquetFilesError
cannotCreateColumnarReaderError
invalidNamespaceNameError
unsupportedPartitionTransformError
missingDatabaseLocationError
cannotRemoveReservedPropertyError
namespaceNotEmptyError
writingJobFailedError
writingJobAbortedError
commitDeniedError
unsupportedTableWritesError
cannotCreateJDBCTableWithPartitionsError
unsupportedUserSpecifiedSchemaError
writeUnsupportedForBinaryFileDataSourceError
fileLengthExceedsMaxLengthError
unsupportedFieldNameError
cannotSpecifyBothJdbcTableNameAndQueryError
{code}
For more detail, see the parent ticket SPARK-36094.

  was:
Refactor some exceptions in 
[QueryExecutionErrors|https://github.com/apache/spark/blob/master/sql/catalyst/src/main/scala/org/apache/spark/sql/errors/QueryExecutionErrors.scala]
 to use error classes.

There are currently ~350 exceptions in this file, so this PR focuses only on 
the fifth set of 20.
{code:java}
createStreamingSourceNotSpecifySchemaError
streamedOperatorUnsupportedByDataSourceError
multiplePathsSpecifiedError
failedToFindDataSourceError
removedClassInSpark2Error
incompatibleDataSourceRegisterError
unrecognizedFileFormatError
sparkUpgradeInReadingDatesError
sparkUpgradeInWritingDatesError
buildReaderUnsupportedForFileFormatError
jobAbortedError
taskFailedWhileWritingRowsError
readCurrentFileNotFoundError
unsupportedSaveModeError
cannotClearOutputDirectoryError
cannotClearPartitionDirectoryError
failedToCastValueToDataTypeForPartitionColumnError
endOfStreamError
fallbackV1RelationReportsInconsistentSchemaError
cannotDropNonemptyNamespaceError
{code}
For more detail, see the parent ticket SPARK-36094.


> Refactor sixth set of 20 query execution errors to use error classes
> 
>
> Key: SPARK-36295
> URL: https://issues.apache.org/jira/browse/SPARK-36295
> Project: Spark
>  Issue Type: Sub-task
>  Components: Spark Core, SQL
>Affects Versions: 3.2.0
>Reporter: Karen Feng
>Priority: Major
>
> Refactor some exceptions in 
> [QueryExecutionErrors|https://github.com/apache/spark/blob/master/sql/catalyst/src/main/scala/org/apache/spark/sql/errors/QueryExecutionErrors.scala]
>  to use error classes.
> There are currently ~350 exceptions in this file, so this PR focuses only on 
> the sixth set of 20.
> {code:java}
> noRecordsFromEmptyDataReaderError
> fileNotFoundError
> unsupportedSchemaColumnConvertError
> cannotReadParquetFilesError
> cannotCreateColumnarReaderError
> invalidNamespaceNameError
> unsupportedPartitionTransformError
> missingDatabaseLocationError
> cannotRemoveReservedPropertyError
> namespaceNotEmptyError
> writingJobFailedError
> writingJobAbortedError
> commitDeniedError
> unsupportedTableWritesError
> cannotCreateJDBCTableWithPartitionsError
> unsupportedUserSpecifiedSchemaError
> writeUnsupportedForBinaryFileDataSourceError
> fileLengthExceedsMaxLengthError
> unsupportedFieldNameError
> cannotSpecifyBothJdbcTableNameAndQueryError
> {code}
> For more detail, see the parent ticket SPARK-36094.






[jira] [Created] (SPARK-36295) Refactor sixth set of 20 query execution errors to use error classes

2021-07-26 Thread Karen Feng (Jira)
Karen Feng created SPARK-36295:
--

 Summary: Refactor sixth set of 20 query execution errors to use 
error classes
 Key: SPARK-36295
 URL: https://issues.apache.org/jira/browse/SPARK-36295
 Project: Spark
  Issue Type: Sub-task
  Components: Spark Core, SQL
Affects Versions: 3.2.0
Reporter: Karen Feng


Refactor some exceptions in 
[QueryExecutionErrors|https://github.com/apache/spark/blob/master/sql/catalyst/src/main/scala/org/apache/spark/sql/errors/QueryExecutionErrors.scala]
 to use error classes.

There are currently ~350 exceptions in this file, so this PR focuses only on 
the fifth set of 20.
{code:java}
createStreamingSourceNotSpecifySchemaError
streamedOperatorUnsupportedByDataSourceError
multiplePathsSpecifiedError
failedToFindDataSourceError
removedClassInSpark2Error
incompatibleDataSourceRegisterError
unrecognizedFileFormatError
sparkUpgradeInReadingDatesError
sparkUpgradeInWritingDatesError
buildReaderUnsupportedForFileFormatError
jobAbortedError
taskFailedWhileWritingRowsError
readCurrentFileNotFoundError
unsupportedSaveModeError
cannotClearOutputDirectoryError
cannotClearPartitionDirectoryError
failedToCastValueToDataTypeForPartitionColumnError
endOfStreamError
fallbackV1RelationReportsInconsistentSchemaError
cannotDropNonemptyNamespaceError
{code}
For more detail, see the parent ticket SPARK-36094.






[jira] [Updated] (SPARK-36294) Refactor fifth set of 20 query execution errors to use error classes

2021-07-26 Thread Karen Feng (Jira)


 [ 
https://issues.apache.org/jira/browse/SPARK-36294?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Karen Feng updated SPARK-36294:
---
Description: 
Refactor some exceptions in 
[QueryExecutionErrors|https://github.com/apache/spark/blob/master/sql/catalyst/src/main/scala/org/apache/spark/sql/errors/QueryExecutionErrors.scala]
 to use error classes.

There are currently ~350 exceptions in this file, so this PR focuses only on 
the fifth set of 20.
{code:java}
createStreamingSourceNotSpecifySchemaError
streamedOperatorUnsupportedByDataSourceError
multiplePathsSpecifiedError
failedToFindDataSourceError
removedClassInSpark2Error
incompatibleDataSourceRegisterError
unrecognizedFileFormatError
sparkUpgradeInReadingDatesError
sparkUpgradeInWritingDatesError
buildReaderUnsupportedForFileFormatError
jobAbortedError
taskFailedWhileWritingRowsError
readCurrentFileNotFoundError
unsupportedSaveModeError
cannotClearOutputDirectoryError
cannotClearPartitionDirectoryError
failedToCastValueToDataTypeForPartitionColumnError
endOfStreamError
fallbackV1RelationReportsInconsistentSchemaError
cannotDropNonemptyNamespaceError
{code}
For more detail, see the parent ticket SPARK-36094.

  was:
Refactor some exceptions in 
[QueryExecutionErrors|https://github.com/apache/spark/blob/master/sql/catalyst/src/main/scala/org/apache/spark/sql/errors/QueryExecutionErrors.scala]
 to use error classes.

There are currently ~350 exceptions in this file, so this PR focuses only on 
the fourth set of 20.
{code:java}
unableToCreateDatabaseAsFailedToCreateDirectoryError
unableToDropDatabaseAsFailedToDeleteDirectoryError
unableToCreateTableAsFailedToCreateDirectoryError
unableToDeletePartitionPathError
unableToDropTableAsFailedToDeleteDirectoryError
unableToRenameTableAsFailedToRenameDirectoryError
unableToCreatePartitionPathError
unableToRenamePartitionPathError
methodNotImplementedError
tableStatsNotSpecifiedError
unaryMinusCauseOverflowError
binaryArithmeticCauseOverflowError
failedSplitSubExpressionMsg
failedSplitSubExpressionError
failedToCompileMsg
internalCompilerError
compilerError
unsupportedTableChangeError
notADatasourceRDDPartitionError
dataPathNotSpecifiedError
{code}
For more detail, see the parent ticket SPARK-36094.


> Refactor fifth set of 20 query execution errors to use error classes
> 
>
> Key: SPARK-36294
> URL: https://issues.apache.org/jira/browse/SPARK-36294
> Project: Spark
>  Issue Type: Sub-task
>  Components: Spark Core, SQL
>Affects Versions: 3.2.0
>Reporter: Karen Feng
>Priority: Major
>
> Refactor some exceptions in 
> [QueryExecutionErrors|https://github.com/apache/spark/blob/master/sql/catalyst/src/main/scala/org/apache/spark/sql/errors/QueryExecutionErrors.scala]
>  to use error classes.
> There are currently ~350 exceptions in this file, so this PR focuses only on 
> the fifth set of 20.
> {code:java}
> createStreamingSourceNotSpecifySchemaError
> streamedOperatorUnsupportedByDataSourceError
> multiplePathsSpecifiedError
> failedToFindDataSourceError
> removedClassInSpark2Error
> incompatibleDataSourceRegisterError
> unrecognizedFileFormatError
> sparkUpgradeInReadingDatesError
> sparkUpgradeInWritingDatesError
> buildReaderUnsupportedForFileFormatError
> jobAbortedError
> taskFailedWhileWritingRowsError
> readCurrentFileNotFoundError
> unsupportedSaveModeError
> cannotClearOutputDirectoryError
> cannotClearPartitionDirectoryError
> failedToCastValueToDataTypeForPartitionColumnError
> endOfStreamError
> fallbackV1RelationReportsInconsistentSchemaError
> cannotDropNonemptyNamespaceError
> {code}
> For more detail, see the parent ticket SPARK-36094.






[jira] [Created] (SPARK-36294) Refactor fifth set of 20 query execution errors to use error classes

2021-07-26 Thread Karen Feng (Jira)
Karen Feng created SPARK-36294:
--

 Summary: Refactor fifth set of 20 query execution errors to use 
error classes
 Key: SPARK-36294
 URL: https://issues.apache.org/jira/browse/SPARK-36294
 Project: Spark
  Issue Type: Sub-task
  Components: Spark Core, SQL
Affects Versions: 3.2.0
Reporter: Karen Feng


Refactor some exceptions in 
[QueryExecutionErrors|https://github.com/apache/spark/blob/master/sql/catalyst/src/main/scala/org/apache/spark/sql/errors/QueryExecutionErrors.scala]
 to use error classes.

There are currently ~350 exceptions in this file; so this PR only focuses on 
the fourth set of 20.
{code:java}
unableToCreateDatabaseAsFailedToCreateDirectoryError
unableToDropDatabaseAsFailedToDeleteDirectoryError
unableToCreateTableAsFailedToCreateDirectoryError
unableToDeletePartitionPathError
unableToDropTableAsFailedToDeleteDirectoryError
unableToRenameTableAsFailedToRenameDirectoryError
unableToCreatePartitionPathError
unableToRenamePartitionPathError
methodNotImplementedError
tableStatsNotSpecifiedError
unaryMinusCauseOverflowError
binaryArithmeticCauseOverflowError
failedSplitSubExpressionMsg
failedSplitSubExpressionError
failedToCompileMsg
internalCompilerError
compilerError
unsupportedTableChangeError
notADatasourceRDDPartitionError
dataPathNotSpecifiedError
{code}
For more detail, see the parent ticket SPARK-36094.






[jira] [Updated] (SPARK-36293) Refactor fourth set of 20 query execution errors to use error classes

2021-07-26 Thread Karen Feng (Jira)


 [ 
https://issues.apache.org/jira/browse/SPARK-36293?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Karen Feng updated SPARK-36293:
---
Description: 
Refactor some exceptions in 
[QueryExecutionErrors|https://github.com/apache/spark/blob/master/sql/catalyst/src/main/scala/org/apache/spark/sql/errors/QueryExecutionErrors.scala]
 to use error classes.

There are currently ~350 exceptions in this file; so this PR only focuses on 
the fourth set of 20.
{code:java}
unableToCreateDatabaseAsFailedToCreateDirectoryError
unableToDropDatabaseAsFailedToDeleteDirectoryError
unableToCreateTableAsFailedToCreateDirectoryError
unableToDeletePartitionPathError
unableToDropTableAsFailedToDeleteDirectoryError
unableToRenameTableAsFailedToRenameDirectoryError
unableToCreatePartitionPathError
unableToRenamePartitionPathError
methodNotImplementedError
tableStatsNotSpecifiedError
unaryMinusCauseOverflowError
binaryArithmeticCauseOverflowError
failedSplitSubExpressionMsg
failedSplitSubExpressionError
failedToCompileMsg
internalCompilerError
compilerError
unsupportedTableChangeError
notADatasourceRDDPartitionError
dataPathNotSpecifiedError
{code}
For more detail, see the parent ticket SPARK-36094.

  was:
Refactor some exceptions in 
[QueryExecutionErrors|https://github.com/apache/spark/blob/master/sql/catalyst/src/main/scala/org/apache/spark/sql/errors/QueryExecutionErrors.scala]
 to use error classes.

There are currently ~350 exceptions in this file; so this PR only focuses on 
the second set of 20.
{code:java}
inputTypeUnsupportedError
invalidFractionOfSecondError
overflowInSumOfDecimalError
overflowInIntegralDivideError
mapSizeExceedArraySizeWhenZipMapError
copyNullFieldNotAllowedError
literalTypeUnsupportedError
noDefaultForDataTypeError
doGenCodeOfAliasShouldNotBeCalledError
orderedOperationUnsupportedByDataTypeError
regexGroupIndexLessThanZeroError
regexGroupIndexExceedGroupCountError
invalidUrlError
dataTypeOperationUnsupportedError
mergeUnsupportedByWindowFunctionError
dataTypeUnexpectedError
typeUnsupportedError
negativeValueUnexpectedError
addNewFunctionMismatchedWithFunctionError
cannotGenerateCodeForUncomparableTypeError
{code}
For more detail, see the parent ticket SPARK-36094.


> Refactor fourth set of 20 query execution errors to use error classes
> -
>
> Key: SPARK-36293
> URL: https://issues.apache.org/jira/browse/SPARK-36293
> Project: Spark
>  Issue Type: Sub-task
>  Components: Spark Core, SQL
>Affects Versions: 3.2.0
>Reporter: Karen Feng
>Priority: Major
>
> Refactor some exceptions in 
> [QueryExecutionErrors|https://github.com/apache/spark/blob/master/sql/catalyst/src/main/scala/org/apache/spark/sql/errors/QueryExecutionErrors.scala]
>  to use error classes.
> There are currently ~350 exceptions in this file; so this PR only focuses on 
> the fourth set of 20.
> {code:java}
> unableToCreateDatabaseAsFailedToCreateDirectoryError
> unableToDropDatabaseAsFailedToDeleteDirectoryError
> unableToCreateTableAsFailedToCreateDirectoryError
> unableToDeletePartitionPathError
> unableToDropTableAsFailedToDeleteDirectoryError
> unableToRenameTableAsFailedToRenameDirectoryError
> unableToCreatePartitionPathError
> unableToRenamePartitionPathError
> methodNotImplementedError
> tableStatsNotSpecifiedError
> unaryMinusCauseOverflowError
> binaryArithmeticCauseOverflowError
> failedSplitSubExpressionMsg
> failedSplitSubExpressionError
> failedToCompileMsg
> internalCompilerError
> compilerError
> unsupportedTableChangeError
> notADatasourceRDDPartitionError
> dataPathNotSpecifiedError
> {code}
> For more detail, see the parent ticket SPARK-36094.






[jira] [Created] (SPARK-36293) Refactor fourth set of 20 query execution errors to use error classes

2021-07-26 Thread Karen Feng (Jira)
Karen Feng created SPARK-36293:
--

 Summary: Refactor fourth set of 20 query execution errors to use 
error classes
 Key: SPARK-36293
 URL: https://issues.apache.org/jira/browse/SPARK-36293
 Project: Spark
  Issue Type: Sub-task
  Components: Spark Core, SQL
Affects Versions: 3.2.0
Reporter: Karen Feng


Refactor some exceptions in 
[QueryExecutionErrors|https://github.com/apache/spark/blob/master/sql/catalyst/src/main/scala/org/apache/spark/sql/errors/QueryExecutionErrors.scala]
 to use error classes.

There are currently ~350 exceptions in this file; so this PR only focuses on 
the second set of 20.
{code:java}
inputTypeUnsupportedError
invalidFractionOfSecondError
overflowInSumOfDecimalError
overflowInIntegralDivideError
mapSizeExceedArraySizeWhenZipMapError
copyNullFieldNotAllowedError
literalTypeUnsupportedError
noDefaultForDataTypeError
doGenCodeOfAliasShouldNotBeCalledError
orderedOperationUnsupportedByDataTypeError
regexGroupIndexLessThanZeroError
regexGroupIndexExceedGroupCountError
invalidUrlError
dataTypeOperationUnsupportedError
mergeUnsupportedByWindowFunctionError
dataTypeUnexpectedError
typeUnsupportedError
negativeValueUnexpectedError
addNewFunctionMismatchedWithFunctionError
cannotGenerateCodeForUncomparableTypeError
{code}
For more detail, see the parent ticket SPARK-36094.






[jira] [Updated] (SPARK-36107) Refactor first set of 20 query execution errors to use error classes

2021-07-26 Thread Karen Feng (Jira)


 [ 
https://issues.apache.org/jira/browse/SPARK-36107?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Karen Feng updated SPARK-36107:
---
Description: 
Refactor some exceptions in 
[QueryExecutionErrors|https://github.com/apache/spark/blob/master/sql/catalyst/src/main/scala/org/apache/spark/sql/errors/QueryExecutionErrors.scala]
 to use error classes.

There are currently ~350 exceptions in this file; so this PR only focuses on 
the first set of 20.
{code:java}
columnChangeUnsupportedError
logicalHintOperatorNotRemovedDuringAnalysisError
cannotEvaluateExpressionError
cannotGenerateCodeForExpressionError
cannotTerminateGeneratorError
castingCauseOverflowError
cannotChangeDecimalPrecisionError
invalidInputSyntaxForNumericError
cannotCastFromNullTypeError
cannotCastError
cannotParseDecimalError
simpleStringWithNodeIdUnsupportedError
evaluateUnevaluableAggregateUnsupportedError
dataTypeUnsupportedError
dataTypeUnsupportedError
failedExecuteUserDefinedFunctionError
divideByZeroError
invalidArrayIndexError
mapKeyNotExistError
rowFromCSVParserNotExpectedError
{code}
For more detail, see the parent ticket SPARK-36094.

  was:
Refactor some exceptions in 
[QueryExecutionErrors|https://github.com/apache/spark/blob/master/sql/catalyst/src/main/scala/org/apache/spark/sql/errors/QueryExecutionErrors.scala]
 to use error classes.

There are currently ~350 exceptions in this file; so this PR only focuses on 
the first 20.

{code}
columnChangeUnsupportedError
logicalHintOperatorNotRemovedDuringAnalysisError
cannotEvaluateExpressionError
cannotGenerateCodeForExpressionError
cannotTerminateGeneratorError
castingCauseOverflowError
cannotChangeDecimalPrecisionError
invalidInputSyntaxForNumericError
cannotCastFromNullTypeError
cannotCastError
cannotParseDecimalError
simpleStringWithNodeIdUnsupportedError
evaluateUnevaluableAggregateUnsupportedError
dataTypeUnsupportedError
dataTypeUnsupportedError
failedExecuteUserDefinedFunctionError
divideByZeroError
invalidArrayIndexError
mapKeyNotExistError
rowFromCSVParserNotExpectedError
{code}

For more detail, see the parent ticket 
[SPARK-36094|https://issues.apache.org/jira/browse/SPARK-36094].


> Refactor first set of 20 query execution errors to use error classes
> 
>
> Key: SPARK-36107
> URL: https://issues.apache.org/jira/browse/SPARK-36107
> Project: Spark
>  Issue Type: Sub-task
>  Components: Spark Core, SQL
>Affects Versions: 3.2.0
>Reporter: Karen Feng
>Priority: Major
>
> Refactor some exceptions in 
> [QueryExecutionErrors|https://github.com/apache/spark/blob/master/sql/catalyst/src/main/scala/org/apache/spark/sql/errors/QueryExecutionErrors.scala]
>  to use error classes.
> There are currently ~350 exceptions in this file; so this PR only focuses on 
> the first set of 20.
> {code:java}
> columnChangeUnsupportedError
> logicalHintOperatorNotRemovedDuringAnalysisError
> cannotEvaluateExpressionError
> cannotGenerateCodeForExpressionError
> cannotTerminateGeneratorError
> castingCauseOverflowError
> cannotChangeDecimalPrecisionError
> invalidInputSyntaxForNumericError
> cannotCastFromNullTypeError
> cannotCastError
> cannotParseDecimalError
> simpleStringWithNodeIdUnsupportedError
> evaluateUnevaluableAggregateUnsupportedError
> dataTypeUnsupportedError
> dataTypeUnsupportedError
> failedExecuteUserDefinedFunctionError
> divideByZeroError
> invalidArrayIndexError
> mapKeyNotExistError
> rowFromCSVParserNotExpectedError
> {code}
> For more detail, see the parent ticket SPARK-36094.
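[Editorial note] The error classes these factory methods reference are declared declaratively rather than in code; in the Spark source tree they live in a JSON resource file (`error-classes.json`). A sketch of the shape for two errors named in this first set — the field names here are as recalled from Spark 3.2-era sources and should be treated as approximate:

```json
{
  "DIVIDE_BY_ZERO" : {
    "message" : [ "divide by zero" ],
    "sqlState" : "22012"
  },
  "INVALID_ARRAY_INDEX" : {
    "message" : [ "Invalid index: %s, array size: %s" ]
  }
}
```

Keeping messages in one file lets tests assert on error classes instead of message strings, and makes the texts auditable and translatable in one place.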






[jira] [Updated] (SPARK-36291) Refactor second set of 20 query execution errors to use error classes

2021-07-26 Thread Karen Feng (Jira)


 [ 
https://issues.apache.org/jira/browse/SPARK-36291?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Karen Feng updated SPARK-36291:
---
Description: 
Refactor some exceptions in 
[QueryExecutionErrors|https://github.com/apache/spark/blob/master/sql/catalyst/src/main/scala/org/apache/spark/sql/errors/QueryExecutionErrors.scala]
 to use error classes.

There are currently ~350 exceptions in this file; so this PR only focuses on 
the second set of 20.
{code:java}
inputTypeUnsupportedError
invalidFractionOfSecondError
overflowInSumOfDecimalError
overflowInIntegralDivideError
mapSizeExceedArraySizeWhenZipMapError
copyNullFieldNotAllowedError
literalTypeUnsupportedError
noDefaultForDataTypeError
doGenCodeOfAliasShouldNotBeCalledError
orderedOperationUnsupportedByDataTypeError
regexGroupIndexLessThanZeroError
regexGroupIndexExceedGroupCountError
invalidUrlError
dataTypeOperationUnsupportedError
mergeUnsupportedByWindowFunctionError
dataTypeUnexpectedError
typeUnsupportedError
negativeValueUnexpectedError
addNewFunctionMismatchedWithFunctionError
cannotGenerateCodeForUncomparableTypeError
{code}
For more detail, see the parent ticket SPARK-36094.

  was:
Refactor some exceptions in 
[QueryExecutionErrors|https://github.com/apache/spark/blob/master/sql/catalyst/src/main/scala/org/apache/spark/sql/errors/QueryExecutionErrors.scala]
 to use error classes.

There are currently ~350 exceptions in this file; so this PR only focuses on 
the second 20.
{code:java}
inputTypeUnsupportedError
invalidFractionOfSecondError
overflowInSumOfDecimalError
overflowInIntegralDivideError
mapSizeExceedArraySizeWhenZipMapError
copyNullFieldNotAllowedError
literalTypeUnsupportedError
noDefaultForDataTypeError
doGenCodeOfAliasShouldNotBeCalledError
orderedOperationUnsupportedByDataTypeError
regexGroupIndexLessThanZeroError
regexGroupIndexExceedGroupCountError
invalidUrlError
dataTypeOperationUnsupportedError
mergeUnsupportedByWindowFunctionError
dataTypeUnexpectedError
typeUnsupportedError
negativeValueUnexpectedError
addNewFunctionMismatchedWithFunctionError
cannotGenerateCodeForUncomparableTypeError
{code}
For more detail, see the parent ticket SPARK-36094.


> Refactor second set of 20 query execution errors to use error classes
> -
>
> Key: SPARK-36291
> URL: https://issues.apache.org/jira/browse/SPARK-36291
> Project: Spark
>  Issue Type: Sub-task
>  Components: Spark Core, SQL
>Affects Versions: 3.2.0
>Reporter: Karen Feng
>Priority: Major
>
> Refactor some exceptions in 
> [QueryExecutionErrors|https://github.com/apache/spark/blob/master/sql/catalyst/src/main/scala/org/apache/spark/sql/errors/QueryExecutionErrors.scala]
>  to use error classes.
> There are currently ~350 exceptions in this file; so this PR only focuses on 
> the second set of 20.
> {code:java}
> inputTypeUnsupportedError
> invalidFractionOfSecondError
> overflowInSumOfDecimalError
> overflowInIntegralDivideError
> mapSizeExceedArraySizeWhenZipMapError
> copyNullFieldNotAllowedError
> literalTypeUnsupportedError
> noDefaultForDataTypeError
> doGenCodeOfAliasShouldNotBeCalledError
> orderedOperationUnsupportedByDataTypeError
> regexGroupIndexLessThanZeroError
> regexGroupIndexExceedGroupCountError
> invalidUrlError
> dataTypeOperationUnsupportedError
> mergeUnsupportedByWindowFunctionError
> dataTypeUnexpectedError
> typeUnsupportedError
> negativeValueUnexpectedError
> addNewFunctionMismatchedWithFunctionError
> cannotGenerateCodeForUncomparableTypeError
> {code}
> For more detail, see the parent ticket SPARK-36094.






[jira] [Updated] (SPARK-36292) Refactor third set of 20 query execution errors to use error classes

2021-07-26 Thread Karen Feng (Jira)


 [ 
https://issues.apache.org/jira/browse/SPARK-36292?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Karen Feng updated SPARK-36292:
---
Description: 
Refactor some exceptions in 
[QueryExecutionErrors|https://github.com/apache/spark/blob/master/sql/catalyst/src/main/scala/org/apache/spark/sql/errors/QueryExecutionErrors.scala]
 to use error classes.

There are currently ~350 exceptions in this file; so this PR only focuses on 
the third set of 20.
{code:java}
cannotGenerateCodeForUnsupportedTypeError
cannotInterpolateClassIntoCodeBlockError
customCollectionClsNotResolvedError
classUnsupportedByMapObjectsError
nullAsMapKeyNotAllowedError
methodNotDeclaredError
constructorNotFoundError
primaryConstructorNotFoundError
unsupportedNaturalJoinTypeError
notExpectedUnresolvedEncoderError
unsupportedEncoderError
notOverrideExpectedMethodsError
failToConvertValueToJsonError
unexpectedOperatorInCorrelatedSubquery
unreachableError
unsupportedRoundingMode
resolveCannotHandleNestedSchema
inputExternalRowCannotBeNullError
fieldCannotBeNullMsg
fieldCannotBeNullError
{code}
For more detail, see the parent ticket SPARK-36094.

  was:
Refactor some exceptions in 
[QueryExecutionErrors|https://github.com/apache/spark/blob/master/sql/catalyst/src/main/scala/org/apache/spark/sql/errors/QueryExecutionErrors.scala]
 to use error classes.

There are currently ~350 exceptions in this file; so this PR only focuses on 
the second 20.
{code:java}
cannotGenerateCodeForUnsupportedTypeError
cannotInterpolateClassIntoCodeBlockError
customCollectionClsNotResolvedError
classUnsupportedByMapObjectsError
nullAsMapKeyNotAllowedError
methodNotDeclaredError
constructorNotFoundError
primaryConstructorNotFoundError
unsupportedNaturalJoinTypeError
notExpectedUnresolvedEncoderError
unsupportedEncoderError
notOverrideExpectedMethodsError
failToConvertValueToJsonError
unexpectedOperatorInCorrelatedSubquery
unreachableError
unsupportedRoundingMode
resolveCannotHandleNestedSchema
inputExternalRowCannotBeNullError
fieldCannotBeNullMsg
fieldCannotBeNullError
{code}
For more detail, see the parent ticket SPARK-36094.


> Refactor third set of 20 query execution errors to use error classes
> 
>
> Key: SPARK-36292
> URL: https://issues.apache.org/jira/browse/SPARK-36292
> Project: Spark
>  Issue Type: Sub-task
>  Components: Spark Core, SQL
>Affects Versions: 3.2.0
>Reporter: Karen Feng
>Priority: Major
>
> Refactor some exceptions in 
> [QueryExecutionErrors|https://github.com/apache/spark/blob/master/sql/catalyst/src/main/scala/org/apache/spark/sql/errors/QueryExecutionErrors.scala]
>  to use error classes.
> There are currently ~350 exceptions in this file; so this PR only focuses on 
> the third set of 20.
> {code:java}
> cannotGenerateCodeForUnsupportedTypeError
> cannotInterpolateClassIntoCodeBlockError
> customCollectionClsNotResolvedError
> classUnsupportedByMapObjectsError
> nullAsMapKeyNotAllowedError
> methodNotDeclaredError
> constructorNotFoundError
> primaryConstructorNotFoundError
> unsupportedNaturalJoinTypeError
> notExpectedUnresolvedEncoderError
> unsupportedEncoderError
> notOverrideExpectedMethodsError
> failToConvertValueToJsonError
> unexpectedOperatorInCorrelatedSubquery
> unreachableError
> unsupportedRoundingMode
> resolveCannotHandleNestedSchema
> inputExternalRowCannotBeNullError
> fieldCannotBeNullMsg
> fieldCannotBeNullError
> {code}
> For more detail, see the parent ticket SPARK-36094.
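[Editorial note] The payoff of attaching error classes is on the caller side: code can branch on a stable identifier instead of parsing exception message text. Spark surfaces this through a `SparkThrowable` interface; the `HasErrorClass` interface and `classify` helper below are a minimal illustrative stand-in, not the real `org.apache.spark.SparkThrowable`:

```java
public class ErrorHandlingSketch {
    // Minimal stand-in for Spark's SparkThrowable: errors expose a stable,
    // machine-readable class name independent of the message wording.
    interface HasErrorClass {
        String getErrorClass();
    }

    static class DivideByZeroError extends ArithmeticException implements HasErrorClass {
        DivideByZeroError() { super("Division by zero"); }
        public String getErrorClass() { return "DIVIDE_BY_ZERO"; }
    }

    static String classify(Throwable t) {
        // Branch on the error class rather than on fragile message matching.
        if (t instanceof HasErrorClass
                && ((HasErrorClass) t).getErrorClass().equals("DIVIDE_BY_ZERO")) {
            return "retry with try_divide";
        }
        return "unknown error";
    }

    public static void main(String[] args) {
        System.out.println(classify(new DivideByZeroError()));
    }
}
```

Exceptions not yet migrated simply don't implement the interface and fall through to the generic branch, which is why the refactor can proceed in sets of 20.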






[jira] [Updated] (SPARK-36291) Refactor second set of 20 query execution errors to use error classes

2021-07-26 Thread Karen Feng (Jira)


 [ 
https://issues.apache.org/jira/browse/SPARK-36291?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Karen Feng updated SPARK-36291:
---
Description: 
Refactor some exceptions in 
[QueryExecutionErrors|https://github.com/apache/spark/blob/master/sql/catalyst/src/main/scala/org/apache/spark/sql/errors/QueryExecutionErrors.scala]
 to use error classes.

There are currently ~350 exceptions in this file; so this PR only focuses on 
the second 20.
{code:java}
inputTypeUnsupportedError
invalidFractionOfSecondError
overflowInSumOfDecimalError
overflowInIntegralDivideError
mapSizeExceedArraySizeWhenZipMapError
copyNullFieldNotAllowedError
literalTypeUnsupportedError
noDefaultForDataTypeError
doGenCodeOfAliasShouldNotBeCalledError
orderedOperationUnsupportedByDataTypeError
regexGroupIndexLessThanZeroError
regexGroupIndexExceedGroupCountError
invalidUrlError
dataTypeOperationUnsupportedError
mergeUnsupportedByWindowFunctionError
dataTypeUnexpectedError
typeUnsupportedError
negativeValueUnexpectedError
addNewFunctionMismatchedWithFunctionError
cannotGenerateCodeForUncomparableTypeError
{code}
For more detail, see the parent ticket SPARK-36094.

  was:
Refactor some exceptions in 
[QueryExecutionErrors|https://github.com/apache/spark/blob/master/sql/catalyst/src/main/scala/org/apache/spark/sql/errors/QueryExecutionErrors.scala]
 to use error classes.

There are currently ~350 exceptions in this file; so this PR only focuses on 
the second 20.
{code:java}
cannotGenerateCodeForUnsupportedTypeError
cannotInterpolateClassIntoCodeBlockError
customCollectionClsNotResolvedError
classUnsupportedByMapObjectsError
nullAsMapKeyNotAllowedError
methodNotDeclaredError
constructorNotFoundError
primaryConstructorNotFoundError
unsupportedNaturalJoinTypeError
notExpectedUnresolvedEncoderError
unsupportedEncoderError
notOverrideExpectedMethodsError
failToConvertValueToJsonError
unexpectedOperatorInCorrelatedSubquery
unreachableError
unsupportedRoundingMode
resolveCannotHandleNestedSchema
inputExternalRowCannotBeNullError
fieldCannotBeNullMsg
fieldCannotBeNullError
{code}
For more detail, see the parent ticket SPARK-36094.


> Refactor second set of 20 query execution errors to use error classes
> -
>
> Key: SPARK-36291
> URL: https://issues.apache.org/jira/browse/SPARK-36291
> Project: Spark
>  Issue Type: Sub-task
>  Components: Spark Core, SQL
>Affects Versions: 3.2.0
>Reporter: Karen Feng
>Priority: Major
>
> Refactor some exceptions in 
> [QueryExecutionErrors|https://github.com/apache/spark/blob/master/sql/catalyst/src/main/scala/org/apache/spark/sql/errors/QueryExecutionErrors.scala]
>  to use error classes.
> There are currently ~350 exceptions in this file; so this PR only focuses on 
> the second 20.
> {code:java}
> inputTypeUnsupportedError
> invalidFractionOfSecondError
> overflowInSumOfDecimalError
> overflowInIntegralDivideError
> mapSizeExceedArraySizeWhenZipMapError
> copyNullFieldNotAllowedError
> literalTypeUnsupportedError
> noDefaultForDataTypeError
> doGenCodeOfAliasShouldNotBeCalledError
> orderedOperationUnsupportedByDataTypeError
> regexGroupIndexLessThanZeroError
> regexGroupIndexExceedGroupCountError
> invalidUrlError
> dataTypeOperationUnsupportedError
> mergeUnsupportedByWindowFunctionError
> dataTypeUnexpectedError
> typeUnsupportedError
> negativeValueUnexpectedError
> addNewFunctionMismatchedWithFunctionError
> cannotGenerateCodeForUncomparableTypeError
> {code}
> For more detail, see the parent ticket SPARK-36094.






[jira] [Created] (SPARK-36292) Refactor third set of 20 query execution errors to use error classes

2021-07-26 Thread Karen Feng (Jira)
Karen Feng created SPARK-36292:
--

 Summary: Refactor third set of 20 query execution errors to use 
error classes
 Key: SPARK-36292
 URL: https://issues.apache.org/jira/browse/SPARK-36292
 Project: Spark
  Issue Type: Sub-task
  Components: Spark Core, SQL
Affects Versions: 3.2.0
Reporter: Karen Feng


Refactor some exceptions in 
[QueryExecutionErrors|https://github.com/apache/spark/blob/master/sql/catalyst/src/main/scala/org/apache/spark/sql/errors/QueryExecutionErrors.scala]
 to use error classes.

There are currently ~350 exceptions in this file; so this PR only focuses on 
the second 20.
{code:java}
cannotGenerateCodeForUnsupportedTypeError
cannotInterpolateClassIntoCodeBlockError
customCollectionClsNotResolvedError
classUnsupportedByMapObjectsError
nullAsMapKeyNotAllowedError
methodNotDeclaredError
constructorNotFoundError
primaryConstructorNotFoundError
unsupportedNaturalJoinTypeError
notExpectedUnresolvedEncoderError
unsupportedEncoderError
notOverrideExpectedMethodsError
failToConvertValueToJsonError
unexpectedOperatorInCorrelatedSubquery
unreachableError
unsupportedRoundingMode
resolveCannotHandleNestedSchema
inputExternalRowCannotBeNullError
fieldCannotBeNullMsg
fieldCannotBeNullError
{code}
For more detail, see the parent ticket SPARK-36094.






[jira] [Updated] (SPARK-36291) Refactor second set of 20 query execution errors to use error classes

2021-07-26 Thread Karen Feng (Jira)


 [ 
https://issues.apache.org/jira/browse/SPARK-36291?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Karen Feng updated SPARK-36291:
---
Summary: Refactor second set of 20 query execution errors to use error 
classes  (was: Refactor second 20 query execution errors to use error classes)

> Refactor second set of 20 query execution errors to use error classes
> -
>
> Key: SPARK-36291
> URL: https://issues.apache.org/jira/browse/SPARK-36291
> Project: Spark
>  Issue Type: Sub-task
>  Components: Spark Core, SQL
>Affects Versions: 3.2.0
>Reporter: Karen Feng
>Priority: Major
>
> Refactor some exceptions in 
> [QueryExecutionErrors|https://github.com/apache/spark/blob/master/sql/catalyst/src/main/scala/org/apache/spark/sql/errors/QueryExecutionErrors.scala]
>  to use error classes.
> There are currently ~350 exceptions in this file; so this PR only focuses on 
> the second 20.
> {code:java}
> cannotGenerateCodeForUnsupportedTypeError
> cannotInterpolateClassIntoCodeBlockError
> customCollectionClsNotResolvedError
> classUnsupportedByMapObjectsError
> nullAsMapKeyNotAllowedError
> methodNotDeclaredError
> constructorNotFoundError
> primaryConstructorNotFoundError
> unsupportedNaturalJoinTypeError
> notExpectedUnresolvedEncoderError
> unsupportedEncoderError
> notOverrideExpectedMethodsError
> failToConvertValueToJsonError
> unexpectedOperatorInCorrelatedSubquery
> unreachableError
> unsupportedRoundingMode
> resolveCannotHandleNestedSchema
> inputExternalRowCannotBeNullError
> fieldCannotBeNullMsg
> fieldCannotBeNullError
> {code}
> For more detail, see the parent ticket SPARK-36094.






[jira] [Updated] (SPARK-36107) Refactor first set of 20 query execution errors to use error classes

2021-07-26 Thread Karen Feng (Jira)


 [ 
https://issues.apache.org/jira/browse/SPARK-36107?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Karen Feng updated SPARK-36107:
---
Summary: Refactor first set of 20 query execution errors to use error 
classes  (was: Refactor first 20 query execution errors to use error classes)

> Refactor first set of 20 query execution errors to use error classes
> 
>
> Key: SPARK-36107
> URL: https://issues.apache.org/jira/browse/SPARK-36107
> Project: Spark
>  Issue Type: Sub-task
>  Components: Spark Core, SQL
>Affects Versions: 3.2.0
>Reporter: Karen Feng
>Priority: Major
>
> Refactor some exceptions in 
> [QueryExecutionErrors|https://github.com/apache/spark/blob/master/sql/catalyst/src/main/scala/org/apache/spark/sql/errors/QueryExecutionErrors.scala]
>  to use error classes.
> There are currently ~350 exceptions in this file; so this PR only focuses on 
> the first 20.
> {code}
> columnChangeUnsupportedError
> logicalHintOperatorNotRemovedDuringAnalysisError
> cannotEvaluateExpressionError
> cannotGenerateCodeForExpressionError
> cannotTerminateGeneratorError
> castingCauseOverflowError
> cannotChangeDecimalPrecisionError
> invalidInputSyntaxForNumericError
> cannotCastFromNullTypeError
> cannotCastError
> cannotParseDecimalError
> simpleStringWithNodeIdUnsupportedError
> evaluateUnevaluableAggregateUnsupportedError
> dataTypeUnsupportedError
> dataTypeUnsupportedError
> failedExecuteUserDefinedFunctionError
> divideByZeroError
> invalidArrayIndexError
> mapKeyNotExistError
> rowFromCSVParserNotExpectedError
> {code}
> For more detail, see the parent ticket 
> [SPARK-36094|https://issues.apache.org/jira/browse/SPARK-36094].






[jira] [Updated] (SPARK-36291) Refactor second 20 query execution errors to use error classes

2021-07-26 Thread Karen Feng (Jira)


 [ 
https://issues.apache.org/jira/browse/SPARK-36291?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Karen Feng updated SPARK-36291:
---
Description: 
Refactor some exceptions in 
[QueryExecutionErrors|https://github.com/apache/spark/blob/master/sql/catalyst/src/main/scala/org/apache/spark/sql/errors/QueryExecutionErrors.scala]
 to use error classes.

There are currently ~350 exceptions in this file; so this PR only focuses on 
the second 20.
{code:java}
cannotGenerateCodeForUnsupportedTypeError
cannotInterpolateClassIntoCodeBlockError
customCollectionClsNotResolvedError
classUnsupportedByMapObjectsError
nullAsMapKeyNotAllowedError
methodNotDeclaredError
constructorNotFoundError
primaryConstructorNotFoundError
unsupportedNaturalJoinTypeError
notExpectedUnresolvedEncoderError
unsupportedEncoderError
notOverrideExpectedMethodsError
failToConvertValueToJsonError
unexpectedOperatorInCorrelatedSubquery
unreachableError
unsupportedRoundingMode
resolveCannotHandleNestedSchema
inputExternalRowCannotBeNullError
fieldCannotBeNullMsg
fieldCannotBeNullError
{code}
For more detail, see the parent ticket SPARK-36094.

  was:
Refactor some exceptions in 
[QueryExecutionErrors|https://github.com/apache/spark/blob/master/sql/catalyst/src/main/scala/org/apache/spark/sql/errors/QueryExecutionErrors.scala]
 to use error classes.

There are currently ~350 exceptions in this file; so this PR only focuses on 
the first 20.

{code}
columnChangeUnsupportedError
logicalHintOperatorNotRemovedDuringAnalysisError
cannotEvaluateExpressionError
cannotGenerateCodeForExpressionError
cannotTerminateGeneratorError
castingCauseOverflowError
cannotChangeDecimalPrecisionError
invalidInputSyntaxForNumericError
cannotCastFromNullTypeError
cannotCastError
cannotParseDecimalError
simpleStringWithNodeIdUnsupportedError
evaluateUnevaluableAggregateUnsupportedError
dataTypeUnsupportedError
dataTypeUnsupportedError
failedExecuteUserDefinedFunctionError
divideByZeroError
invalidArrayIndexError
mapKeyNotExistError
rowFromCSVParserNotExpectedError
{code}

For more detail, see the parent ticket 
[SPARK-36094|https://issues.apache.org/jira/browse/SPARK-36094].


> Refactor second 20 query execution errors to use error classes
> --
>
> Key: SPARK-36291
> URL: https://issues.apache.org/jira/browse/SPARK-36291
> Project: Spark
>  Issue Type: Sub-task
>  Components: Spark Core, SQL
>Affects Versions: 3.2.0
>Reporter: Karen Feng
>Priority: Major
>
> Refactor some exceptions in 
> [QueryExecutionErrors|https://github.com/apache/spark/blob/master/sql/catalyst/src/main/scala/org/apache/spark/sql/errors/QueryExecutionErrors.scala]
>  to use error classes.
> There are currently ~350 exceptions in this file; so this PR only focuses on 
> the second 20.
> {code:java}
> cannotGenerateCodeForUnsupportedTypeError
> cannotInterpolateClassIntoCodeBlockError
> customCollectionClsNotResolvedError
> classUnsupportedByMapObjectsError
> nullAsMapKeyNotAllowedError
> methodNotDeclaredError
> constructorNotFoundError
> primaryConstructorNotFoundError
> unsupportedNaturalJoinTypeError
> notExpectedUnresolvedEncoderError
> unsupportedEncoderError
> notOverrideExpectedMethodsError
> failToConvertValueToJsonError
> unexpectedOperatorInCorrelatedSubquery
> unreachableError
> unsupportedRoundingMode
> resolveCannotHandleNestedSchema
> inputExternalRowCannotBeNullError
> fieldCannotBeNullMsg
> fieldCannotBeNullError
> {code}
> For more detail, see the parent ticket SPARK-36094.






[jira] [Updated] (SPARK-36107) Refactor first 20 query execution errors to use error classes

2021-07-26 Thread Karen Feng (Jira)


 [ 
https://issues.apache.org/jira/browse/SPARK-36107?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Karen Feng updated SPARK-36107:
---
Description: 
Refactor some exceptions in 
[QueryExecutionErrors|https://github.com/apache/spark/blob/master/sql/catalyst/src/main/scala/org/apache/spark/sql/errors/QueryExecutionErrors.scala]
 to use error classes.

There are currently ~350 exceptions in this file, so this PR only focuses on 
the first 20.

{code}
columnChangeUnsupportedError
logicalHintOperatorNotRemovedDuringAnalysisError
cannotEvaluateExpressionError
cannotGenerateCodeForExpressionError
cannotTerminateGeneratorError
castingCauseOverflowError
cannotChangeDecimalPrecisionError
invalidInputSyntaxForNumericError
cannotCastFromNullTypeError
cannotCastError
cannotParseDecimalError
simpleStringWithNodeIdUnsupportedError
evaluateUnevaluableAggregateUnsupportedError
dataTypeUnsupportedError
dataTypeUnsupportedError
failedExecuteUserDefinedFunctionError
divideByZeroError
invalidArrayIndexError
mapKeyNotExistError
rowFromCSVParserNotExpectedError
{code}

For more detail, see the parent ticket 
[SPARK-36094|https://issues.apache.org/jira/browse/SPARK-36094].

  was:
Refactor some exceptions in 
[QueryExecutionErrors|https://github.com/apache/spark/blob/master/sql/catalyst/src/main/scala/org/apache/spark/sql/errors/QueryExecutionErrors.scala]
 to use error classes.

There are currently ~350 exceptions in this file, so this PR only focuses on a 
few.

For more detail, see the parent ticket 
[SPARK-36094|https://issues.apache.org/jira/browse/SPARK-36094].

Summary: Refactor first 20 query execution errors to use error classes  
(was: Refactor a few query execution errors to use error classes)

> Refactor first 20 query execution errors to use error classes
> -
>
> Key: SPARK-36107
> URL: https://issues.apache.org/jira/browse/SPARK-36107
> Project: Spark
>  Issue Type: Sub-task
>  Components: Spark Core, SQL
>Affects Versions: 3.2.0
>Reporter: Karen Feng
>Priority: Major
>
> Refactor some exceptions in 
> [QueryExecutionErrors|https://github.com/apache/spark/blob/master/sql/catalyst/src/main/scala/org/apache/spark/sql/errors/QueryExecutionErrors.scala]
>  to use error classes.
> There are currently ~350 exceptions in this file, so this PR only focuses on 
> the first 20.
> {code}
> columnChangeUnsupportedError
> logicalHintOperatorNotRemovedDuringAnalysisError
> cannotEvaluateExpressionError
> cannotGenerateCodeForExpressionError
> cannotTerminateGeneratorError
> castingCauseOverflowError
> cannotChangeDecimalPrecisionError
> invalidInputSyntaxForNumericError
> cannotCastFromNullTypeError
> cannotCastError
> cannotParseDecimalError
> simpleStringWithNodeIdUnsupportedError
> evaluateUnevaluableAggregateUnsupportedError
> dataTypeUnsupportedError
> dataTypeUnsupportedError
> failedExecuteUserDefinedFunctionError
> divideByZeroError
> invalidArrayIndexError
> mapKeyNotExistError
> rowFromCSVParserNotExpectedError
> {code}
> For more detail, see the parent ticket 
> [SPARK-36094|https://issues.apache.org/jira/browse/SPARK-36094].






[jira] [Created] (SPARK-36291) Refactor second 20 query execution errors to use error classes

2021-07-26 Thread Karen Feng (Jira)
Karen Feng created SPARK-36291:
--

 Summary: Refactor second 20 query execution errors to use error 
classes
 Key: SPARK-36291
 URL: https://issues.apache.org/jira/browse/SPARK-36291
 Project: Spark
  Issue Type: Sub-task
  Components: Spark Core, SQL
Affects Versions: 3.2.0
Reporter: Karen Feng


Refactor some exceptions in 
[QueryExecutionErrors|https://github.com/apache/spark/blob/master/sql/catalyst/src/main/scala/org/apache/spark/sql/errors/QueryExecutionErrors.scala]
 to use error classes.

There are currently ~350 exceptions in this file, so this PR only focuses on 
the second 20.

{code}
cannotGenerateCodeForUnsupportedTypeError
cannotInterpolateClassIntoCodeBlockError
customCollectionClsNotResolvedError
classUnsupportedByMapObjectsError
nullAsMapKeyNotAllowedError
methodNotDeclaredError
constructorNotFoundError
primaryConstructorNotFoundError
unsupportedNaturalJoinTypeError
notExpectedUnresolvedEncoderError
unsupportedEncoderError
notOverrideExpectedMethodsError
failToConvertValueToJsonError
unexpectedOperatorInCorrelatedSubquery
unreachableError
unsupportedRoundingMode
resolveCannotHandleNestedSchema
inputExternalRowCannotBeNullError
fieldCannotBeNullMsg
fieldCannotBeNullError
{code}

For more detail, see the parent ticket 
[SPARK-36094|https://issues.apache.org/jira/browse/SPARK-36094].






[jira] [Updated] (SPARK-36108) Refactor a few query parsing errors to use error classes

2021-07-23 Thread Karen Feng (Jira)


 [ 
https://issues.apache.org/jira/browse/SPARK-36108?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Karen Feng updated SPARK-36108:
---
Description: 
Refactor some exceptions in 
[QueryParsingErrors|https://github.com/apache/spark/blob/master/sql/catalyst/src/main/scala/org/apache/spark/sql/errors/QueryParsingErrors.scala]
 to use error classes.

There are currently ~100 exceptions in this file, so this PR only focuses on a 
few.

For more detail, see the parent ticket 
[SPARK-36094|https://issues.apache.org/jira/browse/SPARK-36094].

  was:
Add error classes to 
[QueryParsingErrors|https://github.com/apache/spark/blob/master/sql/catalyst/src/main/scala/org/apache/spark/sql/errors/QueryParsingErrors.scala].

There are currently ~100 exceptions in this file; the work on this should be 
broken up into multiple PRs. Comment to place a lock on this ticket.

For more detail, see the parent ticket 
[SPARK-36094|https://issues.apache.org/jira/browse/SPARK-36094].

Summary: Refactor a few query parsing errors to use error classes  
(was: Add error classes to QueryParsingErrors)

> Refactor a few query parsing errors to use error classes
> 
>
> Key: SPARK-36108
> URL: https://issues.apache.org/jira/browse/SPARK-36108
> Project: Spark
>  Issue Type: Sub-task
>  Components: Spark Core, SQL
>Affects Versions: 3.2.0
>Reporter: Karen Feng
>Priority: Major
>
> Refactor some exceptions in 
> [QueryParsingErrors|https://github.com/apache/spark/blob/master/sql/catalyst/src/main/scala/org/apache/spark/sql/errors/QueryParsingErrors.scala]
>  to use error classes.
> There are currently ~100 exceptions in this file, so this PR only focuses on 
> a few.
> For more detail, see the parent ticket 
> [SPARK-36094|https://issues.apache.org/jira/browse/SPARK-36094].






[jira] [Updated] (SPARK-36107) Refactor a few query execution errors to use error classes

2021-07-23 Thread Karen Feng (Jira)


 [ 
https://issues.apache.org/jira/browse/SPARK-36107?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Karen Feng updated SPARK-36107:
---
Description: 
Refactor some exceptions in 
[QueryExecutionErrors|https://github.com/apache/spark/blob/master/sql/catalyst/src/main/scala/org/apache/spark/sql/errors/QueryExecutionErrors.scala]
 to use error classes.

There are currently ~350 exceptions in this file, so this PR only focuses on a 
few.

For more detail, see the parent ticket 
[SPARK-36094|https://issues.apache.org/jira/browse/SPARK-36094].

  was:
Add error classes to 
[QueryExecutionErrors|https://github.com/apache/spark/blob/master/sql/catalyst/src/main/scala/org/apache/spark/sql/errors/QueryExecutionErrors.scala].

There are currently ~350 exceptions in this file; the work on this should be 
broken up into multiple PRs. Comment to place a lock on this ticket.

For more detail, see the parent ticket 
[SPARK-36094|https://issues.apache.org/jira/browse/SPARK-36094].

Summary: Refactor a few query execution errors to use error classes  
(was: Add error classes to QueryExecutionErrors)

> Refactor a few query execution errors to use error classes
> --
>
> Key: SPARK-36107
> URL: https://issues.apache.org/jira/browse/SPARK-36107
> Project: Spark
>  Issue Type: Sub-task
>  Components: Spark Core, SQL
>Affects Versions: 3.2.0
>Reporter: Karen Feng
>Priority: Major
>
> Refactor some exceptions in 
> [QueryExecutionErrors|https://github.com/apache/spark/blob/master/sql/catalyst/src/main/scala/org/apache/spark/sql/errors/QueryExecutionErrors.scala]
>  to use error classes.
> There are currently ~350 exceptions in this file, so this PR only focuses on 
> a few.
> For more detail, see the parent ticket 
> [SPARK-36094|https://issues.apache.org/jira/browse/SPARK-36094].






[jira] [Updated] (SPARK-36074) add error class for StructType.findNestedField

2021-07-15 Thread Karen Feng (Jira)


 [ 
https://issues.apache.org/jira/browse/SPARK-36074?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Karen Feng updated SPARK-36074:
---
Parent: SPARK-36094
Issue Type: Sub-task  (was: Improvement)

> add error class for StructType.findNestedField
> --
>
> Key: SPARK-36074
> URL: https://issues.apache.org/jira/browse/SPARK-36074
> Project: Spark
>  Issue Type: Sub-task
>  Components: SQL
>Affects Versions: 3.3.0
>Reporter: Wenchen Fan
>Assignee: Wenchen Fan
>Priority: Major
> Fix For: 3.2.0
>
>







[jira] [Updated] (SPARK-36094) Group SQL component error messages in Spark error class JSON file

2021-07-14 Thread Karen Feng (Jira)


 [ 
https://issues.apache.org/jira/browse/SPARK-36094?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Karen Feng updated SPARK-36094:
---
Description: 
To improve auditing, reduce duplication, and improve quality of error messages 
thrown from Spark, we should group them in a single JSON file (as discussed in 
the [mailing 
list|http://apache-spark-developers-list.1001551.n3.nabble.com/DISCUSS-Add-error-IDs-td31126.html]
 and introduced in 
[SPARK-34920|https://issues.apache.org/jira/browse/SPARK-34920]).
 In this file, the error messages should be labeled according to a consistent 
error class and with a SQLSTATE.
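
As an illustrative sketch only (the class names, message templates, and SQLSTATE assignments below are hypothetical, not the entries Spark actually ships), entries in such a JSON file could pair each error class with a message template and, where applicable, a SQLSTATE:

```json
{
  "DIVIDE_BY_ZERO" : {
    "message" : [ "divide by zero" ],
    "sqlState" : "22012"
  },
  "UNSUPPORTED_DATATYPE" : {
    "message" : [ "Unsupported data type %s" ]
  }
}
```

Keeping all templates in one file makes duplicated messages visible, which is what enables the de-duplication and auditing this ticket calls for.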

We will start with the SQL component first.
As a starting point, we can build off the exception grouping done in 
[SPARK-33539|https://issues.apache.org/jira/browse/SPARK-33539]. In total, 
there are ~1000 error messages to group, split across three files 
(QueryCompilationErrors, QueryExecutionErrors, and QueryParsingErrors). If you 
work on this ticket, please create a subtask to improve ease of reviewing.

As a guideline, the error classes should be de-duplicated as much as possible 
to improve auditing.
We will improve error message quality as a follow-up.

Here is an example PR that groups a few error messages in the 
QueryCompilationErrors class: [PR 
33309|https://github.com/apache/spark/pull/33309].

  was:
To improve auditing, reduce duplication, and improve quality of error messages 
thrown from Spark, we should group them in a single JSON file (as discussed in 
the [mailing 
list|http://apache-spark-developers-list.1001551.n3.nabble.com/DISCUSS-Add-error-IDs-td31126.html]
 and introduced in 
[SPARK-34920|#diff-d41e24da75af19647fadd76ad0b63ecb22b08c0004b07091e4603a30ec0fe013]).
 In this file, the error messages should be labeled according to a consistent 
error class and with a SQLSTATE.

We will start with the SQL component first, building off the exception grouping 
done in [SPARK-33539|https://issues.apache.org/jira/browse/SPARK-33539]. In 
total, there are ~1000 error messages to group split across three files 
(QueryCompilationErrors, QueryExecutionErrors, and QueryParsingErrors). As a 
result, the work on this has been broken up into three subtasks, each of which 
involve grouping error messages across one of these files. 
This work should be done across multiple PRs per subtask to improve ease of 
reviewing. For each subtask, comment to place a lock and minimize merge 
conflicts down the line.

As a guideline, the error classes should be de-duplicated as much as possible 
to improve auditing.
We will improve error message quality as a follow-up.

Here is an example PR that groups a few error messages in the 
QueryCompilationErrors class: [PR 
33309|https://github.com/apache/spark/pull/33309].


> Group SQL component error messages in Spark error class JSON file
> -
>
> Key: SPARK-36094
> URL: https://issues.apache.org/jira/browse/SPARK-36094
> Project: Spark
>  Issue Type: Improvement
>  Components: Spark Core, SQL
>Affects Versions: 3.2.0
>Reporter: Karen Feng
>Priority: Major
>
> To improve auditing, reduce duplication, and improve quality of error 
> messages thrown from Spark, we should group them in a single JSON file (as 
> discussed in the [mailing 
> list|http://apache-spark-developers-list.1001551.n3.nabble.com/DISCUSS-Add-error-IDs-td31126.html]
>  and introduced in 
> [SPARK-34920|https://issues.apache.org/jira/browse/SPARK-34920]).
>  In this file, the error messages should be labeled according to a consistent 
> error class and with a SQLSTATE.
> We will start with the SQL component first.
> As a starting point, we can build off the exception grouping done in 
> [SPARK-33539|https://issues.apache.org/jira/browse/SPARK-33539]. In total, 
> there are ~1000 error messages to group, split across three files 
> (QueryCompilationErrors, QueryExecutionErrors, and QueryParsingErrors). If 
> you work on this ticket, please create a subtask to improve ease of reviewing.
> As a guideline, the error classes should be de-duplicated as much as possible 
> to improve auditing.
> We will improve error message quality as a follow-up.
> Here is an example PR that groups a few error messages in the 
> QueryCompilationErrors class: [PR 
> 33309|https://github.com/apache/spark/pull/33309].






[jira] [Updated] (SPARK-36106) Refactor a few query compilation errors to use error classes

2021-07-14 Thread Karen Feng (Jira)


 [ 
https://issues.apache.org/jira/browse/SPARK-36106?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Karen Feng updated SPARK-36106:
---
Description: 
Refactor some exceptions in 
[QueryCompilationErrors|https://github.com/apache/spark/blob/master/sql/catalyst/src/main/scala/org/apache/spark/sql/errors/QueryCompilationErrors.scala]
 to use error classes.

There are currently ~450 exceptions in this file, so this PR only refactors a 
few as an example.
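
The refactor pattern the ticket describes can be sketched as follows. This is a minimal, self-contained illustration: the class name, the Map standing in for the error-class JSON file, and the message strings are assumptions for the sketch, not Spark's actual error-class API.

```java
import java.util.Map;

// Illustrative sketch of refactoring hard-coded error messages to error classes.
public class ErrorClassSketch {
    // Stand-in for the shared error-class JSON file: one message template per
    // error class, so the text can be audited and de-duplicated in one place.
    static final Map<String, String> ERROR_CLASS_TO_MESSAGE = Map.of(
        "DIVIDE_BY_ZERO", "divide by zero",
        "UNSUPPORTED_DATATYPE", "Unsupported data type %s");

    // Before: the message text is hard-coded at the throw site.
    static ArithmeticException divideByZeroErrorBefore() {
        return new ArithmeticException("divide by zero");
    }

    // After: the throw site names an error class; the text comes from the table.
    static ArithmeticException divideByZeroError() {
        return new ArithmeticException(
            ERROR_CLASS_TO_MESSAGE.get("DIVIDE_BY_ZERO"));
    }

    // Parameterized messages fill the template at the throw site.
    static UnsupportedOperationException dataTypeUnsupportedError(String type) {
        return new UnsupportedOperationException(
            String.format(ERROR_CLASS_TO_MESSAGE.get("UNSUPPORTED_DATATYPE"), type));
    }

    public static void main(String[] args) {
        System.out.println(divideByZeroError().getMessage());
        System.out.println(dataTypeUnsupportedError("interval").getMessage());
    }
}
```

The throw sites stay one-liners; only the source of the message text changes, which is why the work can be split into small, reviewable batches of ~20 methods.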

  was:
Add error classes to 
[QueryCompilationErrors|https://github.com/apache/spark/blob/master/sql/catalyst/src/main/scala/org/apache/spark/sql/errors/QueryCompilationErrors.scala].

There are currently ~450 exceptions in this file; the work on this should be 
broken up into multiple PRs. Comment to place a lock on this ticket.

For more detail, see the parent ticket 
[SPARK-36094|https://issues.apache.org/jira/browse/SPARK-36094].

Summary: Refactor a few query compilation errors to use error classes  
(was: Add subset of error classes to QueryCompilationErrors)

> Refactor a few query compilation errors to use error classes
> 
>
> Key: SPARK-36106
> URL: https://issues.apache.org/jira/browse/SPARK-36106
> Project: Spark
>  Issue Type: Sub-task
>  Components: Spark Core, SQL
>Affects Versions: 3.2.0
>Reporter: Karen Feng
>Assignee: Karen Feng
>Priority: Major
> Fix For: 3.2.0
>
>
> Refactor some exceptions in 
> [QueryCompilationErrors|https://github.com/apache/spark/blob/master/sql/catalyst/src/main/scala/org/apache/spark/sql/errors/QueryCompilationErrors.scala]
>  to use error classes.
> There are currently ~450 exceptions in this file, so this PR only refactors a 
> few as an example.






[jira] [Updated] (SPARK-36106) Add subset of error classes to QueryCompilationErrors

2021-07-14 Thread Karen Feng (Jira)


 [ 
https://issues.apache.org/jira/browse/SPARK-36106?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Karen Feng updated SPARK-36106:
---
Summary: Add subset of error classes to QueryCompilationErrors  (was: Add 
error classes to QueryCompilationErrors)

> Add subset of error classes to QueryCompilationErrors
> -
>
> Key: SPARK-36106
> URL: https://issues.apache.org/jira/browse/SPARK-36106
> Project: Spark
>  Issue Type: Sub-task
>  Components: Spark Core, SQL
>Affects Versions: 3.2.0
>Reporter: Karen Feng
>Assignee: Karen Feng
>Priority: Major
> Fix For: 3.2.0
>
>
> Add error classes to 
> [QueryCompilationErrors|https://github.com/apache/spark/blob/master/sql/catalyst/src/main/scala/org/apache/spark/sql/errors/QueryCompilationErrors.scala].
> There are currently ~450 exceptions in this file; the work on this should be 
> broken up into multiple PRs. Comment to place a lock on this ticket.
> For more detail, see the parent ticket 
> [SPARK-36094|https://issues.apache.org/jira/browse/SPARK-36094].






[jira] [Updated] (SPARK-36108) Add error classes to QueryParsingErrors

2021-07-13 Thread Karen Feng (Jira)


 [ 
https://issues.apache.org/jira/browse/SPARK-36108?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Karen Feng updated SPARK-36108:
---
Description: 
Add error classes to 
[QueryParsingErrors|https://github.com/apache/spark/blob/master/sql/catalyst/src/main/scala/org/apache/spark/sql/errors/QueryParsingErrors.scala].

There are currently ~100 exceptions in this file; the work on this should be 
broken up into multiple PRs. Comment to place a lock on this ticket.

For more detail, see the parent ticket 
[SPARK-36094|https://issues.apache.org/jira/browse/SPARK-36094].

  was:Add error classes to 
[QueryParsingErrors|https://github.com/apache/spark/blob/master/sql/catalyst/src/main/scala/org/apache/spark/sql/errors/QueryParsingErrors.scala].


> Add error classes to QueryParsingErrors
> ---
>
> Key: SPARK-36108
> URL: https://issues.apache.org/jira/browse/SPARK-36108
> Project: Spark
>  Issue Type: Sub-task
>  Components: Spark Core, SQL
>Affects Versions: 3.2.0
>Reporter: Karen Feng
>Priority: Major
>
> Add error classes to 
> [QueryParsingErrors|https://github.com/apache/spark/blob/master/sql/catalyst/src/main/scala/org/apache/spark/sql/errors/QueryParsingErrors.scala].
> There are currently ~100 exceptions in this file; the work on this should be 
> broken up into multiple PRs. Comment to place a lock on this ticket.
> For more detail, see the parent ticket 
> [SPARK-36094|https://issues.apache.org/jira/browse/SPARK-36094].






[jira] [Updated] (SPARK-36107) Add error classes to QueryExecutionErrors

2021-07-13 Thread Karen Feng (Jira)


 [ 
https://issues.apache.org/jira/browse/SPARK-36107?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Karen Feng updated SPARK-36107:
---
Description: 
Add error classes to 
[QueryExecutionErrors|https://github.com/apache/spark/blob/master/sql/catalyst/src/main/scala/org/apache/spark/sql/errors/QueryExecutionErrors.scala].

There are currently ~350 exceptions in this file; the work on this should be 
broken up into multiple PRs. Comment to place a lock on this ticket.

For more detail, see the parent ticket 
[SPARK-36094|https://issues.apache.org/jira/browse/SPARK-36094].

  was:Add error classes to 
[QueryExecutionErrors|https://github.com/apache/spark/blob/master/sql/catalyst/src/main/scala/org/apache/spark/sql/errors/QueryExecutionErrors.scala].


> Add error classes to QueryExecutionErrors
> -
>
> Key: SPARK-36107
> URL: https://issues.apache.org/jira/browse/SPARK-36107
> Project: Spark
>  Issue Type: Sub-task
>  Components: Spark Core, SQL
>Affects Versions: 3.2.0
>Reporter: Karen Feng
>Priority: Major
>
> Add error classes to 
> [QueryExecutionErrors|https://github.com/apache/spark/blob/master/sql/catalyst/src/main/scala/org/apache/spark/sql/errors/QueryExecutionErrors.scala].
> There are currently ~350 exceptions in this file; the work on this should be 
> broken up into multiple PRs. Comment to place a lock on this ticket.
> For more detail, see the parent ticket 
> [SPARK-36094|https://issues.apache.org/jira/browse/SPARK-36094].






[jira] [Updated] (SPARK-36106) Add error classes to QueryCompilationErrors

2021-07-13 Thread Karen Feng (Jira)


 [ 
https://issues.apache.org/jira/browse/SPARK-36106?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Karen Feng updated SPARK-36106:
---
Description: 
Add error classes to 
[QueryCompilationErrors|https://github.com/apache/spark/blob/master/sql/catalyst/src/main/scala/org/apache/spark/sql/errors/QueryCompilationErrors.scala].

There are currently ~450 exceptions in this file; the work on this should be 
broken up into multiple PRs. Comment to place a lock on this ticket.

For more detail, see the parent ticket 
[SPARK-36094|https://issues.apache.org/jira/browse/SPARK-36094].

  was:
Add error classes to 
[QueryCompilationErrors|https://github.com/apache/spark/blob/master/sql/catalyst/src/main/scala/org/apache/spark/sql/errors/QueryCompilationErrors.scala].

There are currently ~450 exceptions in this file; the work on this should be 
broken up into multiple PRs. Comment to place a lock on this ticket.


> Add error classes to QueryCompilationErrors
> ---
>
> Key: SPARK-36106
> URL: https://issues.apache.org/jira/browse/SPARK-36106
> Project: Spark
>  Issue Type: Sub-task
>  Components: Spark Core, SQL
>Affects Versions: 3.2.0
>Reporter: Karen Feng
>Priority: Major
>
> Add error classes to 
> [QueryCompilationErrors|https://github.com/apache/spark/blob/master/sql/catalyst/src/main/scala/org/apache/spark/sql/errors/QueryCompilationErrors.scala].
> There are currently ~450 exceptions in this file; the work on this should be 
> broken up into multiple PRs. Comment to place a lock on this ticket.
> For more detail, see the parent ticket 
> [SPARK-36094|https://issues.apache.org/jira/browse/SPARK-36094].






[jira] [Updated] (SPARK-36094) Group SQL component error messages in Spark error class JSON file

2021-07-13 Thread Karen Feng (Jira)


 [ 
https://issues.apache.org/jira/browse/SPARK-36094?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Karen Feng updated SPARK-36094:
---
Description: 
To improve auditing, reduce duplication, and improve quality of error messages 
thrown from Spark, we should group them in a single JSON file (as discussed in 
the [mailing 
list|http://apache-spark-developers-list.1001551.n3.nabble.com/DISCUSS-Add-error-IDs-td31126.html]
 and introduced in 
[SPARK-34920|https://issues.apache.org/jira/browse/SPARK-34920]).
 In this file, the error messages should be labeled according to a consistent 
error class and with a SQLSTATE.

We will start with the SQL component first, building off the exception grouping 
done in [SPARK-33539|https://issues.apache.org/jira/browse/SPARK-33539]. In 
total, there are ~1000 error messages to group, split across three files 
(QueryCompilationErrors, QueryExecutionErrors, and QueryParsingErrors). As a 
result, the work on this has been broken up into three subtasks, each of which 
involves grouping the error messages in one of these files.
This work should be done across multiple PRs per subtask to improve ease of 
reviewing. For each subtask, comment to place a lock and minimize merge 
conflicts down the line.

As a guideline, the error classes should be de-duplicated as much as possible 
to improve auditing.
We will improve error message quality as a follow-up.

Here is an example PR that groups a few error messages in the 
QueryCompilationErrors class: [PR 
33309|https://github.com/apache/spark/pull/33309].

  was:
To improve auditing, reduce duplication, and improve quality of error messages 
thrown from Spark, we should group them in a single JSON file (as discussed in 
the [mailing 
list|http://apache-spark-developers-list.1001551.n3.nabble.com/DISCUSS-Add-error-IDs-td31126.html]
 and introduced in 
[SPARK-34920).|#diff-d41e24da75af19647fadd76ad0b63ecb22b08c0004b07091e4603a30ec0fe013]).]
 In this file, the error messages should be labeled according to a consistent 
error class and with a SQLSTATE.

We will start with the SQL component first,


> Group SQL component error messages in Spark error class JSON file
> -
>
> Key: SPARK-36094
> URL: https://issues.apache.org/jira/browse/SPARK-36094
> Project: Spark
>  Issue Type: Improvement
>  Components: Spark Core, SQL
>Affects Versions: 3.2.0
>Reporter: Karen Feng
>Priority: Major
>
> To improve auditing, reduce duplication, and improve quality of error 
> messages thrown from Spark, we should group them in a single JSON file (as 
> discussed in the [mailing 
> list|http://apache-spark-developers-list.1001551.n3.nabble.com/DISCUSS-Add-error-IDs-td31126.html]
>  and introduced in 
> [SPARK-34920|https://issues.apache.org/jira/browse/SPARK-34920]).
>  In this file, the error messages should be labeled according to a consistent 
> error class and with a SQLSTATE.
> We will start with the SQL component first, building off the exception 
> grouping done in 
> [SPARK-33539|https://issues.apache.org/jira/browse/SPARK-33539]. In total, 
> there are ~1000 error messages to group, split across three files 
> (QueryCompilationErrors, QueryExecutionErrors, and QueryParsingErrors). As a 
> result, the work on this has been broken up into three subtasks, each of 
> which involves grouping the error messages in one of these files.
> This work should be done across multiple PRs per subtask to improve ease of 
> reviewing. For each subtask, comment to place a lock and minimize merge 
> conflicts down the line.
> As a guideline, the error classes should be de-duplicated as much as possible 
> to improve auditing.
> We will improve error message quality as a follow-up.
> Here is an example PR that groups a few error messages in the 
> QueryCompilationErrors class: [PR 
> 33309|https://github.com/apache/spark/pull/33309].






[jira] [Updated] (SPARK-36094) Group SQL component error messages in Spark error class JSON file

2021-07-13 Thread Karen Feng (Jira)


 [ 
https://issues.apache.org/jira/browse/SPARK-36094?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Karen Feng updated SPARK-36094:
---
Summary: Group SQL component error messages in Spark error class JSON file  
(was: Group error messages in JSON file)

> Group SQL component error messages in Spark error class JSON file
> -
>
> Key: SPARK-36094
> URL: https://issues.apache.org/jira/browse/SPARK-36094
> Project: Spark
>  Issue Type: Improvement
>  Components: Spark Core, SQL
>Affects Versions: 3.2.0
>Reporter: Karen Feng
>Priority: Major
>
> To improve auditing, reduce duplication, and improve quality of error 
> messages thrown from Spark, we should group them in a single JSON file (as 
> discussed in the [mailing 
> list|http://apache-spark-developers-list.1001551.n3.nabble.com/DISCUSS-Add-error-IDs-td31126.html]
>  and introduced in 
> [SPARK-34920).|#diff-d41e24da75af19647fadd76ad0b63ecb22b08c0004b07091e4603a30ec0fe013]).]
>  In this file, the error messages should be labeled according to a consistent 
> error class and with a SQLSTATE.
> We will start with the SQL component first,






[jira] [Updated] (SPARK-36094) Group error messages in JSON file

2021-07-13 Thread Karen Feng (Jira)


 [ 
https://issues.apache.org/jira/browse/SPARK-36094?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Karen Feng updated SPARK-36094:
---
Description: 
To improve auditing, reduce duplication, and improve quality of error messages 
thrown from Spark, we should group them in a single JSON file (as discussed in 
the [mailing 
list|http://apache-spark-developers-list.1001551.n3.nabble.com/DISCUSS-Add-error-IDs-td31126.html]
 and introduced in 
[SPARK-34920).|#diff-d41e24da75af19647fadd76ad0b63ecb22b08c0004b07091e4603a30ec0fe013]).]
 In this file, the error messages should be labeled according to a consistent 
error class and with a SQLSTATE.

We will start with the SQL component first,

  was:
To improve auditing, reduce duplication, and improve quality of error messages 
thrown from Spark, we should group them in a single JSON file (as discussed in 
the [mailing 
list|http://apache-spark-developers-list.1001551.n3.nabble.com/DISCUSS-Add-error-IDs-td31126.html]
 and introduced in 
[SPARK-34920|[https://github.com/apache/spark/commit/e3bd817d65ef65c68e40a2937aab0ec70a4afb6f#diff-d41e24da75af19647fadd76ad0b63ecb22b08c0004b07091e4603a30ec0fe013]).]
In this file, the error messages should be labeled according to a consistent 
error class and with a SQLSTATE.



 


> Group error messages in JSON file
> -
>
> Key: SPARK-36094
> URL: https://issues.apache.org/jira/browse/SPARK-36094
> Project: Spark
>  Issue Type: Improvement
>  Components: Spark Core, SQL
>Affects Versions: 3.2.0
>Reporter: Karen Feng
>Priority: Major
>
> To improve auditing, reduce duplication, and improve quality of error 
> messages thrown from Spark, we should group them in a single JSON file (as 
> discussed in the [mailing 
> list|http://apache-spark-developers-list.1001551.n3.nabble.com/DISCUSS-Add-error-IDs-td31126.html]
>  and introduced in 
> [SPARK-34920).|#diff-d41e24da75af19647fadd76ad0b63ecb22b08c0004b07091e4603a30ec0fe013]).]
>  In this file, the error messages should be labeled according to a consistent 
> error class and with a SQLSTATE.
> We will start with the SQL component first,






[jira] [Updated] (SPARK-36094) Group error messages in JSON file

2021-07-13 Thread Karen Feng (Jira)


 [ 
https://issues.apache.org/jira/browse/SPARK-36094?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Karen Feng updated SPARK-36094:
---
Description: 
To improve auditing, reduce duplication, and improve quality of error messages 
thrown from Spark, we should group them in a single JSON file (as discussed in 
the [mailing 
list|http://apache-spark-developers-list.1001551.n3.nabble.com/DISCUSS-Add-error-IDs-td31126.html]
 and introduced in 
[SPARK-34920|https://github.com/apache/spark/commit/e3bd817d65ef65c68e40a2937aab0ec70a4afb6f#diff-d41e24da75af19647fadd76ad0b63ecb22b08c0004b07091e4603a30ec0fe013]).
In this file, the error messages should be labeled according to a consistent 
error class and with a SQLSTATE.



 

  was:To improve auditing, reduce duplication, and improve quality of error 
messages thrown from Spark, we should group them in a single JSON file. In this 
file, the error messages should be labeled according to a consistent error 
class and with a SQLSTATE.


> Group error messages in JSON file
> -
>
> Key: SPARK-36094
> URL: https://issues.apache.org/jira/browse/SPARK-36094
> Project: Spark
>  Issue Type: Improvement
>  Components: Spark Core, SQL
>Affects Versions: 3.2.0
>Reporter: Karen Feng
>Priority: Major
>
> To improve auditing, reduce duplication, and improve quality of error 
> messages thrown from Spark, we should group them in a single JSON file (as 
> discussed in the [mailing 
> list|http://apache-spark-developers-list.1001551.n3.nabble.com/DISCUSS-Add-error-IDs-td31126.html]
>  and introduced in 
> [SPARK-34920|https://github.com/apache/spark/commit/e3bd817d65ef65c68e40a2937aab0ec70a4afb6f#diff-d41e24da75af19647fadd76ad0b63ecb22b08c0004b07091e4603a30ec0fe013]).
> In this file, the error messages should be labeled according to a consistent 
> error class and with a SQLSTATE.
>  






[jira] [Updated] (SPARK-36106) Add error classes to QueryCompilationErrors

2021-07-12 Thread Karen Feng (Jira)


 [ 
https://issues.apache.org/jira/browse/SPARK-36106?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Karen Feng updated SPARK-36106:
---
Description: 
Add error classes to 
[QueryCompilationErrors|https://github.com/apache/spark/blob/master/sql/catalyst/src/main/scala/org/apache/spark/sql/errors/QueryCompilationErrors.scala].

There are currently ~450 exceptions in this file; the work should be broken up 
into multiple PRs. Comment on this ticket to claim (lock) a chunk of the work.
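As a rough sketch of the error-class pattern (Python here for brevity; the actual implementation is in Scala/Java, and all names below are assumptions, not Spark's real API), migrating an exception amounts to replacing a hard-coded message with a lookup into the central registry:

```python
# Hypothetical sketch: messages live in one central registry, and
# exceptions carry only an error class plus message parameters.
ERROR_CLASSES = {
    "UNRESOLVED_COLUMN": {
        "message": "Cannot resolve column '%s' given input columns: %s.",
        "sqlState": "42703",
    },
}

class AnalysisError(Exception):
    """Toy analysis exception keyed by error class (illustrative only)."""
    def __init__(self, error_class, *params):
        entry = ERROR_CLASSES[error_class]
        # Format the centralized template with the call-site parameters.
        super().__init__(entry["message"] % params)
        self.error_class = error_class
        self.sql_state = entry["sqlState"]

err = AnalysisError("UNRESOLVED_COLUMN", "c", "a, b")
```

With this shape, auditing all messages means reading one registry rather than ~450 call sites.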

  was:Add error classes to 
[QueryCompilationErrors|https://github.com/apache/spark/blob/master/sql/catalyst/src/main/scala/org/apache/spark/sql/errors/QueryCompilationErrors.scala].


> Add error classes to QueryCompilationErrors
> ---
>
> Key: SPARK-36106
> URL: https://issues.apache.org/jira/browse/SPARK-36106
> Project: Spark
>  Issue Type: Sub-task
>  Components: Spark Core, SQL
>Affects Versions: 3.2.0
>Reporter: Karen Feng
>Priority: Major
>
> Add error classes to 
> [QueryCompilationErrors|https://github.com/apache/spark/blob/master/sql/catalyst/src/main/scala/org/apache/spark/sql/errors/QueryCompilationErrors.scala].
> There are currently ~450 exceptions in this file; the work should be broken up 
> into multiple PRs. Comment on this ticket to claim (lock) a chunk of the work.






[jira] [Commented] (SPARK-36106) Add error classes to QueryCompilationErrors

2021-07-12 Thread Karen Feng (Jira)


[ 
https://issues.apache.org/jira/browse/SPARK-36106?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17379463#comment-17379463
 ] 

Karen Feng commented on SPARK-36106:


I am working on the first 10 exceptions as a sample PR.

> Add error classes to QueryCompilationErrors
> ---
>
> Key: SPARK-36106
> URL: https://issues.apache.org/jira/browse/SPARK-36106
> Project: Spark
>  Issue Type: Sub-task
>  Components: Spark Core, SQL
>Affects Versions: 3.2.0
>Reporter: Karen Feng
>Priority: Major
>
> Add error classes to 
> [QueryCompilationErrors|https://github.com/apache/spark/blob/master/sql/catalyst/src/main/scala/org/apache/spark/sql/errors/QueryCompilationErrors.scala].
> There are currently ~450 exceptions in this file; the work should be broken up 
> into multiple PRs. Comment on this ticket to claim (lock) a chunk of the work.






[jira] [Created] (SPARK-36108) Add error classes to QueryParsingErrors

2021-07-12 Thread Karen Feng (Jira)
Karen Feng created SPARK-36108:
--

 Summary: Add error classes to QueryParsingErrors
 Key: SPARK-36108
 URL: https://issues.apache.org/jira/browse/SPARK-36108
 Project: Spark
  Issue Type: Sub-task
  Components: Spark Core, SQL
Affects Versions: 3.2.0
Reporter: Karen Feng


Add error classes to 
[QueryParsingErrors|https://github.com/apache/spark/blob/master/sql/catalyst/src/main/scala/org/apache/spark/sql/errors/QueryParsingErrors.scala].






[jira] [Updated] (SPARK-36107) Add error classes to QueryExecutionErrors

2021-07-12 Thread Karen Feng (Jira)


 [ 
https://issues.apache.org/jira/browse/SPARK-36107?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Karen Feng updated SPARK-36107:
---
Description: Add error classes to 
[QueryExecutionErrors|https://github.com/apache/spark/blob/master/sql/catalyst/src/main/scala/org/apache/spark/sql/errors/QueryExecutionErrors.scala].
  (was: Add error classes to QueryCompilationErrors.)

> Add error classes to QueryExecutionErrors
> -
>
> Key: SPARK-36107
> URL: https://issues.apache.org/jira/browse/SPARK-36107
> Project: Spark
>  Issue Type: Sub-task
>  Components: Spark Core, SQL
>Affects Versions: 3.2.0
>Reporter: Karen Feng
>Priority: Major
>
> Add error classes to 
> [QueryExecutionErrors|https://github.com/apache/spark/blob/master/sql/catalyst/src/main/scala/org/apache/spark/sql/errors/QueryExecutionErrors.scala].






[jira] [Updated] (SPARK-36106) Add error classes to QueryCompilationErrors

2021-07-12 Thread Karen Feng (Jira)


 [ 
https://issues.apache.org/jira/browse/SPARK-36106?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Karen Feng updated SPARK-36106:
---
Description: Add error classes to 
[QueryCompilationErrors|https://github.com/apache/spark/blob/master/sql/catalyst/src/main/scala/org/apache/spark/sql/errors/QueryCompilationErrors.scala].
  (was: Add error classes to QueryCompilationErrors.)

> Add error classes to QueryCompilationErrors
> ---
>
> Key: SPARK-36106
> URL: https://issues.apache.org/jira/browse/SPARK-36106
> Project: Spark
>  Issue Type: Sub-task
>  Components: Spark Core, SQL
>Affects Versions: 3.2.0
>Reporter: Karen Feng
>Priority: Major
>
> Add error classes to 
> [QueryCompilationErrors|https://github.com/apache/spark/blob/master/sql/catalyst/src/main/scala/org/apache/spark/sql/errors/QueryCompilationErrors.scala].






[jira] [Created] (SPARK-36107) Add error classes to QueryExecutionErrors

2021-07-12 Thread Karen Feng (Jira)
Karen Feng created SPARK-36107:
--

 Summary: Add error classes to QueryExecutionErrors
 Key: SPARK-36107
 URL: https://issues.apache.org/jira/browse/SPARK-36107
 Project: Spark
  Issue Type: Sub-task
  Components: Spark Core, SQL
Affects Versions: 3.2.0
Reporter: Karen Feng


Add error classes to QueryCompilationErrors.






[jira] [Created] (SPARK-36106) Add error classes to QueryCompilationErrors

2021-07-12 Thread Karen Feng (Jira)
Karen Feng created SPARK-36106:
--

 Summary: Add error classes to QueryCompilationErrors
 Key: SPARK-36106
 URL: https://issues.apache.org/jira/browse/SPARK-36106
 Project: Spark
  Issue Type: Sub-task
  Components: Spark Core, SQL
Affects Versions: 3.2.0
Reporter: Karen Feng


Add error classes to QueryCompilationErrors.






[jira] [Updated] (SPARK-36094) Group error messages in JSON file

2021-07-12 Thread Karen Feng (Jira)


 [ 
https://issues.apache.org/jira/browse/SPARK-36094?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Karen Feng updated SPARK-36094:
---
Component/s: SQL

> Group error messages in JSON file
> -
>
> Key: SPARK-36094
> URL: https://issues.apache.org/jira/browse/SPARK-36094
> Project: Spark
>  Issue Type: Improvement
>  Components: Spark Core, SQL
>Affects Versions: 3.2.0
>Reporter: Karen Feng
>Priority: Major
>
> To improve auditing, reduce duplication, and improve quality of error 
> messages thrown from Spark, we should group them in a single JSON file. In 
> this file, the error messages should be labeled according to a consistent 
> error class and with a SQLSTATE.






[jira] [Created] (SPARK-36094) Group error messages in JSON file

2021-07-12 Thread Karen Feng (Jira)
Karen Feng created SPARK-36094:
--

 Summary: Group error messages in JSON file
 Key: SPARK-36094
 URL: https://issues.apache.org/jira/browse/SPARK-36094
 Project: Spark
  Issue Type: Improvement
  Components: Spark Core
Affects Versions: 3.2.0
Reporter: Karen Feng


To improve auditing, reduce duplication, and improve quality of error messages 
thrown from Spark, we should group them in a single JSON file. In this file, 
the error messages should be labeled according to a consistent error class and 
with a SQLSTATE.






[jira] [Updated] (SPARK-35866) Improve error message quality

2021-07-12 Thread Karen Feng (Jira)


 [ 
https://issues.apache.org/jira/browse/SPARK-35866?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Karen Feng updated SPARK-35866:
---
Description: 
In the SPIP: Standardize Exception Messages in Spark, there are three major 
improvements proposed:
 # Group error messages in dedicated files: SPARK-33539
 # Establish an error message guideline for developers SPARK-35140
 # Improve error message quality

Based on the guideline, we should start improving the error messages.

  was:
In the SPIP: Standardize Exception Messages in Spark, there are three major 
improvements proposed:
 # Group error messages in dedicated files: SPARK-33539
 # Establish an error message guideline for developers SPARK-35140
 # Improve error message quality

Based on the guideline, we can start improving the error messages in the 
dedicated files. To make auditing easy, we should use the 
[SparkThrowable|https://github.com/apache/spark/blob/master/core/src/main/java/org/apache/spark/SparkThrowable.java]
 framework; then, the error messages can be centralized in a [single JSON 
file|https://github.com/apache/spark/blob/master/core/src/main/resources/error/error-classes.json].


> Improve error message quality
> -
>
> Key: SPARK-35866
> URL: https://issues.apache.org/jira/browse/SPARK-35866
> Project: Spark
>  Issue Type: Improvement
>  Components: Spark Core, SQL
>Affects Versions: 3.2.0
>Reporter: Karen Feng
>Priority: Major
>
> In the SPIP: Standardize Exception Messages in Spark, there are three major 
> improvements proposed:
>  # Group error messages in dedicated files: SPARK-33539
>  # Establish an error message guideline for developers SPARK-35140
>  # Improve error message quality
> Based on the guideline, we should start improving the error messages.






[jira] [Updated] (SPARK-36079) Null-based filter estimates should always be non-negative

2021-07-09 Thread Karen Feng (Jira)


 [ 
https://issues.apache.org/jira/browse/SPARK-36079?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Karen Feng updated SPARK-36079:
---
Summary: Null-based filter estimates should always be non-negative  (was: 
Filter estimate should always be non-negative)

> Null-based filter estimates should always be non-negative
> -
>
> Key: SPARK-36079
> URL: https://issues.apache.org/jira/browse/SPARK-36079
> Project: Spark
>  Issue Type: Bug
>  Components: SQL
>Affects Versions: 3.2.0
>Reporter: Karen Feng
>Priority: Major
>
> It's possible for a column's statistics to have a higher `nullCount` than the 
> table's `rowCount`. In this case, the filter estimates come back outside of 
> the reasonable range (between 0 and 1).






[jira] [Created] (SPARK-36079) Filter estimate should always be non-negative

2021-07-09 Thread Karen Feng (Jira)
Karen Feng created SPARK-36079:
--

 Summary: Filter estimate should always be non-negative
 Key: SPARK-36079
 URL: https://issues.apache.org/jira/browse/SPARK-36079
 Project: Spark
  Issue Type: Bug
  Components: SQL
Affects Versions: 3.2.0
Reporter: Karen Feng


It's possible for a column's statistics to have a higher `nullCount` than the 
table's `rowCount`. In this case, the filter estimates come back outside of the 
reasonable range (between 0 and 1).
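A minimal sketch of the intended fix (a hypothetical helper, not Spark's actual FilterEstimation code): clamp the null-based selectivity into the [0, 1] range before it feeds into cardinality estimates:

```python
def null_selectivity(null_count, row_count, is_null):
    """Estimate the fraction of rows matching IS NULL / IS NOT NULL.

    Illustrative only: nullCount may exceed rowCount when statistics are
    stale or inconsistent, so the estimate is clamped to stay in [0, 1].
    """
    if row_count <= 0:
        return 1.0  # no reliable stats; fall back to the safe upper bound
    fraction = null_count / row_count
    selectivity = fraction if is_null else 1.0 - fraction
    return min(1.0, max(0.0, selectivity))
```

Without the clamp, `null_count > row_count` yields a selectivity above 1 for IS NULL and below 0 for IS NOT NULL.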






[jira] [Updated] (SPARK-35866) Improve error message quality

2021-07-09 Thread Karen Feng (Jira)


 [ 
https://issues.apache.org/jira/browse/SPARK-35866?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Karen Feng updated SPARK-35866:
---
Description: 
In the SPIP: Standardize Exception Messages in Spark, there are three major 
improvements proposed:
 # Group error messages in dedicated files: SPARK-33539
 # Establish an error message guideline for developers SPARK-35140
 # Improve error message quality

Based on the guideline, we can start improving the error messages in the 
dedicated files. To make auditing easy, we should use the 
[SparkThrowable|https://github.com/apache/spark/blob/master/core/src/main/java/org/apache/spark/SparkThrowable.java]
 framework; then, the error messages can be centralized in a [single JSON 
file|https://github.com/apache/spark/blob/master/core/src/main/resources/error/error-classes.json].

  was:
In the SPIP: Standardize Exception Messages in Spark, there are three major 
improvements proposed:
 # Group error messages in dedicated files: 
[SPARK-33539|https://issues.apache.org/jira/browse/SPARK-33539]
 # Establish an error message guideline for developers 
[SPARK-35140|https://issues.apache.org/jira/browse/SPARK-35140]
 # Improve error message quality

Based on the guideline, we can start improving the error messages in the 
dedicated files.


> Improve error message quality
> -
>
> Key: SPARK-35866
> URL: https://issues.apache.org/jira/browse/SPARK-35866
> Project: Spark
>  Issue Type: Improvement
>  Components: Spark Core, SQL
>Affects Versions: 3.2.0
>Reporter: Karen Feng
>Priority: Major
>
> In the SPIP: Standardize Exception Messages in Spark, there are three major 
> improvements proposed:
>  # Group error messages in dedicated files: SPARK-33539
>  # Establish an error message guideline for developers SPARK-35140
>  # Improve error message quality
> Based on the guideline, we can start improving the error messages in the 
> dedicated files. To make auditing easy, we should use the 
> [SparkThrowable|https://github.com/apache/spark/blob/master/core/src/main/java/org/apache/spark/SparkThrowable.java]
>  framework; then, the error messages can be centralized in a [single JSON 
> file|https://github.com/apache/spark/blob/master/core/src/main/resources/error/error-classes.json].






[jira] [Commented] (SPARK-35955) Fix decimal overflow issues for Average

2021-07-01 Thread Karen Feng (Jira)


[ 
https://issues.apache.org/jira/browse/SPARK-35955?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17372910#comment-17372910
 ] 

Karen Feng commented on SPARK-35955:


I have changes almost ready locally and will open a PR soon. [~dc-heros], what is 
the state of your work?

> Fix decimal overflow issues for Average
> ---
>
> Key: SPARK-35955
> URL: https://issues.apache.org/jira/browse/SPARK-35955
> Project: Spark
>  Issue Type: Bug
>  Components: SQL
>Affects Versions: 3.0.0
>Reporter: Karen Feng
>Priority: Major
>
> Fix decimal overflow issues for decimal average in ANSI mode. Linked to 
> SPARK-32018 and SPARK-28067, which address decimal sum.
> Repro:
>  
> {code:java}
> import org.apache.spark.sql.functions._
> spark.conf.set("spark.sql.ansi.enabled", true)
> val df = Seq(
>  (BigDecimal("1000"), 1),
>  (BigDecimal("1000"), 1),
>  (BigDecimal("1000"), 2),
>  (BigDecimal("1000"), 2),
>  (BigDecimal("1000"), 2),
>  (BigDecimal("1000"), 2),
>  (BigDecimal("1000"), 2),
>  (BigDecimal("1000"), 2),
>  (BigDecimal("1000"), 2),
>  (BigDecimal("1000"), 2),
>  (BigDecimal("1000"), 2),
>  (BigDecimal("1000"), 2)).toDF("decNum", "intNum")
> val df2 = df.withColumnRenamed("decNum", "decNum2").join(df, 
> "intNum").agg(mean("decNum"))
> df2.show(40,false)
> {code}
>  
> Should throw an exception (as sum overflows), but instead returns:
>  
> {code:java}
> +---+
> |avg(decNum)|
> +---+
> |null   |
> +---+{code}
>  






[jira] [Created] (SPARK-35958) Refactor SparkError.scala to SparkThrowable.java

2021-06-30 Thread Karen Feng (Jira)
Karen Feng created SPARK-35958:
--

 Summary: Refactor SparkError.scala to SparkThrowable.java
 Key: SPARK-35958
 URL: https://issues.apache.org/jira/browse/SPARK-35958
 Project: Spark
  Issue Type: Improvement
  Components: Spark Core
Affects Versions: 3.2.0
Reporter: Karen Feng


Error has a special meaning in Java; SparkError should encompass all 
Throwables. It'd be more correct to rename SparkError to SparkThrowable.

In addition, some Throwables come from Java, so to maximize usability, we 
should migrate the base trait from Scala to Java.
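A rough Python analogue of the intent (all names assumed; the actual interface is defined in Java): the base type becomes a mixin that any Throwable subtype, error or exception, can implement:

```python
class SparkThrowable:
    """Toy mixin carrying an error class and SQLSTATE (sketch only)."""
    def get_error_class(self):
        raise NotImplementedError
    def get_sql_state(self):
        raise NotImplementedError

class SparkArithmeticException(ArithmeticError, SparkThrowable):
    """An ordinary exception that also exposes the error-class metadata."""
    def __init__(self, error_class, sql_state, message):
        super().__init__(message)
        self._error_class = error_class
        self._sql_state = sql_state
    def get_error_class(self):
        return self._error_class
    def get_sql_state(self):
        return self._sql_state

e = SparkArithmeticException("DIVIDE_BY_ZERO", "22012", "Division by zero.")
```

The point of the rename is visible here: the mixin constrains neither the Error nor the Exception side of the Throwable hierarchy.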






[jira] [Updated] (SPARK-35958) Refactor SparkError.scala to SparkThrowable.java

2021-06-30 Thread Karen Feng (Jira)


 [ 
https://issues.apache.org/jira/browse/SPARK-35958?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Karen Feng updated SPARK-35958:
---
Description: 
Following up from SPARK-34920:

Error has a special meaning in Java; SparkError should encompass all 
Throwables. It'd be more correct to rename SparkError to SparkThrowable.

In addition, some Throwables come from Java, so to maximize usability, we 
should migrate the base trait from Scala to Java.

  was:
Error has a special meaning in Java; SparkError should encompass all 
Throwables. It'd be more correct to rename SparkError to SparkThrowable.

In addition, some Throwables come from Java, so to maximize usability, we 
should migrate the base trait from Scala to Java.


> Refactor SparkError.scala to SparkThrowable.java
> 
>
> Key: SPARK-35958
> URL: https://issues.apache.org/jira/browse/SPARK-35958
> Project: Spark
>  Issue Type: Improvement
>  Components: Spark Core
>Affects Versions: 3.2.0
>Reporter: Karen Feng
>Priority: Major
>
> Following up from SPARK-34920:
> Error has a special meaning in Java; SparkError should encompass all 
> Throwables. It'd be more correct to rename SparkError to SparkThrowable.
> In addition, some Throwables come from Java, so to maximize usability, we 
> should migrate the base trait from Scala to Java.






[jira] [Updated] (SPARK-35955) Fix decimal overflow issues for Average

2021-06-30 Thread Karen Feng (Jira)


 [ 
https://issues.apache.org/jira/browse/SPARK-35955?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Karen Feng updated SPARK-35955:
---
Description: 
Fix decimal overflow issues for decimal average in ANSI mode. Linked to 
SPARK-32018 and SPARK-28067, which address decimal sum.

Repro:

 
{code:java}
import org.apache.spark.sql.functions._
spark.conf.set("spark.sql.ansi.enabled", true)

val df = Seq(
 (BigDecimal("1000"), 1),
 (BigDecimal("1000"), 1),
 (BigDecimal("1000"), 2),
 (BigDecimal("1000"), 2),
 (BigDecimal("1000"), 2),
 (BigDecimal("1000"), 2),
 (BigDecimal("1000"), 2),
 (BigDecimal("1000"), 2),
 (BigDecimal("1000"), 2),
 (BigDecimal("1000"), 2),
 (BigDecimal("1000"), 2),
 (BigDecimal("1000"), 2)).toDF("decNum", "intNum")
val df2 = df.withColumnRenamed("decNum", "decNum2").join(df, 
"intNum").agg(mean("decNum"))
df2.show(40,false)
{code}
 

Should throw an exception (as sum overflows), but instead returns:

 
{code:java}
+---+
|avg(decNum)|
+---+
|null   |
+---+{code}
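A toy Python model of the failure mode (the mechanics are simplified assumptions, not Spark's actual Decimal implementation): when the intermediate sum overflows and is nulled out, the division silently propagates null instead of raising:

```python
from decimal import Decimal

# Rough stand-in for the capacity of a Decimal(38, _) accumulator.
OVERFLOW_LIMIT = Decimal(10) ** 38

def buggy_average(values):
    total = sum(values)
    # Non-ANSI sum behavior: overflow is replaced by null (None)...
    if abs(total) >= OVERFLOW_LIMIT:
        total = None
    # ...and Average = sum / count then propagates the null silently,
    # even when ANSI mode should have raised on the overflow.
    return None if total is None else total / len(values)
```

The fix is to surface the overflow from the sum before it reaches the division, so ANSI mode raises instead of returning null.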
 

  was:Return null on overflow for decimal average. Linked to SPARK-32018 and 
SPARK-28067, which address decimal sum.


> Fix decimal overflow issues for Average
> ---
>
> Key: SPARK-35955
> URL: https://issues.apache.org/jira/browse/SPARK-35955
> Project: Spark
>  Issue Type: Bug
>  Components: SQL
>Affects Versions: 3.0.0
>Reporter: Karen Feng
>Priority: Major
>
> Fix decimal overflow issues for decimal average in ANSI mode. Linked to 
> SPARK-32018 and SPARK-28067, which address decimal sum.
> Repro:
>  
> {code:java}
> import org.apache.spark.sql.functions._
> spark.conf.set("spark.sql.ansi.enabled", true)
> val df = Seq(
>  (BigDecimal("1000"), 1),
>  (BigDecimal("1000"), 1),
>  (BigDecimal("1000"), 2),
>  (BigDecimal("1000"), 2),
>  (BigDecimal("1000"), 2),
>  (BigDecimal("1000"), 2),
>  (BigDecimal("1000"), 2),
>  (BigDecimal("1000"), 2),
>  (BigDecimal("1000"), 2),
>  (BigDecimal("1000"), 2),
>  (BigDecimal("1000"), 2),
>  (BigDecimal("1000"), 2)).toDF("decNum", "intNum")
> val df2 = df.withColumnRenamed("decNum", "decNum2").join(df, 
> "intNum").agg(mean("decNum"))
> df2.show(40,false)
> {code}
>  
> Should throw an exception (as sum overflows), but instead returns:
>  
> {code:java}
> +---+
> |avg(decNum)|
> +---+
> |null   |
> +---+{code}
>  






[jira] [Updated] (SPARK-35955) Fix decimal overflow issues for Average

2021-06-30 Thread Karen Feng (Jira)


 [ 
https://issues.apache.org/jira/browse/SPARK-35955?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Karen Feng updated SPARK-35955:
---
Description: Return null on overflow for decimal average. Linked to 
SPARK-32018 and SPARK-28067, which address decimal sum.  (was: Return null on 
overflow for decimal average.)

> Fix decimal overflow issues for Average
> ---
>
> Key: SPARK-35955
> URL: https://issues.apache.org/jira/browse/SPARK-35955
> Project: Spark
>  Issue Type: Bug
>  Components: SQL
>Affects Versions: 3.0.0
>Reporter: Karen Feng
>Priority: Major
>
> Return null on overflow for decimal average. Linked to SPARK-32018 and 
> SPARK-28067, which address decimal sum.






[jira] [Created] (SPARK-35955) Fix decimal overflow issues for Average

2021-06-30 Thread Karen Feng (Jira)
Karen Feng created SPARK-35955:
--

 Summary: Fix decimal overflow issues for Average
 Key: SPARK-35955
 URL: https://issues.apache.org/jira/browse/SPARK-35955
 Project: Spark
  Issue Type: Bug
  Components: SQL
Affects Versions: 3.0.0
Reporter: Karen Feng


Return null on overflow for decimal average.






[jira] [Created] (SPARK-35866) Improve error message quality

2021-06-23 Thread Karen Feng (Jira)
Karen Feng created SPARK-35866:
--

 Summary: Improve error message quality
 Key: SPARK-35866
 URL: https://issues.apache.org/jira/browse/SPARK-35866
 Project: Spark
  Issue Type: Improvement
  Components: Spark Core, SQL
Affects Versions: 3.2.0
Reporter: Karen Feng


In the SPIP: Standardize Exception Messages in Spark, there are three major 
improvements proposed:
 # Group error messages in dedicated files: 
[SPARK-33539|https://issues.apache.org/jira/browse/SPARK-33539]
 # Establish an error message guideline for developers 
[SPARK-35140|https://issues.apache.org/jira/browse/SPARK-35140]
 # Improve error message quality

Based on the guideline, we can start improving the error messages in the 
dedicated files.






[jira] [Updated] (SPARK-35636) Do not push down extract value in higher order function that references both sides of a join

2021-06-03 Thread Karen Feng (Jira)


 [ 
https://issues.apache.org/jira/browse/SPARK-35636?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Karen Feng updated SPARK-35636:
---
Summary: Do not push down extract value in higher order function that 
references both sides of a join  (was: Do not push lambda variables out of 
lambda functions in NestedColumnAliasing)

> Do not push down extract value in higher order function that references both 
> sides of a join
> 
>
> Key: SPARK-35636
> URL: https://issues.apache.org/jira/browse/SPARK-35636
> Project: Spark
>  Issue Type: Improvement
>  Components: SQL
>Affects Versions: 3.1.0
>Reporter: Karen Feng
>Priority: Major
>
> Currently, lambda keys can be referenced outside of the lambda function:
> {quote}Project [transform(keys#0, lambdafunction(_extract_v1#0, lambda key#0, 
> false)) AS a#0]
> +- 'Join Cross
> :- Project [kvs#0[lambda key#0].v1 AS _extract_v1#0]
> :  +- LocalRelation , [kvs#0]
> +- LocalRelation , [keys#0]{quote}
> This should be unchanged from the original state:
> {quote}Project [transform(keys#418, lambdafunction(kvs#417[lambda 
> key#420].v1, lambda key#420, false)) AS a#419]
> +- Join Cross
> :- LocalRelation , [kvs#417]
> +- LocalRelation , [keys#418]{quote}






[jira] [Updated] (SPARK-35636) Do not push lambda variables out of lambda functions in NestedColumnAliasing

2021-06-03 Thread Karen Feng (Jira)


 [ 
https://issues.apache.org/jira/browse/SPARK-35636?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Karen Feng updated SPARK-35636:
---
Description: 
Currently, lambda keys can be referenced outside of the lambda function:

{quote}Project [transform(keys#0, lambdafunction(_extract_v1#0, lambda key#0, 
false)) AS a#0]
+- 'Join Cross
:- Project [kvs#0[lambda key#0].v1 AS _extract_v1#0]
:  +- LocalRelation , [kvs#0]
+- LocalRelation , [keys#0]{quote}

This should be unchanged from the original state:

{quote}Project [transform(keys#418, lambdafunction(kvs#417[lambda key#420].v1, 
lambda key#420, false)) AS a#419]
+- Join Cross
:- LocalRelation , [kvs#417]
+- LocalRelation , [keys#418]{quote}
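The scoping problem mirrors hoisting a lambda-bound variable out of its lambda in ordinary code; a small Python illustration (not Spark code):

```python
kvs = {"k1": {"v1": 10}, "k2": {"v1": 20}}
keys = ["k1", "k2"]

# Correct: the field access kvs[key]["v1"] stays inside the lambda,
# where `key` is bound -- this mirrors the original (unrewritten) plan.
result = list(map(lambda key: kvs[key]["v1"], keys))

# The broken rewrite corresponds to evaluating kvs[key]["v1"] *outside*
# the lambda (the extra Project below the join in the plan above): there
# `key` is unbound, so the extraction is not a valid alias to push down.
```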

  was:
Currently, lambda keys can be referenced outside of the lambda function:

Project [transform(keys#0, lambdafunction(_extract_v1#0, lambda key#0, false)) 
AS a#0]
+- 'Join Cross
:- Project [kvs#0[lambda key#0].v1 AS _extract_v1#0]
:  +- LocalRelation , [kvs#0]
+- LocalRelation , [keys#0]

This should be unchanged from the original state:

Project [transform(keys#418, lambdafunction(kvs#417[lambda key#420].v1, lambda 
key#420, false)) AS a#419]
+- Join Cross
:- LocalRelation , [kvs#417]
+- LocalRelation , [keys#418]


> Do not push lambda variables out of lambda functions in NestedColumnAliasing
> 
>
> Key: SPARK-35636
> URL: https://issues.apache.org/jira/browse/SPARK-35636
> Project: Spark
>  Issue Type: Improvement
>  Components: SQL
>Affects Versions: 3.1.0
>Reporter: Karen Feng
>Priority: Major
>
> Currently, lambda keys can be referenced outside of the lambda function:
> {quote}Project [transform(keys#0, lambdafunction(_extract_v1#0, lambda key#0, 
> false)) AS a#0]
> +- 'Join Cross
> :- Project [kvs#0[lambda key#0].v1 AS _extract_v1#0]
> :  +- LocalRelation , [kvs#0]
> +- LocalRelation , [keys#0]{quote}
> This should be unchanged from the original state:
> {quote}Project [transform(keys#418, lambdafunction(kvs#417[lambda 
> key#420].v1, lambda key#420, false)) AS a#419]
> +- Join Cross
> :- LocalRelation , [kvs#417]
> +- LocalRelation , [keys#418]{quote}






[jira] [Created] (SPARK-35636) Do not push lambda variables out of lambda functions in NestedColumnAliasing

2021-06-03 Thread Karen Feng (Jira)
Karen Feng created SPARK-35636:
--

 Summary: Do not push lambda variables out of lambda functions in 
NestedColumnAliasing
 Key: SPARK-35636
 URL: https://issues.apache.org/jira/browse/SPARK-35636
 Project: Spark
  Issue Type: Improvement
  Components: SQL
Affects Versions: 3.1.0
Reporter: Karen Feng


Currently, lambda keys are referenced outside of the lambda function:

Project [transform(keys#0, lambdafunction(_extract_v1#0, lambda key#0, false)) 
AS a#0]
+- 'Join Cross
:- Project [kvs#0[lambda key#0].v1 AS _extract_v1#0]
:  +- LocalRelation , [kvs#0]
+- LocalRelation , [keys#0]

This should be unchanged from the original state:

Project [transform(keys#418, lambdafunction(kvs#417[lambda key#420].v1, lambda 
key#420, false)) AS a#419]
+- Join Cross
:- LocalRelation , [kvs#417]
+- LocalRelation , [keys#418]






[jira] [Updated] (SPARK-35636) Do not push lambda variables out of lambda functions in NestedColumnAliasing

2021-06-03 Thread Karen Feng (Jira)


 [ 
https://issues.apache.org/jira/browse/SPARK-35636?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Karen Feng updated SPARK-35636:
---
Description: 
Currently, lambda keys can be referenced outside of the lambda function:

Project [transform(keys#0, lambdafunction(_extract_v1#0, lambda key#0, false)) AS a#0]
+- 'Join Cross
:- Project [kvs#0[lambda key#0].v1 AS _extract_v1#0]
:  +- LocalRelation <empty>, [kvs#0]
+- LocalRelation <empty>, [keys#0]

This should be unchanged from the original state:

Project [transform(keys#418, lambdafunction(kvs#417[lambda key#420].v1, lambda key#420, false)) AS a#419]
+- Join Cross
:- LocalRelation <empty>, [kvs#417]
+- LocalRelation <empty>, [keys#418]

  was:
Currently, lambda keys are referenced outside of the lambda function:

Project [transform(keys#0, lambdafunction(_extract_v1#0, lambda key#0, false)) AS a#0]
+- 'Join Cross
:- Project [kvs#0[lambda key#0].v1 AS _extract_v1#0]
:  +- LocalRelation <empty>, [kvs#0]
+- LocalRelation <empty>, [keys#0]

This should be unchanged from the original state:

Project [transform(keys#418, lambdafunction(kvs#417[lambda key#420].v1, lambda key#420, false)) AS a#419]
+- Join Cross
:- LocalRelation <empty>, [kvs#417]
+- LocalRelation <empty>, [keys#418]


> Do not push lambda variables out of lambda functions in NestedColumnAliasing
> 
>
> Key: SPARK-35636
> URL: https://issues.apache.org/jira/browse/SPARK-35636
> Project: Spark
>  Issue Type: Improvement
>  Components: SQL
>Affects Versions: 3.1.0
>Reporter: Karen Feng
>Priority: Major
>
> Currently, lambda keys can be referenced outside of the lambda function:
> Project [transform(keys#0, lambdafunction(_extract_v1#0, lambda key#0, false)) AS a#0]
> +- 'Join Cross
> :- Project [kvs#0[lambda key#0].v1 AS _extract_v1#0]
> :  +- LocalRelation <empty>, [kvs#0]
> +- LocalRelation <empty>, [keys#0]
> This should be unchanged from the original state:
> Project [transform(keys#418, lambdafunction(kvs#417[lambda key#420].v1, lambda key#420, false)) AS a#419]
> +- Join Cross
> :- LocalRelation <empty>, [kvs#417]
> +- LocalRelation <empty>, [keys#418]






[jira] [Created] (SPARK-35194) Improve readability of NestedColumnAliasing

2021-04-22 Thread Karen Feng (Jira)
Karen Feng created SPARK-35194:
--

 Summary: Improve readability of NestedColumnAliasing
 Key: SPARK-35194
 URL: https://issues.apache.org/jira/browse/SPARK-35194
 Project: Spark
  Issue Type: Improvement
  Components: SQL
Affects Versions: 3.1.1
Reporter: Karen Feng


Refactor https://github.com/apache/spark/blob/6c587d262748a2b469a0786c244e2e555f5f5a74/sql/catalyst/src/main/scala/org/apache/spark/sql/catalyst/optimizer/NestedColumnAliasing.scala#L31 for readability.






[jira] [Created] (SPARK-35140) Establish error message guidelines

2021-04-19 Thread Karen Feng (Jira)
Karen Feng created SPARK-35140:
--

 Summary: Establish error message guidelines
 Key: SPARK-35140
 URL: https://issues.apache.org/jira/browse/SPARK-35140
 Project: Spark
  Issue Type: Improvement
  Components: Spark Core, SQL
Affects Versions: 3.1.0
Reporter: Karen Feng


The SPIP "Standardize Exception Messages in Spark" proposes three major improvements:

# Group error messages in dedicated files.
# Establish an error message guideline for developers.
# Improve error message quality.

The second step is to establish the error message guidelines. These were discussed in 
http://apache-spark-developers-list.1001551.n3.nabble.com/DISCUSS-Build-error-message-guideline-td31076.html
 and added to the website in https://github.com/apache/spark-website/pull/332. 
To increase visibility, the guidelines should be accessible from the PR 
template.
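As a rough sketch of the first improvement (grouping error messages in dedicated files), messages might live in one structured table keyed by error class and be formatted at the raise site. The class names, message templates, and helper below are illustrative assumptions, not Spark's actual implementation:

```python
# Hypothetical sketch: error messages grouped in one dedicated place,
# keyed by an error class, with parameters filled in at the raise site.
# Class names and templates here are made up for illustration.
ERROR_CLASSES = {
    "DIVIDE_BY_ZERO": "Division by zero.",
    "UNRESOLVED_COLUMN": "A column with name {name} cannot be resolved.",
}

def error_message(error_class: str, **params: str) -> str:
    # Look up the template for the class and substitute its parameters.
    return ERROR_CLASSES[error_class].format(**params)

msg = error_message("UNRESOLVED_COLUMN", name="`b`")
print(msg)  # A column with name `b` cannot be resolved.
```

Grouping messages this way makes it practical to review them all against a guideline at once, instead of auditing raise sites scattered across the codebase.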





