[jira] [Updated] (SPARK-44886) Introduce CLUSTER BY SQL clause to CREATE/REPLACE TABLE
[ https://issues.apache.org/jira/browse/SPARK-44886?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Terry Kim updated SPARK-44886: -- Description: This proposes to introduce CLUSTER BY clause to CREATE/REPLACE SQL syntax: {code:java} CREATE TABLE tbl(a int, b string) CLUSTER BY (a, b){code} This doesn't introduce a default implementation for clustering, but it's up to the catalog/datasource implementation to utilize the clustering information (e.g., Delta, Iceberg, etc.). was: This proposes to introduce CLUSTER BY clause to CREATE/REPLACE SQL syntax: {code:java} CREATE TABLE tbl(a int, b string) CLUSTER BY (a, b){code} There will not be an implementation, but it's up to the catalog implementation to utilize the clustering information (e.g., Delta, Iceberg, etc.). Note that specifying CLUSTER BY will throw an exception if the table being created is for v1 source or session catalog (e.g., v2 source w/ session catalog). > Introduce CLUSTER BY SQL clause to CREATE/REPLACE TABLE > --- > > Key: SPARK-44886 > URL: https://issues.apache.org/jira/browse/SPARK-44886 > Project: Spark > Issue Type: Sub-task > Components: SQL >Affects Versions: 4.0.0 >Reporter: Terry Kim >Priority: Major > > This proposes to introduce CLUSTER BY clause to CREATE/REPLACE SQL syntax: > {code:java} > CREATE TABLE tbl(a int, b string) CLUSTER BY (a, b){code} > This doesn't introduce a default implementation for clustering, but it's up > to the catalog/datasource implementation to utilize the clustering > information (e.g., Delta, Iceberg, etc.). -- This message was sent by Atlassian Jira (v8.20.10#820010) - To unsubscribe, e-mail: issues-unsubscr...@spark.apache.org For additional commands, e-mail: issues-h...@spark.apache.org
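Since the clause targets both CREATE and REPLACE, the REPLACE form would presumably mirror the CREATE example above (a sketch only; the exact grammar is whatever the SPARK-44886 parser change defines):

```sql
-- Sketch of the proposed syntax for both forms; clause placement is
-- assumed to match the CREATE TABLE example from the issue description.
CREATE TABLE tbl (a INT, b STRING) CLUSTER BY (a, b);
REPLACE TABLE tbl (a INT, b STRING) CLUSTER BY (a, b);
```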
[jira] [Created] (SPARK-45789) Support DESCRIBE TABLE for clustering columns
Terry Kim created SPARK-45789: - Summary: Support DESCRIBE TABLE for clustering columns Key: SPARK-45789 URL: https://issues.apache.org/jira/browse/SPARK-45789 Project: Spark Issue Type: Sub-task Components: SQL Affects Versions: 4.0.0 Reporter: Terry Kim
[jira] [Created] (SPARK-45788) Support SHOW CREATE TABLE for clustering columns
Terry Kim created SPARK-45788: - Summary: Support SHOW CREATE TABLE for clustering columns Key: SPARK-45788 URL: https://issues.apache.org/jira/browse/SPARK-45788 Project: Spark Issue Type: Sub-task Components: SQL Affects Versions: 4.0.0 Reporter: Terry Kim
[jira] [Created] (SPARK-45787) Support Catalog.listColumns() for clustering columns
Terry Kim created SPARK-45787: - Summary: Support Catalog.listColumns() for clustering columns Key: SPARK-45787 URL: https://issues.apache.org/jira/browse/SPARK-45787 Project: Spark Issue Type: Sub-task Components: SQL Affects Versions: 4.0.0 Reporter: Terry Kim Support Catalog.listColumns() for clustering columns so that `org.apache.spark.sql.catalog.Column` contains clustering info (e.g., isCluster).
[jira] [Updated] (SPARK-44886) Introduce CLUSTER BY SQL clause to CREATE/REPLACE table
[ https://issues.apache.org/jira/browse/SPARK-44886?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Terry Kim updated SPARK-44886: -- Summary: Introduce CLUSTER BY SQL clause to CREATE/REPLACE table (was: Introduce CLUSTER BY clause to CREATE/REPLACE table) > Introduce CLUSTER BY SQL clause to CREATE/REPLACE table > --- > > Key: SPARK-44886 > URL: https://issues.apache.org/jira/browse/SPARK-44886 > Project: Spark > Issue Type: Sub-task > Components: SQL >Affects Versions: 4.0.0 >Reporter: Terry Kim >Priority: Major > > This proposes to introduce CLUSTER BY clause to CREATE/REPLACE SQL syntax: > {code:java} > CREATE TABLE tbl(a int, b string) CLUSTER BY (a, b){code} > There will not be an implementation, but it's up to the catalog > implementation to utilize the clustering information (e.g., Delta, Iceberg, > etc.). > Note that specifying CLUSTER BY will throw an exception if the table being > created is for v1 source or session catalog (e.g., v2 source w/ session > catalog).
[jira] [Updated] (SPARK-45784) Introduce clustering mechanism to Spark
[ https://issues.apache.org/jira/browse/SPARK-45784?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Terry Kim updated SPARK-45784: -- Description: This proposes to introduce a clustering mechanism such that different data sources (e.g., Delta, Iceberg, etc.) can implement format-specific clustering. (was: This proposes to introduce CLUSTER BY clause to CREATE/REPLACE SQL syntax: {code:java} CREATE TABLE tbl(a int, b string) CLUSTER BY (a, b){code} There will not be an implementation, but it's up to the catalog implementation to utilize the clustering information (e.g., Delta, Iceberg, etc.). Note that specifying CLUSTER BY will throw an exception if the table being created is for v1 source or session catalog (e.g., v2 source w/ session catalog).) > Introduce clustering mechanism to Spark > --- > > Key: SPARK-45784 > URL: https://issues.apache.org/jira/browse/SPARK-45784 > Project: Spark > Issue Type: New Feature > Components: SQL >Affects Versions: 4.0.0 >Reporter: Terry Kim >Priority: Major > > This proposes to introduce a clustering mechanism such that different data > sources (e.g., Delta, Iceberg, etc.) can implement format-specific clustering.
[jira] [Updated] (SPARK-44886) Introduce CLUSTER BY SQL clause to CREATE/REPLACE TABLE
[ https://issues.apache.org/jira/browse/SPARK-44886?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Terry Kim updated SPARK-44886: -- Summary: Introduce CLUSTER BY SQL clause to CREATE/REPLACE TABLE (was: Introduce CLUSTER BY SQL clause to CREATE/REPLACE table) > Introduce CLUSTER BY SQL clause to CREATE/REPLACE TABLE > --- > > Key: SPARK-44886 > URL: https://issues.apache.org/jira/browse/SPARK-44886 > Project: Spark > Issue Type: Sub-task > Components: SQL >Affects Versions: 4.0.0 >Reporter: Terry Kim >Priority: Major > > This proposes to introduce CLUSTER BY clause to CREATE/REPLACE SQL syntax: > {code:java} > CREATE TABLE tbl(a int, b string) CLUSTER BY (a, b){code} > There will not be an implementation, but it's up to the catalog > implementation to utilize the clustering information (e.g., Delta, Iceberg, > etc.). > Note that specifying CLUSTER BY will throw an exception if the table being > created is for v1 source or session catalog (e.g., v2 source w/ session > catalog).
[jira] [Updated] (SPARK-44886) Introduce CLUSTER BY clause to CREATE/REPLACE table
[ https://issues.apache.org/jira/browse/SPARK-44886?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Terry Kim updated SPARK-44886: -- Parent: SPARK-45784 Issue Type: Sub-task (was: New Feature) > Introduce CLUSTER BY clause to CREATE/REPLACE table > --- > > Key: SPARK-44886 > URL: https://issues.apache.org/jira/browse/SPARK-44886 > Project: Spark > Issue Type: Sub-task > Components: SQL >Affects Versions: 4.0.0 >Reporter: Terry Kim >Priority: Major > > This proposes to introduce CLUSTER BY clause to CREATE/REPLACE SQL syntax: > {code:java} > CREATE TABLE tbl(a int, b string) CLUSTER BY (a, b){code} > There will not be an implementation, but it's up to the catalog > implementation to utilize the clustering information (e.g., Delta, Iceberg, > etc.). > Note that specifying CLUSTER BY will throw an exception if the table being > created is for v1 source or session catalog (e.g., v2 source w/ session > catalog).
[jira] [Created] (SPARK-45784) Introduce clustering mechanism to Spark
Terry Kim created SPARK-45784: - Summary: Introduce clustering mechanism to Spark Key: SPARK-45784 URL: https://issues.apache.org/jira/browse/SPARK-45784 Project: Spark Issue Type: New Feature Components: SQL Affects Versions: 4.0.0 Reporter: Terry Kim This proposes to introduce CLUSTER BY clause to CREATE/REPLACE SQL syntax: {code:java} CREATE TABLE tbl(a int, b string) CLUSTER BY (a, b){code} There will not be an implementation, but it's up to the catalog implementation to utilize the clustering information (e.g., Delta, Iceberg, etc.). Note that specifying CLUSTER BY will throw an exception if the table being created is for v1 source or session catalog (e.g., v2 source w/ session catalog).
[jira] [Updated] (SPARK-44886) Introduce CLUSTER BY clause to CREATE/REPLACE table
[ https://issues.apache.org/jira/browse/SPARK-44886?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Terry Kim updated SPARK-44886: -- Description: This proposes to introduce CLUSTER BY clause to CREATE/REPLACE SQL syntax: {code:java} CREATE TABLE tbl(a int, b string) CLUSTER BY (a, b){code} There will not be an implementation, but it's up to the catalog implementation to utilize the clustering information (e.g., Delta, Iceberg, etc.). Note that specifying CLUSTER BY will throw an exception if the table being created is for v1 source or session catalog (e.g., v2 source w/ session catalog). was:This proposes to introduce `CLUSTER BY` clause to CREATE/REPLACE SQL syntax. > Introduce CLUSTER BY clause to CREATE/REPLACE table > --- > > Key: SPARK-44886 > URL: https://issues.apache.org/jira/browse/SPARK-44886 > Project: Spark > Issue Type: New Feature > Components: SQL >Affects Versions: 4.0.0 >Reporter: Terry Kim >Priority: Major > > This proposes to introduce CLUSTER BY clause to CREATE/REPLACE SQL syntax: > {code:java} > CREATE TABLE tbl(a int, b string) CLUSTER BY (a, b){code} > There will not be an implementation, but it's up to the catalog > implementation to utilize the clustering information (e.g., Delta, Iceberg, > etc.). > Note that specifying CLUSTER BY will throw an exception if the table being > created is for v1 source or session catalog (e.g., v2 source w/ session > catalog).
[jira] [Created] (SPARK-44886) Introduce CLUSTER BY clause to CREATE/REPLACE table
Terry Kim created SPARK-44886: - Summary: Introduce CLUSTER BY clause to CREATE/REPLACE table Key: SPARK-44886 URL: https://issues.apache.org/jira/browse/SPARK-44886 Project: Spark Issue Type: New Feature Components: SQL Affects Versions: 4.0.0 Reporter: Terry Kim This proposes to introduce `CLUSTER BY` clause to CREATE/REPLACE SQL syntax.
[jira] [Updated] (SPARK-43346) Assign a name to the error class _LEGACY_ERROR_TEMP_1206
[ https://issues.apache.org/jira/browse/SPARK-43346?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Terry Kim updated SPARK-43346: -- Description: Choose a proper name for the error class *_LEGACY_ERROR_TEMP_1206* defined in {*}core/src/main/resources/error/error-classes.json{*}. The name should be short but complete (look at the example in error-classes.json). Add a test which triggers the error from user code if such a test doesn't already exist. Check the exception fields by using {*}checkError(){*}. That function checks only the valuable error fields and avoids depending on the error text message; this way, tech editors can modify the error format in error-classes.json without worrying about Spark's internal tests. Migrate other tests that might trigger the error onto checkError(). If you cannot reproduce the error from user space (using a SQL query), replace the error with an internal error; see {*}SparkException.internalError(){*}. Improve the error message format in error-classes.json if the current one is not clear. Propose to users a solution for how to avoid and fix such kinds of errors. Please look at the PRs below as examples: * [https://github.com/apache/spark/pull/38685] * [https://github.com/apache/spark/pull/38656] * [https://github.com/apache/spark/pull/38490] was: Choose a proper name for the error class *_LEGACY_ERROR_TEMP_0041* defined in {*}core/src/main/resources/error/error-classes.json{*}. The name should be short but complete (look at the example in error-classes.json). Add a test which triggers the error from user code if such a test doesn't already exist. Check the exception fields by using {*}checkError(){*}. That function checks only the valuable error fields and avoids depending on the error text message; this way, tech editors can modify the error format in error-classes.json without worrying about Spark's internal tests. Migrate other tests that might trigger the error onto checkError(). 
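The idea behind checkError() can be sketched in plain Python: assert on the structured error fields (error class and message parameters) instead of the rendered message text, so the message template can be reworded freely. The class and function names below are illustrative stand-ins, not Spark's actual API.

```python
class SparkLikeError(Exception):
    """Illustrative stand-in for an exception implementing SparkThrowable."""

    def __init__(self, error_class, message_parameters):
        self.error_class = error_class
        self.message_parameters = message_parameters
        # The rendered text is derived from a template and may change;
        # tests should not depend on it.
        super().__init__(f"[{error_class}] {message_parameters}")


def check_error(exc, error_class, message_parameters):
    # Assert only on the stable, structured fields, so editors can reword
    # the message template in error-classes.json without breaking tests.
    assert exc.error_class == error_class
    assert exc.message_parameters == message_parameters


# Usage: trigger the error, then verify its structured fields.
try:
    raise SparkLikeError("INVALID_LATERAL_JOIN_TYPE", {"joinType": "NATURAL"})
except SparkLikeError as e:
    check_error(e, "INVALID_LATERAL_JOIN_TYPE", {"joinType": "NATURAL"})
```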
If you cannot reproduce the error from user space (using a SQL query), replace the error with an internal error; see {*}SparkException.internalError(){*}. Improve the error message format in error-classes.json if the current one is not clear. Propose to users a solution for how to avoid and fix such kinds of errors. Please look at the PRs below as examples: * [https://github.com/apache/spark/pull/38685] * [https://github.com/apache/spark/pull/38656] * [https://github.com/apache/spark/pull/38490] > Assign a name to the error class _LEGACY_ERROR_TEMP_1206 > > > Key: SPARK-43346 > URL: https://issues.apache.org/jira/browse/SPARK-43346 > Project: Spark > Issue Type: Sub-task > Components: SQL >Affects Versions: 3.5.0 >Reporter: Terry Kim >Priority: Minor > Labels: starter > > Choose a proper name for the error class *_LEGACY_ERROR_TEMP_1206* defined in > {*}core/src/main/resources/error/error-classes.json{*}. The name should be > short but complete (look at the example in error-classes.json). > Add a test which triggers the error from user code if such a test doesn't > already exist. Check the exception fields by using {*}checkError(){*}. That > function checks only the valuable error fields and avoids depending on the > error text message; this way, tech editors can modify the error format in > error-classes.json without worrying about Spark's internal tests. Migrate other > tests that might trigger the error onto checkError(). > If you cannot reproduce the error from user space (using a SQL query), replace > the error with an internal error; see {*}SparkException.internalError(){*}. > Improve the error message format in error-classes.json if the current one is > not clear. Propose to users a solution for how to avoid and fix such kinds of > errors. 
> Please look at the PRs below as examples: > * [https://github.com/apache/spark/pull/38685] > * [https://github.com/apache/spark/pull/38656] > * [https://github.com/apache/spark/pull/38490]
[jira] [Created] (SPARK-43346) Assign a name to the error class _LEGACY_ERROR_TEMP_1206
Terry Kim created SPARK-43346: - Summary: Assign a name to the error class _LEGACY_ERROR_TEMP_1206 Key: SPARK-43346 URL: https://issues.apache.org/jira/browse/SPARK-43346 Project: Spark Issue Type: Sub-task Components: SQL Affects Versions: 3.5.0 Reporter: Terry Kim Choose a proper name for the error class *_LEGACY_ERROR_TEMP_0041* defined in {*}core/src/main/resources/error/error-classes.json{*}. The name should be short but complete (look at the example in error-classes.json). Add a test which triggers the error from user code if such a test doesn't already exist. Check the exception fields by using {*}checkError(){*}. That function checks only the valuable error fields and avoids depending on the error text message; this way, tech editors can modify the error format in error-classes.json without worrying about Spark's internal tests. Migrate other tests that might trigger the error onto checkError(). If you cannot reproduce the error from user space (using a SQL query), replace the error with an internal error; see {*}SparkException.internalError(){*}. Improve the error message format in error-classes.json if the current one is not clear. Propose to users a solution for how to avoid and fix such kinds of errors. Please look at the PRs below as examples: * [https://github.com/apache/spark/pull/38685] * [https://github.com/apache/spark/pull/38656] * [https://github.com/apache/spark/pull/38490]
[jira] [Updated] (SPARK-43345) Assign a name to the error class _LEGACY_ERROR_TEMP_0041
[ https://issues.apache.org/jira/browse/SPARK-43345?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Terry Kim updated SPARK-43345: -- Description: Choose a proper name for the error class *_LEGACY_ERROR_TEMP_0041* defined in {*}core/src/main/resources/error/error-classes.json{*}. The name should be short but complete (look at the example in error-classes.json). Add a test which triggers the error from user code if such a test doesn't already exist. Check the exception fields by using {*}checkError(){*}. That function checks only the valuable error fields and avoids depending on the error text message; this way, tech editors can modify the error format in error-classes.json without worrying about Spark's internal tests. Migrate other tests that might trigger the error onto checkError(). If you cannot reproduce the error from user space (using a SQL query), replace the error with an internal error; see {*}SparkException.internalError(){*}. Improve the error message format in error-classes.json if the current one is not clear. Propose to users a solution for how to avoid and fix such kinds of errors. Please look at the PRs below as examples: * [https://github.com/apache/spark/pull/38685] * [https://github.com/apache/spark/pull/38656] * [https://github.com/apache/spark/pull/38490] was: Choose a proper name for the error class *_LEGACY_ERROR_TEMP_2024* defined in {*}core/src/main/resources/error/error-classes.json{*}. The name should be short but complete (look at the example in error-classes.json). Add a test which triggers the error from user code if such a test doesn't already exist. Check the exception fields by using {*}checkError(){*}. That function checks only the valuable error fields and avoids depending on the error text message; this way, tech editors can modify the error format in error-classes.json without worrying about Spark's internal tests. Migrate other tests that might trigger the error onto checkError(). 
If you cannot reproduce the error from user space (using a SQL query), replace the error with an internal error; see {*}SparkException.internalError(){*}. Improve the error message format in error-classes.json if the current one is not clear. Propose to users a solution for how to avoid and fix such kinds of errors. Please look at the PRs below as examples: * [https://github.com/apache/spark/pull/38685] * [https://github.com/apache/spark/pull/38656] * [https://github.com/apache/spark/pull/38490] > Assign a name to the error class _LEGACY_ERROR_TEMP_0041 > > > Key: SPARK-43345 > URL: https://issues.apache.org/jira/browse/SPARK-43345 > Project: Spark > Issue Type: Sub-task > Components: SQL >Affects Versions: 3.5.0 >Reporter: Terry Kim >Priority: Minor > Labels: starter > > Choose a proper name for the error class *_LEGACY_ERROR_TEMP_0041* defined in > {*}core/src/main/resources/error/error-classes.json{*}. The name should be > short but complete (look at the example in error-classes.json). > Add a test which triggers the error from user code if such a test doesn't > already exist. Check the exception fields by using {*}checkError(){*}. That > function checks only the valuable error fields and avoids depending on the > error text message; this way, tech editors can modify the error format in > error-classes.json without worrying about Spark's internal tests. Migrate other > tests that might trigger the error onto checkError(). > If you cannot reproduce the error from user space (using a SQL query), replace > the error with an internal error; see {*}SparkException.internalError(){*}. > Improve the error message format in error-classes.json if the current one is > not clear. Propose to users a solution for how to avoid and fix such kinds of > errors. 
> Please look at the PRs below as examples: > * [https://github.com/apache/spark/pull/38685] > * [https://github.com/apache/spark/pull/38656] > * [https://github.com/apache/spark/pull/38490]
[jira] [Created] (SPARK-43345) Assign a name to the error class _LEGACY_ERROR_TEMP_0041
Terry Kim created SPARK-43345: - Summary: Assign a name to the error class _LEGACY_ERROR_TEMP_0041 Key: SPARK-43345 URL: https://issues.apache.org/jira/browse/SPARK-43345 Project: Spark Issue Type: Sub-task Components: SQL Affects Versions: 3.5.0 Reporter: Terry Kim Choose a proper name for the error class *_LEGACY_ERROR_TEMP_2024* defined in {*}core/src/main/resources/error/error-classes.json{*}. The name should be short but complete (look at the example in error-classes.json). Add a test which triggers the error from user code if such a test doesn't already exist. Check the exception fields by using {*}checkError(){*}. That function checks only the valuable error fields and avoids depending on the error text message; this way, tech editors can modify the error format in error-classes.json without worrying about Spark's internal tests. Migrate other tests that might trigger the error onto checkError(). If you cannot reproduce the error from user space (using a SQL query), replace the error with an internal error; see {*}SparkException.internalError(){*}. Improve the error message format in error-classes.json if the current one is not clear. Propose to users a solution for how to avoid and fix such kinds of errors. Please look at the PRs below as examples: * [https://github.com/apache/spark/pull/38685] * [https://github.com/apache/spark/pull/38656] * [https://github.com/apache/spark/pull/38490]
[jira] [Commented] (SPARK-37937) Use error classes in the parsing errors of lateral join
[ https://issues.apache.org/jira/browse/SPARK-37937?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17477384#comment-17477384 ] Terry Kim commented on SPARK-37937: --- I will work on this. Thanks [~maxgekk] for organizing this! > Use error classes in the parsing errors of lateral join > --- > > Key: SPARK-37937 > URL: https://issues.apache.org/jira/browse/SPARK-37937 > Project: Spark > Issue Type: Sub-task > Components: SQL >Affects Versions: 3.3.0 >Reporter: Max Gekk >Priority: Major > > Migrate the following errors in QueryParsingErrors: > * lateralJoinWithNaturalJoinUnsupportedError > * lateralJoinWithUsingJoinUnsupportedError > * unsupportedLateralJoinTypeError > * invalidLateralJoinRelationError > to use error classes. Throw an implementation of SparkThrowable. Also write > a test for every error in QueryParsingErrorsSuite.
[jira] [Commented] (SPARK-37479) Migrate DROP NAMESPACE to use V2 command by default
[ https://issues.apache.org/jira/browse/SPARK-37479?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17475542#comment-17475542 ] Terry Kim commented on SPARK-37479: --- OK, thanks! > Migrate DROP NAMESPACE to use V2 command by default > --- > > Key: SPARK-37479 > URL: https://issues.apache.org/jira/browse/SPARK-37479 > Project: Spark > Issue Type: Sub-task > Components: SQL >Affects Versions: 3.3.0 >Reporter: dch nguyen >Priority: Major >
[jira] [Commented] (SPARK-37890) Migrate DESCRIBE TABLE to use V2 command by default
[ https://issues.apache.org/jira/browse/SPARK-37890?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17475089#comment-17475089 ] Terry Kim commented on SPARK-37890: --- Working on this. > Migrate DESCRIBE TABLE to use V2 command by default > --- > > Key: SPARK-37890 > URL: https://issues.apache.org/jira/browse/SPARK-37890 > Project: Spark > Issue Type: Sub-task > Components: SQL >Affects Versions: 3.3.0 >Reporter: Terry Kim >Assignee: Terry Kim >Priority: Major > Fix For: 3.3.0 > > > Migrate DESCRIBE TABLE to use V2 command by default.
[jira] [Updated] (SPARK-37890) Migrate DESCRIBE TABLE to use V2 command by default
[ https://issues.apache.org/jira/browse/SPARK-37890?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Terry Kim updated SPARK-37890: -- Description: Migrate DESCRIBE TABLE to use V2 command by default. (was: Migrate DESCRIBE NAMESPACE to use V2 command by default.) > Migrate DESCRIBE TABLE to use V2 command by default > --- > > Key: SPARK-37890 > URL: https://issues.apache.org/jira/browse/SPARK-37890 > Project: Spark > Issue Type: Sub-task > Components: SQL >Affects Versions: 3.3.0 >Reporter: Terry Kim >Assignee: Terry Kim >Priority: Major > Fix For: 3.3.0 > > > Migrate DESCRIBE TABLE to use V2 command by default.
[jira] [Created] (SPARK-37890) Migrate DESCRIBE TABLE to use V2 command by default
Terry Kim created SPARK-37890: - Summary: Migrate DESCRIBE TABLE to use V2 command by default Key: SPARK-37890 URL: https://issues.apache.org/jira/browse/SPARK-37890 Project: Spark Issue Type: Sub-task Components: SQL Affects Versions: 3.3.0 Reporter: Terry Kim Assignee: Terry Kim Fix For: 3.3.0 Migrate DESCRIBE NAMESPACE to use V2 command by default.
[jira] [Commented] (SPARK-37888) Unify v1 and v2 DESCRIBE TABLE tests
[ https://issues.apache.org/jira/browse/SPARK-37888?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17475087#comment-17475087 ] Terry Kim commented on SPARK-37888: --- Yes, will work on this first. > Unify v1 and v2 DESCRIBE TABLE tests > > > Key: SPARK-37888 > URL: https://issues.apache.org/jira/browse/SPARK-37888 > Project: Spark > Issue Type: Sub-task > Components: SQL >Affects Versions: 3.3.0 >Reporter: Wenchen Fan >Priority: Major >
[jira] [Commented] (SPARK-37479) Migrate DROP NAMESPACE to use V2 command by default
[ https://issues.apache.org/jira/browse/SPARK-37479?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17475077#comment-17475077 ] Terry Kim commented on SPARK-37479: --- [~dchvn] Are you still working on this? If not, I will take it. Please let me know. Thanks! > Migrate DROP NAMESPACE to use V2 command by default > --- > > Key: SPARK-37479 > URL: https://issues.apache.org/jira/browse/SPARK-37479 > Project: Spark > Issue Type: Sub-task > Components: SQL >Affects Versions: 3.3.0 >Reporter: dch nguyen >Priority: Major >
[jira] [Updated] (SPARK-37804) Unify v1 and v2 CREATE NAMESPACE tests
[ https://issues.apache.org/jira/browse/SPARK-37804?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Terry Kim updated SPARK-37804: -- Description: Extract CREATE NAMESPACE tests to the common place to run them for V1 and v2 datasources. Some tests can be placed in V1- and V2-specific test suites. (was: Extract ALTER TABLE .. SET LOCATION tests to the common place to run them for V1 and v2 datasources. Some tests can be placed in V1- and V2-specific test suites.) > Unify v1 and v2 CREATE NAMESPACE tests > -- > > Key: SPARK-37804 > URL: https://issues.apache.org/jira/browse/SPARK-37804 > Project: Spark > Issue Type: Sub-task > Components: SQL >Affects Versions: 3.2.0 >Reporter: Terry Kim >Assignee: Terry Kim >Priority: Major > Fix For: 3.3.0 > > > Extract CREATE NAMESPACE tests to the common place to run them for V1 and v2 > datasources. Some tests can be placed in V1- and V2-specific test suites.
[jira] [Created] (SPARK-37804) Unify v1 and v2 CREATE NAMESPACE tests
Terry Kim created SPARK-37804: - Summary: Unify v1 and v2 CREATE NAMESPACE tests Key: SPARK-37804 URL: https://issues.apache.org/jira/browse/SPARK-37804 Project: Spark Issue Type: Sub-task Components: SQL Affects Versions: 3.2.0 Reporter: Terry Kim Assignee: Terry Kim Fix For: 3.3.0 Extract ALTER TABLE .. SET LOCATION tests to the common place to run them for V1 and v2 datasources. Some tests can be placed in V1- and V2-specific test suites.
[jira] [Created] (SPARK-37637) Migrate SHOW TABLES EXTENDED to use v2 command by default
Terry Kim created SPARK-37637: - Summary: Migrate SHOW TABLES EXTENDED to use v2 command by default Key: SPARK-37637 URL: https://issues.apache.org/jira/browse/SPARK-37637 Project: Spark Issue Type: Sub-task Components: SQL Affects Versions: 3.3.0 Reporter: Terry Kim Migrate SHOW TABLES EXTENDED to use v2 command by default
[jira] [Commented] (SPARK-37636) Migrate CREATE NAMESPACE to use v2 command by default
[ https://issues.apache.org/jira/browse/SPARK-37636?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17458907#comment-17458907 ] Terry Kim commented on SPARK-37636: --- Working on this. > Migrate CREATE NAMESPACE to use v2 command by default > - > > Key: SPARK-37636 > URL: https://issues.apache.org/jira/browse/SPARK-37636 > Project: Spark > Issue Type: Sub-task > Components: SQL >Affects Versions: 3.3.0 >Reporter: Terry Kim >Priority: Major > > Migrate CREATE NAMESPACE to use v2 command by default
[jira] [Commented] (SPARK-37637) Migrate SHOW TABLES EXTENDED to use v2 command by default
[ https://issues.apache.org/jira/browse/SPARK-37637?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17458908#comment-17458908 ] Terry Kim commented on SPARK-37637: --- Working on this. > Migrate SHOW TABLES EXTENDED to use v2 command by default > - > > Key: SPARK-37637 > URL: https://issues.apache.org/jira/browse/SPARK-37637 > Project: Spark > Issue Type: Sub-task > Components: SQL >Affects Versions: 3.3.0 >Reporter: Terry Kim >Priority: Major > > Migrate SHOW TABLES EXTENDED to use v2 command by default
[jira] [Created] (SPARK-37636) Migrate CREATE NAMESPACE to use v2 command by default
Terry Kim created SPARK-37636: - Summary: Migrate CREATE NAMESPACE to use v2 command by default Key: SPARK-37636 URL: https://issues.apache.org/jira/browse/SPARK-37636 Project: Spark Issue Type: Sub-task Components: SQL Affects Versions: 3.3.0 Reporter: Terry Kim Migrate CREATE NAMESPACE to use v2 command by default
[jira] [Closed] (SPARK-37606) ALTER NAMESPACE ... SET PROPERTIES v2 command should use existing properties in addition to new properties
[ https://issues.apache.org/jira/browse/SPARK-37606?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Terry Kim closed SPARK-37606. - > ALTER NAMESPACE ... SET PROPERTIES v2 command should use existing properties > in addition to new properties > -- > > Key: SPARK-37606 > URL: https://issues.apache.org/jira/browse/SPARK-37606 > Project: Spark > Issue Type: Sub-task > Components: SQL >Affects Versions: 3.3.0 >Reporter: Terry Kim >Priority: Major > > ALTER NAMESPACE ... SET PROPERTIES v2 command should use existing properties > in addition to new properties to be consistent with v1 behavior.
[jira] [Resolved] (SPARK-37606) ALTER NAMESPACE ... SET PROPERTIES v2 command should use existing properties in addition to new properties
[ https://issues.apache.org/jira/browse/SPARK-37606?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Terry Kim resolved SPARK-37606. --- Resolution: Won't Fix > ALTER NAMESPACE ... SET PROPERTIES v2 command should use existing properties > in addition to new properties > -- > > Key: SPARK-37606 > URL: https://issues.apache.org/jira/browse/SPARK-37606 > Project: Spark > Issue Type: Sub-task > Components: SQL >Affects Versions: 3.3.0 >Reporter: Terry Kim >Priority: Major > > ALTER NAMESPACE ... SET PROPERTIES v2 command should use existing properties > in addition to new properties to be consistent with v1 behavior.
[jira] [Created] (SPARK-37606) ALTER NAMESPACE ... SET PROPERTIES v2 command should use existing properties in addition to new properties
Terry Kim created SPARK-37606: - Summary: ALTER NAMESPACE ... SET PROPERTIES v2 command should use existing properties in addition to new properties Key: SPARK-37606 URL: https://issues.apache.org/jira/browse/SPARK-37606 Project: Spark Issue Type: Sub-task Components: SQL Affects Versions: 3.3.0 Reporter: Terry Kim ALTER NAMESPACE ... SET PROPERTIES v2 command should use existing properties in addition to new properties to be consistent with v1 behavior.
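A minimal sketch of the v1-consistent behavior the ticket asks for; the namespace and property names below are hypothetical, not from the ticket:

{code:java}
spark.sql("CREATE NAMESPACE ns WITH DBPROPERTIES ('owner' = 'terry')")
spark.sql("ALTER NAMESPACE ns SET PROPERTIES ('retention' = '30')")
// With v1 semantics, the namespace now carries both 'owner' and 'retention'.
// A v2 command that replaces the property map with only the newly supplied
// entries would silently drop 'owner'.
spark.sql("DESC NAMESPACE EXTENDED ns").show(false)
{code}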
[jira] [Created] (SPARK-37599) Unify v1 and v2 ALTER TABLE .. SET LOCATION tests
Terry Kim created SPARK-37599: - Summary: Unify v1 and v2 ALTER TABLE .. SET LOCATION tests Key: SPARK-37599 URL: https://issues.apache.org/jira/browse/SPARK-37599 Project: Spark Issue Type: Sub-task Components: SQL Affects Versions: 3.2.0 Reporter: Terry Kim Assignee: Terry Kim Fix For: 3.3.0 Extract ALTER NAMESPACE .. SET LOCATION tests to a common place to run them for v1 and v2 datasources. Some tests can be placed in v1- and v2-specific test suites.
[jira] [Updated] (SPARK-37599) Unify v1 and v2 ALTER TABLE .. SET LOCATION tests
[ https://issues.apache.org/jira/browse/SPARK-37599?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Terry Kim updated SPARK-37599: -- Description: Extract ALTER TABLE .. SET LOCATION tests to a common place to run them for v1 and v2 datasources. Some tests can be placed in v1- and v2-specific test suites. (was: Extract ALTER NAMESPACE .. SET LOCATION tests to the common place to run them for V1 and v2 datasources. Some tests can be places to V1 and V2 specific test suites.) > Unify v1 and v2 ALTER TABLE .. SET LOCATION tests > - > > Key: SPARK-37599 > URL: https://issues.apache.org/jira/browse/SPARK-37599 > Project: Spark > Issue Type: Sub-task > Components: SQL >Affects Versions: 3.2.0 >Reporter: Terry Kim >Assignee: Terry Kim >Priority: Major > Fix For: 3.3.0 > > > Extract ALTER TABLE .. SET LOCATION tests to a common place to run them for > v1 and v2 datasources. Some tests can be placed in v1- and v2-specific test > suites.
[jira] [Updated] (SPARK-34332) Unify v1 and v2 ALTER NAMESPACE .. SET LOCATION tests
[ https://issues.apache.org/jira/browse/SPARK-34332?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Terry Kim updated SPARK-34332: -- Summary: Unify v1 and v2 ALTER NAMESPACE .. SET LOCATION tests (was: Unify v1 and v2 ALTER TABLE .. SET LOCATION tests) > Unify v1 and v2 ALTER NAMESPACE .. SET LOCATION tests > - > > Key: SPARK-34332 > URL: https://issues.apache.org/jira/browse/SPARK-34332 > Project: Spark > Issue Type: Sub-task > Components: SQL >Affects Versions: 3.2.0 >Reporter: Max Gekk >Assignee: Terry Kim >Priority: Major > Fix For: 3.3.0 > > > Extract ALTER TABLE .. SET LOCATION tests to a common place to run them for > v1 and v2 datasources. Some tests can be placed in v1- and v2-specific test > suites.
[jira] [Updated] (SPARK-34332) Unify v1 and v2 ALTER NAMESPACE .. SET LOCATION tests
[ https://issues.apache.org/jira/browse/SPARK-34332?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Terry Kim updated SPARK-34332: -- Description: Extract ALTER NAMESPACE .. SET LOCATION tests to a common place to run them for v1 and v2 datasources. Some tests can be placed in v1- and v2-specific test suites. (was: Extract ALTER TABLE .. SET LOCATION tests to the common place to run them for V1 and v2 datasources. Some tests can be places to V1 and V2 specific test suites.) > Unify v1 and v2 ALTER NAMESPACE .. SET LOCATION tests > - > > Key: SPARK-34332 > URL: https://issues.apache.org/jira/browse/SPARK-34332 > Project: Spark > Issue Type: Sub-task > Components: SQL >Affects Versions: 3.2.0 >Reporter: Max Gekk >Assignee: Terry Kim >Priority: Major > Fix For: 3.3.0 > > > Extract ALTER NAMESPACE .. SET LOCATION tests to a common place to run them > for v1 and v2 datasources. Some tests can be placed in v1- and v2-specific > test suites.
[jira] [Created] (SPARK-37590) Unify v1 and v2 ALTER NAMESPACE ... SET PROPERTIES tests
Terry Kim created SPARK-37590: - Summary: Unify v1 and v2 ALTER NAMESPACE ... SET PROPERTIES tests Key: SPARK-37590 URL: https://issues.apache.org/jira/browse/SPARK-37590 Project: Spark Issue Type: Sub-task Components: SQL Affects Versions: 3.3.0 Reporter: Terry Kim Unify v1 and v2 ALTER NAMESPACE ... SET PROPERTIES tests
[jira] [Updated] (SPARK-37545) V2 CreateTableAsSelect command should qualify location
[ https://issues.apache.org/jira/browse/SPARK-37545?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Terry Kim updated SPARK-37545: -- Description: V2 CreateTableAsSelect command should qualify location. Currently, {code:java} spark.sql("CREATE TABLE testcat.t USING foo LOCATION '/tmp/foo' AS SELECT id FROM source") spark.sql("DESCRIBE EXTENDED testcat.t").show(false) {code} displays the location as `/tmp/foo` whereas V1 command displays/stores it as qualified (`[file:/tmp/foo|file:///tmp/foo]`). was: V2 CreateTableAsSelect command should qualify location. Currently, {code:java} spark.sql("CREATE TABLE testcat.t USING foo LOCATION '/tmp/foo' AS SELECT id FROM source") spark.sql("DESCRIBE EXTENDED $identifier").show(false) {code} displays the location as `/tmp/foo` whereas V1 command displays/stores it as qualified (`file:/tmp/foo`). > V2 CreateTableAsSelect command should qualify location > -- > > Key: SPARK-37545 > URL: https://issues.apache.org/jira/browse/SPARK-37545 > Project: Spark > Issue Type: Bug > Components: SQL >Affects Versions: 3.3.0 >Reporter: Terry Kim >Priority: Major > > V2 CreateTableAsSelect command should qualify location. Currently, > > {code:java} > spark.sql("CREATE TABLE testcat.t USING foo LOCATION '/tmp/foo' AS SELECT id > FROM source") > spark.sql("DESCRIBE EXTENDED testcat.t").show(false) > {code} > displays the location as `/tmp/foo` whereas V1 command displays/stores it as > qualified (`[file:/tmp/foo|file:///tmp/foo]`). >
[jira] [Created] (SPARK-37545) V2 CreateTableAsSelect command should qualify location
Terry Kim created SPARK-37545: - Summary: V2 CreateTableAsSelect command should qualify location Key: SPARK-37545 URL: https://issues.apache.org/jira/browse/SPARK-37545 Project: Spark Issue Type: Bug Components: SQL Affects Versions: 3.3.0 Reporter: Terry Kim V2 CreateTableAsSelect command should qualify location. Currently, {code:java} spark.sql("CREATE TABLE testcat.t USING foo LOCATION '/tmp/foo' AS SELECT id FROM source") spark.sql("DESCRIBE EXTENDED $identifier").show(false) {code} displays the location as `/tmp/foo` whereas V1 command displays/stores it as qualified (`file:/tmp/foo`).
[jira] [Created] (SPARK-37456) CREATE NAMESPACE should qualify location for v2 command
Terry Kim created SPARK-37456: - Summary: CREATE NAMESPACE should qualify location for v2 command Key: SPARK-37456 URL: https://issues.apache.org/jira/browse/SPARK-37456 Project: Spark Issue Type: Sub-task Components: SQL Affects Versions: 3.3.0 Reporter: Terry Kim CREATE NAMESPACE should qualify location for v2 command
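A hedged sketch of the expected behavior, mirroring the SPARK-37545 table case; the namespace name and path are illustrative:

{code:java}
spark.sql("CREATE NAMESPACE ns LOCATION '/tmp/ns'")
// The v2 command should store the qualified form (e.g. 'file:/tmp/ns'),
// matching the v1 command, rather than the raw '/tmp/ns'.
spark.sql("DESC NAMESPACE EXTENDED ns").show(false)
{code}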
[jira] [Updated] (SPARK-37444) ALTER NAMESPACE ... SET LOCATION should handle empty location consistently across v1 and v2 command
[ https://issues.apache.org/jira/browse/SPARK-37444?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Terry Kim updated SPARK-37444: -- Description: ALTER NAMESPACE ... SET LOCATION should handle empty location consistently across v1 and v2 command (was: ALTER NAMESPACE ... SET LOCATION handles empty location consistently across v1 and v2 command) > ALTER NAMESPACE ... SET LOCATION should handle empty location consistently > across v1 and v2 command > --- > > Key: SPARK-37444 > URL: https://issues.apache.org/jira/browse/SPARK-37444 > Project: Spark > Issue Type: Sub-task > Components: SQL >Affects Versions: 3.3.0 >Reporter: Terry Kim >Priority: Major > > ALTER NAMESPACE ... SET LOCATION should handle empty location consistently > across v1 and v2 command
[jira] [Updated] (SPARK-37444) ALTER NAMESPACE ... SET LOCATION should handle empty location consistently across v1 and v2 command
[ https://issues.apache.org/jira/browse/SPARK-37444?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Terry Kim updated SPARK-37444: -- Summary: ALTER NAMESPACE ... SET LOCATION should handle empty location consistently across v1 and v2 command (was: ALTER NAMESPACE ... SET LOCATION handles empty location consistently across v1 and v2 command) > ALTER NAMESPACE ... SET LOCATION should handle empty location consistently > across v1 and v2 command > --- > > Key: SPARK-37444 > URL: https://issues.apache.org/jira/browse/SPARK-37444 > Project: Spark > Issue Type: Sub-task > Components: SQL >Affects Versions: 3.3.0 >Reporter: Terry Kim >Priority: Major > > ALTER NAMESPACE ... SET LOCATION handles empty location consistently across > v1 and v2 command
[jira] [Created] (SPARK-37444) ALTER NAMESPACE ... SET LOCATION handles empty location consistently across v1 and v2 command
Terry Kim created SPARK-37444: - Summary: ALTER NAMESPACE ... SET LOCATION handles empty location consistently across v1 and v2 command Key: SPARK-37444 URL: https://issues.apache.org/jira/browse/SPARK-37444 Project: Spark Issue Type: Sub-task Components: SQL Affects Versions: 3.3.0 Reporter: Terry Kim ALTER NAMESPACE ... SET LOCATION handles empty location consistently across v1 and v2 command
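The inconsistency can be reproduced with an empty location string; this is a sketch with an illustrative namespace name, and the exact error is not specified in the ticket:

{code:java}
spark.sql("CREATE NAMESPACE ns")
// The v1 command rejects an empty location with an error; the v2 command
// should behave the same instead of silently accepting it.
spark.sql("ALTER NAMESPACE ns SET LOCATION ''")
{code}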
[jira] [Commented] (SPARK-34332) Unify v1 and v2 ALTER TABLE .. SET LOCATION tests
[ https://issues.apache.org/jira/browse/SPARK-34332?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17442935#comment-17442935 ] Terry Kim commented on SPARK-34332: --- [~maxgekk] I will work on this. > Unify v1 and v2 ALTER TABLE .. SET LOCATION tests > - > > Key: SPARK-34332 > URL: https://issues.apache.org/jira/browse/SPARK-34332 > Project: Spark > Issue Type: Sub-task > Components: SQL >Affects Versions: 3.2.0 >Reporter: Max Gekk >Assignee: Max Gekk >Priority: Major > Fix For: 3.3.0 > > > Extract ALTER TABLE .. SET LOCATION tests to a common place to run them for > v1 and v2 datasources. Some tests can be placed in v1- and v2-specific test > suites.
[jira] [Commented] (SPARK-37310) Migrate ALTER NAMESPACE ... SET PROPERTIES to use v2 command by default
[ https://issues.apache.org/jira/browse/SPARK-37310?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17442925#comment-17442925 ] Terry Kim commented on SPARK-37310: --- Working on this. > Migrate ALTER NAMESPACE ... SET PROPERTIES to use v2 command by default > --- > > Key: SPARK-37310 > URL: https://issues.apache.org/jira/browse/SPARK-37310 > Project: Spark > Issue Type: Sub-task > Components: SQL >Affects Versions: 3.3.0 >Reporter: Terry Kim >Priority: Major > > Migrate ALTER NAMESPACE ... SET PROPERTIES to use v2 command by default
[jira] [Commented] (SPARK-37311) Migrate ALTER NAMESPACE ... SET LOCATION to use v2 command by default
[ https://issues.apache.org/jira/browse/SPARK-37311?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17442926#comment-17442926 ] Terry Kim commented on SPARK-37311: --- Working on this. > Migrate ALTER NAMESPACE ... SET LOCATION to use v2 command by default > - > > Key: SPARK-37311 > URL: https://issues.apache.org/jira/browse/SPARK-37311 > Project: Spark > Issue Type: Sub-task > Components: SQL >Affects Versions: 3.3.0 >Reporter: Terry Kim >Priority: Major > > Migrate ALTER NAMESPACE ... SET LOCATION to use v2 command by default
[jira] [Created] (SPARK-37311) Migrate ALTER NAMESPACE ... SET LOCATION to use v2 command by default
Terry Kim created SPARK-37311: - Summary: Migrate ALTER NAMESPACE ... SET LOCATION to use v2 command by default Key: SPARK-37311 URL: https://issues.apache.org/jira/browse/SPARK-37311 Project: Spark Issue Type: Sub-task Components: SQL Affects Versions: 3.3.0 Reporter: Terry Kim Migrate ALTER NAMESPACE ... SET LOCATION to use v2 command by default
[jira] [Created] (SPARK-37310) Migrate ALTER NAMESPACE ... SET PROPERTIES to use v2 command by default
Terry Kim created SPARK-37310: - Summary: Migrate ALTER NAMESPACE ... SET PROPERTIES to use v2 command by default Key: SPARK-37310 URL: https://issues.apache.org/jira/browse/SPARK-37310 Project: Spark Issue Type: Sub-task Components: SQL Affects Versions: 3.3.0 Reporter: Terry Kim Migrate ALTER NAMESPACE ... SET PROPERTIES to use v2 command by default
[jira] [Commented] (SPARK-37192) Migrate SHOW TBLPROPERTIES to use V2 command by default
[ https://issues.apache.org/jira/browse/SPARK-37192?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17437120#comment-17437120 ] Terry Kim commented on SPARK-37192: --- Yes, go for it! Thanks! > Migrate SHOW TBLPROPERTIES to use V2 command by default > --- > > Key: SPARK-37192 > URL: https://issues.apache.org/jira/browse/SPARK-37192 > Project: Spark > Issue Type: Sub-task > Components: SQL >Affects Versions: 3.3.0 >Reporter: PengLei >Priority: Major > Fix For: 3.3.0 > > > Migrate SHOW TBLPROPERTIES to use V2 command by default
[jira] [Created] (SPARK-37150) Migrate DESCRIBE NAMESPACE to use V2 command by default
Terry Kim created SPARK-37150: - Summary: Migrate DESCRIBE NAMESPACE to use V2 command by default Key: SPARK-37150 URL: https://issues.apache.org/jira/browse/SPARK-37150 Project: Spark Issue Type: Sub-task Components: SQL Affects Versions: 3.3.0 Reporter: Terry Kim Migrate DESCRIBE NAMESPACE to use V2 command by default.
[jira] [Created] (SPARK-37031) Unify v1 and v2 DESCRIBE NAMESPACE tests
Terry Kim created SPARK-37031: - Summary: Unify v1 and v2 DESCRIBE NAMESPACE tests Key: SPARK-37031 URL: https://issues.apache.org/jira/browse/SPARK-37031 Project: Spark Issue Type: Sub-task Components: SQL Affects Versions: 3.3.0 Reporter: Terry Kim Extract DESCRIBE NAMESPACE tests to a common place to run them for v1 and v2 datasources. Some tests can be placed in v1- and v2-specific test suites.
[jira] [Created] (SPARK-36982) Migrate SHOW NAMESPACES to use V2 command by default
Terry Kim created SPARK-36982: - Summary: Migrate SHOW NAMESPACES to use V2 command by default Key: SPARK-36982 URL: https://issues.apache.org/jira/browse/SPARK-36982 Project: Spark Issue Type: Sub-task Components: SQL Affects Versions: 3.3.0 Reporter: Terry Kim Migrate SHOW NAMESPACES to use V2 command by default.
[jira] [Commented] (SPARK-36678) Migrate SHOW TABLES to use V2 command by default
[ https://issues.apache.org/jira/browse/SPARK-36678?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17421120#comment-17421120 ] Terry Kim commented on SPARK-36678: --- Got it, thanks. > Migrate SHOW TABLES to use V2 command by default > > > Key: SPARK-36678 > URL: https://issues.apache.org/jira/browse/SPARK-36678 > Project: Spark > Issue Type: Sub-task > Components: SQL >Affects Versions: 3.3.0 >Reporter: Terry Kim >Priority: Major > > Migrate SHOW TABLES to use V2 command by default.
[jira] [Comment Edited] (SPARK-36586) Migrate all ParsedStatement to the new v2 command framework
[ https://issues.apache.org/jira/browse/SPARK-36586?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17421106#comment-17421106 ] Terry Kim edited comment on SPARK-36586 at 9/28/21, 1:51 AM: - [~cloud_fan] totally missed this ping (never got an email notification). :) Looks like the work has started and I will also chip in. Thanks! was (Author: imback82): [~cloud_fan] totally missed this ping. :) Looks like the work has started and I will also chip in. Thanks! > Migrate all ParsedStatement to the new v2 command framework > --- > > Key: SPARK-36586 > URL: https://issues.apache.org/jira/browse/SPARK-36586 > Project: Spark > Issue Type: Improvement > Components: SQL >Affects Versions: 3.3.0 >Reporter: Wenchen Fan >Priority: Major > > The ParsedStatement needs to be pattern matched in two analyzer rules and > results in a lot of duplicated code. > The new v2 command framework defines a few basic logical plan nodes such as > UnresolvedTable, and we only need to resolve these basic nodes, and pattern > match v2 commands only once in the rule `ResolveSessionCatalog` for v1 > command fallback. > We should migrate all the ParsedStatement to the v2 command framework.
[jira] [Commented] (SPARK-36586) Migrate all ParsedStatement to the new v2 command framework
[ https://issues.apache.org/jira/browse/SPARK-36586?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17421106#comment-17421106 ] Terry Kim commented on SPARK-36586: --- [~cloud_fan] totally missed this ping. :) Looks like the work has started and I will also chip in. Thanks! > Migrate all ParsedStatement to the new v2 command framework > --- > > Key: SPARK-36586 > URL: https://issues.apache.org/jira/browse/SPARK-36586 > Project: Spark > Issue Type: Improvement > Components: SQL >Affects Versions: 3.3.0 >Reporter: Wenchen Fan >Priority: Major > > The ParsedStatement needs to be pattern matched in two analyzer rules and > results in a lot of duplicated code. > The new v2 command framework defines a few basic logical plan nodes such as > UnresolvedTable, and we only need to resolve these basic nodes, and pattern > match v2 commands only once in the rule `ResolveSessionCatalog` for v1 > command fallback. > We should migrate all the ParsedStatement to the v2 command framework.
[jira] [Commented] (SPARK-36678) Migrate SHOW TABLES to use V2 command by default
[ https://issues.apache.org/jira/browse/SPARK-36678?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17410872#comment-17410872 ] Terry Kim commented on SPARK-36678: --- [~cloud_fan], this command may require some catalog API changes to handle temp views correctly. For example, we need to know whether the identifiers returned by "listTables" refer to tables or temp views. However, I believe v2 TableCatalog APIs are not supposed to support temp views. Any suggestion? > Migrate SHOW TABLES to use V2 command by default > > > Key: SPARK-36678 > URL: https://issues.apache.org/jira/browse/SPARK-36678 > Project: Spark > Issue Type: Sub-task > Components: SQL >Affects Versions: 3.3.0 >Reporter: Terry Kim >Priority: Major > > Migrate SHOW TABLES to use V2 command by default.
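A sketch of the ambiguity described above; the catalog, table, and view names are hypothetical:

{code:java}
spark.sql("CREATE TABLE testcat.ns.t (id INT) USING foo")
spark.sql("CREATE TEMP VIEW v AS SELECT 1 AS id")
// SHOW TABLES reports an isTemporary column, but TableCatalog.listTables
// returns only identifiers, so a v2 implementation cannot tell a table
// apart from a temp view without additional catalog API support.
spark.sql("SHOW TABLES").show(false)
{code}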
[jira] [Created] (SPARK-36678) Migrate SHOW TABLES to use V2 command by default
Terry Kim created SPARK-36678: - Summary: Migrate SHOW TABLES to use V2 command by default Key: SPARK-36678 URL: https://issues.apache.org/jira/browse/SPARK-36678 Project: Spark Issue Type: Sub-task Components: SQL Affects Versions: 3.3.0 Reporter: Terry Kim Migrate SHOW TABLES to use V2 command by default.
[jira] [Commented] (SPARK-36588) Use v2 commands by default
[ https://issues.apache.org/jira/browse/SPARK-36588?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17404569#comment-17404569 ] Terry Kim commented on SPARK-36588: --- Yes. Thanks [~cloud_fan]. > Use v2 commands by default > -- > > Key: SPARK-36588 > URL: https://issues.apache.org/jira/browse/SPARK-36588 > Project: Spark > Issue Type: Improvement > Components: SQL >Affects Versions: 3.3.0 >Reporter: Wenchen Fan >Priority: Major > > It's been a while since we introduced the v2 commands, and I think it's time > to use v2 commands by default even for the session catalog, with a legacy > config to fall back to the v1 commands. > We can do this one command at a time, with tests for both the v1 and v2 > versions. The tests should help us understand the behavior difference between > v1 and v2 commands, so that we can: > # fix the v2 commands to match the v1 behavior > # or accept the behavior difference and write a migration guide > We can reuse the test framework built in > https://issues.apache.org/jira/browse/SPARK-33381
[jira] [Created] (SPARK-36450) Remove unused UnresolvedV2Relation
Terry Kim created SPARK-36450: - Summary: Remove unused UnresolvedV2Relation Key: SPARK-36450 URL: https://issues.apache.org/jira/browse/SPARK-36450 Project: Spark Issue Type: Improvement Components: SQL Affects Versions: 3.3.0 Reporter: Terry Kim Now that all the commands that use UnresolvedV2Relation have been migrated to use UnresolvedTable and UnresolvedView, it can be removed.
[jira] [Updated] (SPARK-36449) ALTER TABLE REPLACE COLUMNS should check duplicates for the specified columns for v2 command
[ https://issues.apache.org/jira/browse/SPARK-36449?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Terry Kim updated SPARK-36449: -- Description: ALTER TABLE REPLACE COLUMNS currently doesn't check duplicates for the specified columns for v2 command. (was: ALTER TABLE ADD COLUMNS currently doesn't check duplicates for the specified columns for v2 command.) > ALTER TABLE REPLACE COLUMNS should check duplicates for the specified columns > for v2 command > > > Key: SPARK-36449 > URL: https://issues.apache.org/jira/browse/SPARK-36449 > Project: Spark > Issue Type: Bug > Components: SQL >Affects Versions: 3.3.0 >Reporter: Terry Kim >Assignee: Terry Kim >Priority: Major > Fix For: 3.2.0 > > > ALTER TABLE REPLACE COLUMNS currently doesn't check duplicates for the > specified columns for v2 command.
[jira] [Created] (SPARK-36449) ALTER TABLE REPLACE COLUMNS should check duplicates for the specified columns for v2 command
Terry Kim created SPARK-36449: - Summary: ALTER TABLE REPLACE COLUMNS should check duplicates for the specified columns for v2 command Key: SPARK-36449 URL: https://issues.apache.org/jira/browse/SPARK-36449 Project: Spark Issue Type: Bug Components: SQL Affects Versions: 3.3.0 Reporter: Terry Kim Assignee: Terry Kim Fix For: 3.2.0 ALTER TABLE ADD COLUMNS currently doesn't check duplicates for the specified columns for v2 command.
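For example, the following should fail with a duplicate-column error once the check is in place; the table name is illustrative:

{code:java}
// Specifies column 'a' twice; the v2 command should reject this instead of
// passing it through to the catalog.
spark.sql("ALTER TABLE testcat.t REPLACE COLUMNS (a INT, a STRING)")
{code}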
[jira] [Created] (SPARK-36372) ALTER TABLE ADD COLUMNS should check duplicates for the specified columns for v2 command
Terry Kim created SPARK-36372: - Summary: ALTER TABLE ADD COLUMNS should check duplicates for the specified columns for v2 command Key: SPARK-36372 URL: https://issues.apache.org/jira/browse/SPARK-36372 Project: Spark Issue Type: Bug Components: SQL Affects Versions: 3.3.0 Reporter: Terry Kim ALTER TABLE ADD COLUMNS currently doesn't check duplicates for the specified columns for v2 command.
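A sketch of the missing check; the table name is illustrative:

{code:java}
// Specifies column 'b' twice; this should be rejected with a duplicate-column
// error, but the v2 command currently does not perform this validation.
spark.sql("ALTER TABLE testcat.t ADD COLUMNS (b INT, b STRING)")
{code}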
[jira] [Created] (SPARK-36006) Migrate ALTER TABLE ADD/RENAME COLUMNS command to the new resolution framework
Terry Kim created SPARK-36006: - Summary: Migrate ALTER TABLE ADD/RENAME COLUMNS command to the new resolution framework Key: SPARK-36006 URL: https://issues.apache.org/jira/browse/SPARK-36006 Project: Spark Issue Type: Sub-task Components: SQL Affects Versions: 3.2.0, 3.3.0 Reporter: Terry Kim Migrate ALTER TABLE ADD/RENAME COLUMNS command to the new resolution framework
[jira] [Resolved] (SPARK-30283) V2 Command logical plan should use UnresolvedV2Relation for a table
[ https://issues.apache.org/jira/browse/SPARK-30283?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Terry Kim resolved SPARK-30283. --- Resolution: Duplicate Already resolved as separate subtasks. > V2 Command logical plan should use UnresolvedV2Relation for a table > --- > > Key: SPARK-30283 > URL: https://issues.apache.org/jira/browse/SPARK-30283 > Project: Spark > Issue Type: Sub-task > Components: SQL >Affects Versions: 3.0.0 >Reporter: Terry Kim >Priority: Major > > For the following v2 commands, multi-part names are directly passed to the > command without looking up temp views, thus they are always resolved to > tables: > * DROP TABLE > * REFRESH TABLE > * RENAME TABLE > * REPLACE TABLE > They should be updated to have UnresolvedV2Relation such that temp views are > looked up first in Analyzer.ResolveTables. -- This message was sent by Atlassian Jira (v8.3.4#803005) - To unsubscribe, e-mail: issues-unsubscr...@spark.apache.org For additional commands, e-mail: issues-h...@spark.apache.org
[jira] [Resolved] (SPARK-30535) Migrate ALTER TABLE commands to the new resolution framework
[ https://issues.apache.org/jira/browse/SPARK-30535?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Terry Kim resolved SPARK-30535. --- Resolution: Duplicate Will be done as a separate subtask for each command. > Migrate ALTER TABLE commands to the new resolution framework > > > Key: SPARK-30535 > URL: https://issues.apache.org/jira/browse/SPARK-30535 > Project: Spark > Issue Type: Sub-task > Components: SQL >Affects Versions: 3.0.0 >Reporter: Terry Kim >Priority: Major > > Migrate ALTER TABLE commands to the new resolution framework introduced in > SPARK-30214 -- This message was sent by Atlassian Jira (v8.3.4#803005) - To unsubscribe, e-mail: issues-unsubscr...@spark.apache.org For additional commands, e-mail: issues-h...@spark.apache.org
[jira] [Resolved] (SPARK-35403) Migrate ALTER TABLE commands that alter columns to new framework
[ https://issues.apache.org/jira/browse/SPARK-35403?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Terry Kim resolved SPARK-35403. --- Resolution: Duplicate Will be done as a separate subtask for each command. > Migrate ALTER TABLE commands that alter columns to new framework > > > Key: SPARK-35403 > URL: https://issues.apache.org/jira/browse/SPARK-35403 > Project: Spark > Issue Type: Sub-task > Components: SQL >Affects Versions: 3.2.0 >Reporter: Terry Kim >Priority: Major > > Migrate the following ALTER TABLE commands: > * ALTER TABLE ... ADD COLUMNS > * ALTER TABLE ... REPLACE COLUMNS > * ALTER TABLE ... ALTER COLUMN > * ALTER TABLE ... RENAME COLUMN > * ALTER TABLE ... DROP COLUMNS -- This message was sent by Atlassian Jira (v8.3.4#803005) - To unsubscribe, e-mail: issues-unsubscr...@spark.apache.org For additional commands, e-mail: issues-h...@spark.apache.org
[jira] [Created] (SPARK-35883) Migrate ALTER TABLE rename column command to the new resolution framework
Terry Kim created SPARK-35883: - Summary: Migrate ALTER TABLE rename column command to the new resolution framework Key: SPARK-35883 URL: https://issues.apache.org/jira/browse/SPARK-35883 Project: Spark Issue Type: Sub-task Components: SQL Affects Versions: 3.2.0 Reporter: Terry Kim Migrate ALTER TABLE rename column command to the new resolution framework -- This message was sent by Atlassian Jira (v8.3.4#803005) - To unsubscribe, e-mail: issues-unsubscr...@spark.apache.org For additional commands, e-mail: issues-h...@spark.apache.org
[jira] [Updated] (SPARK-35403) Migrate ALTER TABLE commands that alter columns to new framework
[ https://issues.apache.org/jira/browse/SPARK-35403?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Terry Kim updated SPARK-35403: -- Description: Migrate the following ALTER TABLE commands: * ALTER TABLE ... ADD COLUMNS * ALTER TABLE ... REPLACE COLUMNS * ALTER TABLE ... ALTER COLUMN * ALTER TABLE ... RENAME COLUMN * ALTER TABLE ... DROP COLUMNS > Migrate ALTER TABLE commands that alter columns to new framework > > > Key: SPARK-35403 > URL: https://issues.apache.org/jira/browse/SPARK-35403 > Project: Spark > Issue Type: Sub-task > Components: SQL >Affects Versions: 3.2.0 >Reporter: Terry Kim >Priority: Major > > Migrate the following ALTER TABLE commands: > * ALTER TABLE ... ADD COLUMNS > * ALTER TABLE ... REPLACE COLUMNS > * ALTER TABLE ... ALTER COLUMN > * ALTER TABLE ... RENAME COLUMN > * ALTER TABLE ... DROP COLUMNS -- This message was sent by Atlassian Jira (v8.3.4#803005) - To unsubscribe, e-mail: issues-unsubscr...@spark.apache.org For additional commands, e-mail: issues-h...@spark.apache.org
[jira] [Created] (SPARK-35403) Migrate ALTER TABLE commands that alter columns to new framework
Terry Kim created SPARK-35403: - Summary: Migrate ALTER TABLE commands that alter columns to new framework Key: SPARK-35403 URL: https://issues.apache.org/jira/browse/SPARK-35403 Project: Spark Issue Type: Sub-task Components: SQL Affects Versions: 3.2.0 Reporter: Terry Kim -- This message was sent by Atlassian Jira (v8.3.4#803005) - To unsubscribe, e-mail: issues-unsubscr...@spark.apache.org For additional commands, e-mail: issues-h...@spark.apache.org
[jira] [Created] (SPARK-35225) EXPLAIN command should handle empty output of an analyzed plan
Terry Kim created SPARK-35225: - Summary: EXPLAIN command should handle empty output of an analyzed plan Key: SPARK-35225 URL: https://issues.apache.org/jira/browse/SPARK-35225 Project: Spark Issue Type: Improvement Components: SQL Affects Versions: 3.2.0 Reporter: Terry Kim Currently, EXPLAIN command puts an empty line if there is no output for an analyzed plan. For example, {code:java} sql("CREATE VIEW test AS SELECT 1").explain(true) {code} produces: {code:java} == Parsed Logical Plan == 'CreateViewStatement [test], SELECT 1, false, false, PersistedView +- 'Project [unresolvedalias(1, None)] +- OneRowRelation == Analyzed Logical Plan == CreateViewCommand `default`.`test`, SELECT 1, false, false, PersistedView, true +- Project [1 AS 1#7] +- OneRowRelation == Optimized Logical Plan == CreateViewCommand `default`.`test`, SELECT 1, false, false, PersistedView, true +- Project [1 AS 1#7] +- OneRowRelation == Physical Plan == Execute CreateViewCommand +- CreateViewCommand `default`.`test`, SELECT 1, false, false, PersistedView, true +- Project [1 AS 1#7] +- OneRowRelation {code} -- This message was sent by Atlassian Jira (v8.3.4#803005) - To unsubscribe, e-mail: issues-unsubscr...@spark.apache.org For additional commands, e-mail: issues-h...@spark.apache.org
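The fix SPARK-35225 describes amounts to not emitting the output-schema line when the analyzed plan has no output attributes (as for commands like CREATE VIEW). A hedged Python sketch of that formatting rule follows; the function name and data shapes are illustrative, not Spark internals.

```python
def format_analyzed_section(output_attrs, plan_text):
    """Render the '== Analyzed Logical Plan ==' section of EXPLAIN output.

    output_attrs: list of (name, type) pairs for the plan's output schema.
    When the list is empty (e.g. the plan is a command), skip the schema
    line entirely instead of printing an empty line. Illustrative only.
    """
    lines = ["== Analyzed Logical Plan =="]
    if output_attrs:
        lines.append(", ".join(f"{name}: {dtype}" for name, dtype in output_attrs))
    lines.append(plan_text)
    return "\n".join(lines)

# A command produces no output schema, so no blank line appears:
print(format_analyzed_section([], "CreateViewCommand `default`.`test`, ..."))
```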
[jira] [Resolved] (SPARK-34698) Temporary views should be analyzed during the analysis phase for all applicable commands
[ https://issues.apache.org/jira/browse/SPARK-34698?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Terry Kim resolved SPARK-34698. --- Resolution: Fixed > Temporary views should be analyzed during the analysis phase for all > applicable commands > > > Key: SPARK-34698 > URL: https://issues.apache.org/jira/browse/SPARK-34698 > Project: Spark > Issue Type: Improvement > Components: SQL >Affects Versions: 3.2.0 >Reporter: Terry Kim >Priority: Major > > There are commands such as "CREATE TEMP VIEW", "ALTER VIEW AS", "CREATE TEMP > VIEW USING", etc. where the temporary view is resolved when the command runs. > Instead, the analysis should be moved to the analyzer. > This is an umbrella JIRA to track subtasks to move analysis of temp views to > the analyzer. -- This message was sent by Atlassian Jira (v8.3.4#803005) - To unsubscribe, e-mail: issues-unsubscr...@spark.apache.org For additional commands, e-mail: issues-h...@spark.apache.org
[jira] [Updated] (SPARK-35122) Migrate CACHE/UNCACHE TABLE to use AnalysisOnlyCommand
[ https://issues.apache.org/jira/browse/SPARK-35122?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Terry Kim updated SPARK-35122: -- Description: # Now that AnalysisOnlyCommand is introduced, CacheTable/UncacheTable logical plan can extend AnalysisOnlyCommand to clean up the code; currently, there are many places that handle these commands so that the tables in those commands are only analyzed (and not optimized). (was: Now that AnalysisOnlyCommand is introduced, CacheTable/UncacheTable logical plan can extend AnalysisOnlyCommand to clean up the code.) > Migrate CACHE/UNCACHE TABLE to use AnalysisOnlyCommand > -- > > Key: SPARK-35122 > URL: https://issues.apache.org/jira/browse/SPARK-35122 > Project: Spark > Issue Type: Improvement > Components: SQL >Affects Versions: 3.2.0 >Reporter: Terry Kim >Priority: Major > > # Now that AnalysisOnlyCommand is introduced, CacheTable/UncacheTable logical > plan can extend AnalysisOnlyCommand to clean up the code; currently, there > are many places that handle these commands so that the tables in those > commands are only analyzed (and not optimized). -- This message was sent by Atlassian Jira (v8.3.4#803005) - To unsubscribe, e-mail: issues-unsubscr...@spark.apache.org For additional commands, e-mail: issues-h...@spark.apache.org
[jira] [Updated] (SPARK-35122) Migrate CACHE/UNCACHE TABLE to use AnalysisOnlyCommand
[ https://issues.apache.org/jira/browse/SPARK-35122?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Terry Kim updated SPARK-35122: -- Description: Now that AnalysisOnlyCommand is introduced, CacheTable/UncacheTable logical plan can extend AnalysisOnlyCommand to clean up the code; currently, there are many places that handle these commands so that the tables in those commands are only analyzed (and not optimized). (was: # Now that AnalysisOnlyCommand is introduced, CacheTable/UncacheTable logical plan can extend AnalysisOnlyCommand to clean up the code; currently, there are many places that handle these commands so that the tables in those commands are only analyzed (and not optimized).) > Migrate CACHE/UNCACHE TABLE to use AnalysisOnlyCommand > -- > > Key: SPARK-35122 > URL: https://issues.apache.org/jira/browse/SPARK-35122 > Project: Spark > Issue Type: Improvement > Components: SQL >Affects Versions: 3.2.0 >Reporter: Terry Kim >Priority: Major > > Now that AnalysisOnlyCommand is introduced, CacheTable/UncacheTable logical > plan can extend AnalysisOnlyCommand to clean up the code; currently, there > are many places that handle these commands so that the tables in those > commands are only analyzed (and not optimized). -- This message was sent by Atlassian Jira (v8.3.4#803005) - To unsubscribe, e-mail: issues-unsubscr...@spark.apache.org For additional commands, e-mail: issues-h...@spark.apache.org
[jira] [Created] (SPARK-35122) Migrate CACHE/UNCACHE TABLE to use AnalysisOnlyCommand
Terry Kim created SPARK-35122: - Summary: Migrate CACHE/UNCACHE TABLE to use AnalysisOnlyCommand Key: SPARK-35122 URL: https://issues.apache.org/jira/browse/SPARK-35122 Project: Spark Issue Type: Improvement Components: SQL Affects Versions: 3.2.0 Reporter: Terry Kim Now that AnalysisOnlyCommand is introduced, CacheTable/UncacheTable logical plan can extend AnalysisOnlyCommand to clean up the code. -- This message was sent by Atlassian Jira (v8.3.4#803005) - To unsubscribe, e-mail: issues-unsubscr...@spark.apache.org For additional commands, e-mail: issues-h...@spark.apache.org
[jira] [Updated] (SPARK-34698) Temporary views should be analyzed during the analysis phase for all applicable commands
[ https://issues.apache.org/jira/browse/SPARK-34698?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Terry Kim updated SPARK-34698: -- Description: There are commands such as "CREATE TEMP VIEW", "ALTER VIEW AS", "CREATE TEMP VIEW USING", etc. where the temporary view is resolved when the command runs. Instead, the analysis should be moved to the analyzer. This is an umbrella JIRA to track subtasks to move analysis of temp views to the analyzer. was: There are commands such as "CREATE TEMP VIEW", "ALTER VIEW AS", "CREATE TEMP VIEW USING", etc. where the temporary view is resolved when the command runs. Instead, the analysis should be moved to the analyzer. > Temporary views should be analyzed during the analysis phase for all > applicable commands > > > Key: SPARK-34698 > URL: https://issues.apache.org/jira/browse/SPARK-34698 > Project: Spark > Issue Type: Improvement > Components: SQL >Affects Versions: 3.2.0 >Reporter: Terry Kim >Priority: Major > > There are commands such as "CREATE TEMP VIEW", "ALTER VIEW AS", "CREATE TEMP > VIEW USING", etc. where the temporary view is resolved when the command runs. > Instead, the analysis should be moved to the analyzer. > This is an umbrella JIRA to track subtasks to move analysis of temp views to > the analyzer. -- This message was sent by Atlassian Jira (v8.3.4#803005) - To unsubscribe, e-mail: issues-unsubscr...@spark.apache.org For additional commands, e-mail: issues-h...@spark.apache.org
[jira] [Updated] (SPARK-34698) Temporary views should be analyzed during the analysis phase for all applicable commands
[ https://issues.apache.org/jira/browse/SPARK-34698?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Terry Kim updated SPARK-34698: -- Description: There are commands such as "CREATE TEMP VIEW", "ALTER VIEW AS", "CREATE TEMP VIEW USING", etc. where the temporary view is resolved when the command runs. Instead, the analysis should be moved to the analyzer. was: Currently, the session catalog can store a local/global temporary view in two different ways: an analyzed plan or TemporaryViewRelation storing CatalogTable. This JIRA keeps track of subtasks that make storing temporary views consistent, by always storing TemporaryViewRelation. > Temporary views should be analyzed during the analysis phase for all > applicable commands > > > Key: SPARK-34698 > URL: https://issues.apache.org/jira/browse/SPARK-34698 > Project: Spark > Issue Type: Improvement > Components: SQL >Affects Versions: 3.2.0 >Reporter: Terry Kim >Priority: Major > > There are commands such as "CREATE TEMP VIEW", "ALTER VIEW AS", "CREATE TEMP > VIEW USING", etc. where the temporary view is resolved when the command runs. > Instead, the analysis should be moved to the analyzer. -- This message was sent by Atlassian Jira (v8.3.4#803005) - To unsubscribe, e-mail: issues-unsubscr...@spark.apache.org For additional commands, e-mail: issues-h...@spark.apache.org
[jira] [Updated] (SPARK-34698) Temporary views should be analyzed during the analysis phase for all applicable commands
[ https://issues.apache.org/jira/browse/SPARK-34698?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Terry Kim updated SPARK-34698: -- Summary: Temporary views should be analyzed during the analysis phase for all applicable commands (was: Use TemporaryViewRelation for storing local/global temporary views for all commands) > Temporary views should be analyzed during the analysis phase for all > applicable commands > > > Key: SPARK-34698 > URL: https://issues.apache.org/jira/browse/SPARK-34698 > Project: Spark > Issue Type: Improvement > Components: SQL >Affects Versions: 3.2.0 >Reporter: Terry Kim >Priority: Major > > Currently, the session catalog can store a local/global temporary view in two > different ways: an analyzed plan or TemporaryViewRelation storing > CatalogTable. > This JIRA keeps track of subtasks that make storing temporary views > consistent, by always storing TemporaryViewRelation. -- This message was sent by Atlassian Jira (v8.3.4#803005) - To unsubscribe, e-mail: issues-unsubscr...@spark.apache.org For additional commands, e-mail: issues-h...@spark.apache.org
[jira] [Created] (SPARK-34701) Remove analyzing temp view again in CreateViewCommand
Terry Kim created SPARK-34701: - Summary: Remove analyzing temp view again in CreateViewCommand Key: SPARK-34701 URL: https://issues.apache.org/jira/browse/SPARK-34701 Project: Spark Issue Type: Sub-task Components: SQL Affects Versions: 3.2.0 Reporter: Terry Kim Remove analyzing temp view again in CreateViewCommand. This can be done once all the caller passes analyzed plan to CreateViewCommand. Reference: https://github.com/apache/spark/pull/31652/files#r58959 https://github.com/apache/spark/pull/31273/files#r581592786 -- This message was sent by Atlassian Jira (v8.3.4#803005) - To unsubscribe, e-mail: issues-unsubscr...@spark.apache.org For additional commands, e-mail: issues-h...@spark.apache.org
[jira] [Created] (SPARK-34700) SessionCatalog's createTempView/createGlobalTempView should accept TemporaryViewRelation
Terry Kim created SPARK-34700: - Summary: SessionCatalog's createTempView/createGlobalTempView should accept TemporaryViewRelation Key: SPARK-34700 URL: https://issues.apache.org/jira/browse/SPARK-34700 Project: Spark Issue Type: Sub-task Components: SQL Affects Versions: 3.2.0 Reporter: Terry Kim SessionCatalog's createTempView/createGlobalTempView currently accept LogicalPlan to store temp views, but once all the commands are migrated to store `TemporaryViewRelation`, it should accept TemporaryViewRelation instead. Once this is done, ViewHelper.needsToUncache can remove the following safely: case p => !p.sameResult(aliasedPlan) -- This message was sent by Atlassian Jira (v8.3.4#803005) - To unsubscribe, e-mail: issues-unsubscr...@spark.apache.org For additional commands, e-mail: issues-h...@spark.apache.org
[jira] [Created] (SPARK-34699) 'CREATE TEMP VIEW USING' should use TemporaryViewRelation to store temp views
Terry Kim created SPARK-34699: - Summary: 'CREATE TEMP VIEW USING' should use TemporaryViewRelation to store temp views Key: SPARK-34699 URL: https://issues.apache.org/jira/browse/SPARK-34699 Project: Spark Issue Type: Sub-task Components: SQL Affects Versions: 3.2.0 Reporter: Terry Kim 'CREATE TEMP VIEW USING' should use TemporaryViewRelation to store temp views. -- This message was sent by Atlassian Jira (v8.3.4#803005) - To unsubscribe, e-mail: issues-unsubscr...@spark.apache.org For additional commands, e-mail: issues-h...@spark.apache.org
[jira] [Updated] (SPARK-34546) AlterViewAs.query should be analyzed during the analysis phase
[ https://issues.apache.org/jira/browse/SPARK-34546?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Terry Kim updated SPARK-34546: -- Parent: SPARK-34698 Issue Type: Sub-task (was: Improvement) > AlterViewAs.query should be analyzed during the analysis phase > -- > > Key: SPARK-34546 > URL: https://issues.apache.org/jira/browse/SPARK-34546 > Project: Spark > Issue Type: Sub-task > Components: SQL >Affects Versions: 3.2.0 >Reporter: Terry Kim >Priority: Major > > AlterViewAs.query is currently analyzed in the physical operator, but it > should be analyzed during the analysis phase. -- This message was sent by Atlassian Jira (v8.3.4#803005) - To unsubscribe, e-mail: issues-unsubscr...@spark.apache.org For additional commands, e-mail: issues-h...@spark.apache.org
[jira] [Updated] (SPARK-34152) CreateViewStatement.child should be a real child
[ https://issues.apache.org/jira/browse/SPARK-34152?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Terry Kim updated SPARK-34152: -- Parent: SPARK-34698 Issue Type: Sub-task (was: Improvement) > CreateViewStatement.child should be a real child > > > Key: SPARK-34152 > URL: https://issues.apache.org/jira/browse/SPARK-34152 > Project: Spark > Issue Type: Sub-task > Components: SQL >Affects Versions: 3.2.0 >Reporter: Wenchen Fan >Assignee: Apache Spark >Priority: Major > Fix For: 3.2.0 > > > Similar to `CreateTableAsSelectStatement`, the input query of > `CreateViewStatement` should be a child and get analyzed during the analysis > phase. -- This message was sent by Atlassian Jira (v8.3.4#803005) - To unsubscribe, e-mail: issues-unsubscr...@spark.apache.org For additional commands, e-mail: issues-h...@spark.apache.org
[jira] [Created] (SPARK-34698) Use TemporaryViewRelation for storing local/global temporary views for all commands
Terry Kim created SPARK-34698: - Summary: Use TemporaryViewRelation for storing local/global temporary views for all commands Key: SPARK-34698 URL: https://issues.apache.org/jira/browse/SPARK-34698 Project: Spark Issue Type: Improvement Components: SQL Affects Versions: 3.2.0 Reporter: Terry Kim Currently, the session catalog can store a local/global temporary view in two different ways: an analyzed plan or TemporaryViewRelation storing CatalogTable. This JIRA keeps track of subtasks that make storing temporary views consistent, by always storing TemporaryViewRelation. -- This message was sent by Atlassian Jira (v8.3.4#803005) - To unsubscribe, e-mail: issues-unsubscr...@spark.apache.org For additional commands, e-mail: issues-h...@spark.apache.org
[jira] [Commented] (SPARK-34380) Support ifExists for ALTER TABLE ... UNSET TBLPROPERTIES
[ https://issues.apache.org/jira/browse/SPARK-34380?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17293854#comment-17293854 ] Terry Kim commented on SPARK-34380: --- [~angerszhuuu] This PR was reverted, thus I think the status was reset. > Support ifExists for ALTER TABLE ... UNSET TBLPROPERTIES > > > Key: SPARK-34380 > URL: https://issues.apache.org/jira/browse/SPARK-34380 > Project: Spark > Issue Type: Sub-task > Components: SQL >Affects Versions: 3.2.0 >Reporter: Terry Kim >Assignee: Terry Kim >Priority: Major > > Support ifExists for ALTER TABLE ... UNSET TBLPROPERTIES -- This message was sent by Atlassian Jira (v8.3.4#803005) - To unsubscribe, e-mail: issues-unsubscr...@spark.apache.org For additional commands, e-mail: issues-h...@spark.apache.org
[jira] [Created] (SPARK-34546) AlterViewAs.query should be analyzed during the analysis phase
Terry Kim created SPARK-34546: - Summary: AlterViewAs.query should be analyzed during the analysis phase Key: SPARK-34546 URL: https://issues.apache.org/jira/browse/SPARK-34546 Project: Spark Issue Type: Improvement Components: SQL Affects Versions: 3.2.0 Reporter: Terry Kim AlterViewAs.query is currently analyzed in the physical operator, but it should be analyzed during the analysis phase. -- This message was sent by Atlassian Jira (v8.3.4#803005) - To unsubscribe, e-mail: issues-unsubscr...@spark.apache.org For additional commands, e-mail: issues-h...@spark.apache.org
[jira] [Commented] (SPARK-34302) Migrate ALTER TABLE .. CHANGE COLUMN to new resolution framework
[ https://issues.apache.org/jira/browse/SPARK-34302?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17286858#comment-17286858 ] Terry Kim commented on SPARK-34302: --- [~maxgekk] Thanks for letting me know! > Migrate ALTER TABLE .. CHANGE COLUMN to new resolution framework > > > Key: SPARK-34302 > URL: https://issues.apache.org/jira/browse/SPARK-34302 > Project: Spark > Issue Type: Sub-task > Components: SQL >Affects Versions: 3.2.0 >Reporter: Maxim Gekk >Assignee: Maxim Gekk >Priority: Major > Fix For: 3.2.0 > > > # Create the Command logical node for ALTER TABLE .. CHANGE COLUMN > # Remove AlterTableAlterColumnStatement > # Remove the check verifyAlterTableType() from run() -- This message was sent by Atlassian Jira (v8.3.4#803005) - To unsubscribe, e-mail: issues-unsubscr...@spark.apache.org For additional commands, e-mail: issues-h...@spark.apache.org
[jira] [Created] (SPARK-34388) Propagate the registered UDF names to ScalaUDAF and ScalaAggregator
Terry Kim created SPARK-34388: - Summary: Propagate the registered UDF names to ScalaUDAF and ScalaAggregator Key: SPARK-34388 URL: https://issues.apache.org/jira/browse/SPARK-34388 Project: Spark Issue Type: Improvement Components: SQL Affects Versions: 3.2.0 Reporter: Terry Kim Propagate the registered UDF names to ScalaUDAF and ScalaAggregator. This can improve EXPLAIN output, etc. -- This message was sent by Atlassian Jira (v8.3.4#803005) - To unsubscribe, e-mail: issues-unsubscr...@spark.apache.org For additional commands, e-mail: issues-h...@spark.apache.org
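The idea in SPARK-34388 is to carry the registered name alongside the aggregator object so that later plan printing can show the user-facing name instead of an anonymous class name. A hypothetical Python sketch of that wrapping; NamedAggregator is invented for illustration, while Spark does the equivalent inside ScalaUDAF/ScalaAggregator.

```python
class NamedAggregator:
    """Wrap an aggregation function with the name it was registered under,
    so a plan printer can render 'my_avg(col)' rather than an opaque
    object repr. Purely illustrative; not a Spark API.
    """

    def __init__(self, name, func):
        self.name = name
        self.func = func

    def __repr__(self):
        # The registered name is what shows up in EXPLAIN-style output.
        return self.name

agg = NamedAggregator("my_avg", lambda xs: sum(xs) / len(xs))
print(f"Aggregate [{agg!r}(value)]")  # plan text now carries the UDF name
```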
[jira] [Commented] (SPARK-34380) Support ifExists for ALTER TABLE ... UNSET TBLPROPERTIES
[ https://issues.apache.org/jira/browse/SPARK-34380?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17279893#comment-17279893 ] Terry Kim commented on SPARK-34380: --- Looking > Support ifExists for ALTER TABLE ... UNSET TBLPROPERTIES > > > Key: SPARK-34380 > URL: https://issues.apache.org/jira/browse/SPARK-34380 > Project: Spark > Issue Type: Sub-task > Components: SQL >Affects Versions: 3.2.0 >Reporter: Terry Kim >Priority: Major > > Support ifExists for ALTER TABLE ... UNSET TBLPROPERTIES -- This message was sent by Atlassian Jira (v8.3.4#803005) - To unsubscribe, e-mail: issues-unsubscr...@spark.apache.org For additional commands, e-mail: issues-h...@spark.apache.org
[jira] [Created] (SPARK-34380) Support ifExists for ALTER TABLE ... UNSET TBLPROPERTIES
Terry Kim created SPARK-34380: - Summary: Support ifExists for ALTER TABLE ... UNSET TBLPROPERTIES Key: SPARK-34380 URL: https://issues.apache.org/jira/browse/SPARK-34380 Project: Spark Issue Type: Sub-task Components: SQL Affects Versions: 3.2.0 Reporter: Terry Kim Support ifExists for ALTER TABLE ... UNSET TBLPROPERTIES -- This message was sent by Atlassian Jira (v8.3.4#803005) - To unsubscribe, e-mail: issues-unsubscr...@spark.apache.org For additional commands, e-mail: issues-h...@spark.apache.org
[jira] [Created] (SPARK-34320) Migrate ALTER TABLE drop columns command to the new resolution framework
Terry Kim created SPARK-34320: - Summary: Migrate ALTER TABLE drop columns command to the new resolution framework Key: SPARK-34320 URL: https://issues.apache.org/jira/browse/SPARK-34320 Project: Spark Issue Type: Sub-task Components: SQL Affects Versions: 3.2.0 Reporter: Terry Kim Migrate ALTER TABLE drop columns command to the new resolution framework -- This message was sent by Atlassian Jira (v8.3.4#803005) - To unsubscribe, e-mail: issues-unsubscr...@spark.apache.org For additional commands, e-mail: issues-h...@spark.apache.org
[jira] [Created] (SPARK-34317) Introduce relationTypeMismatchHint to UnresolvedTable for a better error message
Terry Kim created SPARK-34317: - Summary: Introduce relationTypeMismatchHint to UnresolvedTable for a better error message Key: SPARK-34317 URL: https://issues.apache.org/jira/browse/SPARK-34317 Project: Spark Issue Type: Improvement Components: SQL Affects Versions: 3.2.0 Reporter: Terry Kim The relationTypeMismatchHint in UnresolvedTable can be used to give a hint if the resolved relation is a view. For example, for "ALTER TABLE t ...", if "t" is resolved to a view, the error message will also contain a hint, "Please use ALTER VIEW instead." -- This message was sent by Atlassian Jira (v8.3.4#803005) - To unsubscribe, e-mail: issues-unsubscr...@spark.apache.org For additional commands, e-mail: issues-h...@spark.apache.org
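The hint mechanism described above is essentially an optional suffix appended to the relation-type mismatch error. A minimal Python sketch of how such a message could be assembled; the function name and parameters are illustrative, not Spark's API.

```python
def unresolved_table_error(name, resolved_as, hint=None):
    """Build a mismatch error message, optionally followed by a hint.

    Illustrative only: mirrors the idea of relationTypeMismatchHint,
    where a command expecting a table can suggest the right command
    when the name resolves to something else (e.g. a view).
    """
    msg = f"'{name}' is a {resolved_as}, which is not a table."
    if hint:
        msg += f" {hint}"
    return msg

# ALTER TABLE resolving to a view gets a pointer to ALTER VIEW:
print(unresolved_table_error("t", "view", hint="Please use ALTER VIEW instead."))
```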
[jira] [Updated] (SPARK-34313) Migrate ALTER TABLE SET/UNSET TBLPROPERTIES commands to the new resolution framework
[ https://issues.apache.org/jira/browse/SPARK-34313?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Terry Kim updated SPARK-34313: -- Description: Migrate ALTER TABLE SET/UNSET TBLPROPERTIES commands to the new resolution framework (was: Migrate ALTER TABLE SET/UNSET PROPERTIES commands to the new resolution framework) > Migrate ALTER TABLE SET/UNSET TBLPROPERTIES commands to the new resolution > framework > > > Key: SPARK-34313 > URL: https://issues.apache.org/jira/browse/SPARK-34313 > Project: Spark > Issue Type: Sub-task > Components: SQL >Affects Versions: 3.2.0 >Reporter: Terry Kim >Priority: Major > > Migrate ALTER TABLE SET/UNSET TBLPROPERTIES commands to the new resolution > framework -- This message was sent by Atlassian Jira (v8.3.4#803005) - To unsubscribe, e-mail: issues-unsubscr...@spark.apache.org For additional commands, e-mail: issues-h...@spark.apache.org
[jira] [Updated] (SPARK-34313) Migrate ALTER TABLE SET/UNSET TBLPROPERTIES commands to the new resolution framework
[ https://issues.apache.org/jira/browse/SPARK-34313?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Terry Kim updated SPARK-34313: -- Summary: Migrate ALTER TABLE SET/UNSET TBLPROPERTIES commands to the new resolution framework (was: Migrate ALTER TABLE SET/UNSET PROPERTIES commands to the new resolution framework) > Migrate ALTER TABLE SET/UNSET TBLPROPERTIES commands to the new resolution > framework > > > Key: SPARK-34313 > URL: https://issues.apache.org/jira/browse/SPARK-34313 > Project: Spark > Issue Type: Sub-task > Components: SQL >Affects Versions: 3.2.0 >Reporter: Terry Kim >Priority: Major > > Migrate ALTER TABLE SET/UNSET PROPERTIES commands to the new resolution > framework -- This message was sent by Atlassian Jira (v8.3.4#803005) - To unsubscribe, e-mail: issues-unsubscr...@spark.apache.org For additional commands, e-mail: issues-h...@spark.apache.org
[jira] [Created] (SPARK-34313) Migrate ALTER TABLE SET/UNSET PROPERTIES commands to the new resolution framework
Terry Kim created SPARK-34313: - Summary: Migrate ALTER TABLE SET/UNSET PROPERTIES commands to the new resolution framework Key: SPARK-34313 URL: https://issues.apache.org/jira/browse/SPARK-34313 Project: Spark Issue Type: Sub-task Components: SQL Affects Versions: 3.2.0 Reporter: Terry Kim Migrate ALTER TABLE SET/UNSET PROPERTIES commands to the new resolution framework -- This message was sent by Atlassian Jira (v8.3.4#803005) - To unsubscribe, e-mail: issues-unsubscr...@spark.apache.org For additional commands, e-mail: issues-h...@spark.apache.org
[jira] [Created] (SPARK-34299) Clean up ResolveSessionCatalog
Terry Kim created SPARK-34299: - Summary: Clean up ResolveSessionCatalog Key: SPARK-34299 URL: https://issues.apache.org/jira/browse/SPARK-34299 Project: Spark Issue Type: Improvement Components: SQL Affects Versions: 3.2.0 Reporter: Terry Kim ResolveSessionCatalog doesn't need to have isTempView and isTempFunction as the temp view/function resolution is done in the Analyzer. -- This message was sent by Atlassian Jira (v8.3.4#803005) - To unsubscribe, e-mail: issues-unsubscr...@spark.apache.org For additional commands, e-mail: issues-h...@spark.apache.org
[jira] [Resolved] (SPARK-34252) View subqueries in aggregate's grouping expression fail during the analysis check
[ https://issues.apache.org/jira/browse/SPARK-34252?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Terry Kim resolved SPARK-34252.
-------------------------------
Resolution: Not A Problem

> View subqueries in aggregate's grouping expression fail during the analysis check
> ---------------------------------------------------------------------------------
>
> Key: SPARK-34252
> URL: https://issues.apache.org/jira/browse/SPARK-34252
> Project: Spark
> Issue Type: Bug
> Components: SQL
> Affects Versions: 3.1.0
> Reporter: Terry Kim
> Priority: Major
>
> To repro:
> {code:java}
> sql("create temporary view ta(a, b) as select 1, 2")
> sql("create temporary view tc(c, d) as select 1, 2")
> sql("select a, (select sum(d) from tc where a = c) sum_d from ta group by 1, 2").show
> {code}
> fails with:
> {code:java}
> This method should not be called in the analyzer
> java.lang.RuntimeException: This method should not be called in the analyzer
>  at org.apache.spark.sql.catalyst.plans.logical.AnalysisHelper.assertNotAnalysisRule(AnalysisHelper.scala:159)
>  at org.apache.spark.sql.catalyst.plans.logical.AnalysisHelper.assertNotAnalysisRule$(AnalysisHelper.scala:155)
>  at org.apache.spark.sql.catalyst.plans.logical.LogicalPlan.assertNotAnalysisRule(LogicalPlan.scala:29)
>  at org.apache.spark.sql.catalyst.plans.logical.AnalysisHelper.transformUp(AnalysisHelper.scala:179)
>  at org.apache.spark.sql.catalyst.plans.logical.AnalysisHelper.transformUp$(AnalysisHelper.scala:178)
>  at org.apache.spark.sql.catalyst.plans.logical.LogicalPlan.transformUp(LogicalPlan.scala:29)
>  at org.apache.spark.sql.catalyst.analysis.EliminateView$.apply(view.scala:56)
>  at org.apache.spark.sql.catalyst.plans.logical.View.doCanonicalize(basicLogicalOperators.scala:485)
>  at org.apache.spark.sql.catalyst.plans.logical.View.doCanonicalize(basicLogicalOperators.scala:458)
>  at org.apache.spark.sql.catalyst.plans.QueryPlan.canonicalized$lzycompute(QueryPlan.scala:373)
>  at org.apache.spark.sql.catalyst.plans.QueryPlan.canonicalized(QueryPlan.scala:372)
>  at org.apache.spark.sql.catalyst.plans.logical.SubqueryAlias.doCanonicalize(basicLogicalOperators.scala:953)
>  at org.apache.spark.sql.catalyst.plans.logical.SubqueryAlias.doCanonicalize(basicLogicalOperators.scala:936)
>  at org.apache.spark.sql.catalyst.plans.QueryPlan.canonicalized$lzycompute(QueryPlan.scala:373)
>  at org.apache.spark.sql.catalyst.plans.QueryPlan.canonicalized(QueryPlan.scala:372)
>  at org.apache.spark.sql.catalyst.plans.QueryPlan.$anonfun$doCanonicalize$1(QueryPlan.scala:387)
>  at scala.collection.immutable.List.map(List.scala:293)
>  ...
> {code}
> This works fine in Spark 3.0, and the issue appears to have been introduced by https://github.com/apache/spark/pull/30567, which added View.doCanonicalize().

--
This message was sent by Atlassian Jira (v8.3.4#803005)

-
To unsubscribe, e-mail: issues-unsubscr...@spark.apache.org
For additional commands, e-mail: issues-h...@spark.apache.org
[jira] [Updated] (SPARK-34252) View subqueries in aggregate's grouping expression fail during the analysis check
[ https://issues.apache.org/jira/browse/SPARK-34252?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Terry Kim updated SPARK-34252:
------------------------------
Description:
To repro:
{code:java}
sql("create temporary view ta(a, b) as select 1, 2")
sql("create temporary view tc(c, d) as select 1, 2")
sql("select a, (select sum(d) from tc where a = c) sum_d from ta group by 1, 2").show
{code}
fails with:
{code:java}
This method should not be called in the analyzer
java.lang.RuntimeException: This method should not be called in the analyzer
 at org.apache.spark.sql.catalyst.plans.logical.AnalysisHelper.assertNotAnalysisRule(AnalysisHelper.scala:159)
 ...
{code}
This works fine in Spark 3.0, and the issue appears to have been introduced by https://github.com/apache/spark/pull/30567, which added View.doCanonicalize().

was: the same description, except the repro query read "select a, (select sum(d) from tc where a = c) sum_d from ta l1 group by 1, 2" (with a stray table alias l1).

> View subqueries in aggregate's grouping expression fail during the analysis check
>
[jira] [Updated] (SPARK-34252) View subqueries in aggregate's grouping expression fail during the analysis check
[ https://issues.apache.org/jira/browse/SPARK-34252?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Terry Kim updated SPARK-34252:
------------------------------
Summary: View subqueries in aggregate's grouping expression fail during the analysis check (was: Subquery (views) in aggregate's grouping expression fails the analysis check)
[jira] [Commented] (SPARK-34252) View subqueries in aggregate's grouping expression fail during the analysis check
[ https://issues.apache.org/jira/browse/SPARK-34252?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17272295#comment-17272295 ] Terry Kim commented on SPARK-34252:
-----------------------------------
[~cloud_fan], [~dongjoon] Would this be a blocker for the 3.1 release?
[jira] [Updated] (SPARK-34252) Subquery (views) in aggregate's grouping expression fails the analysis check
[ https://issues.apache.org/jira/browse/SPARK-34252?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Terry Kim updated SPARK-34252:
------------------------------
Summary: Subquery (views) in aggregate's grouping expression fails the analysis check (was: Subquery in aggregate's grouping expression fails the analysis check)
[jira] [Commented] (SPARK-34252) Subquery in aggregate's grouping expression fails the analysis check
[ https://issues.apache.org/jira/browse/SPARK-34252?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17272294#comment-17272294 ] Terry Kim commented on SPARK-34252:
-----------------------------------
I am working on the fix.