cloud-fan commented on a change in pull request #26741: [SPARK-30104][SQL] Fix catalog resolution for 'global_temp'
URL: https://github.com/apache/spark/pull/26741#discussion_r355825839
##########
File path: sql/core/src/test/scala/org/apache/spark/sql/connector/DataSourceV2SQLSuite.scala
##########

@@ -1813,6 +1813,26 @@ class DataSourceV2SQLSuite
     }
   }
 
+  test("SPARK-30104: global temp db is used as a table name under v2 catalog") {
+    val globalTempDB = spark.sessionState.conf.getConf(StaticSQLConf.GLOBAL_TEMP_DATABASE)
+    val t = s"testcat.$globalTempDB"
+    withTable(t) {
+      sql(s"CREATE TABLE $t (id bigint, data string) USING foo")
+      sql("USE testcat")
+      // The following should not throw AnalysisException, but should use `testcat.$globalTempDB`.
+      sql(s"DESCRIBE TABLE $globalTempDB")
+    }
+  }
+
+  test("table name same as catalog can be used") {
+    withTable("testcat.testcat") {
+      sql(s"CREATE TABLE testcat.testcat (id bigint, data string) USING foo")
+      sql("USE testcat")
+      // The following should not throw AnalysisException.
+      sql(s"DESCRIBE TABLE testcat")
+    }

Review comment:

This is expected. We can add many special cases to the resolution policy, but it's better to keep the policy simple. According to the original design, the first name part should be resolved to a catalog first, even if that ends up with a table-not-found error. I think the single-part name should be the only special case, as a table name can't be an empty string array.
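The resolution policy described in the comment — try the first name part as a catalog first, with single-part names as the only special case — can be sketched as a small standalone Scala snippet. This is a hedged illustration only: `NameResolutionSketch`, its `catalogs` set, and `resolve` are hypothetical names for this example, not Spark's actual resolver implementation.

```scala
// Minimal sketch (NOT Spark's actual resolver) of the name-resolution
// policy discussed above. All names here are illustrative stand-ins.
object NameResolutionSketch {
  // Hypothetical set of registered catalog names.
  val catalogs: Set[String] = Set("testcat", "spark_catalog")

  // Resolves an identifier to (catalog, remaining table name parts).
  def resolve(nameParts: Seq[String], currentCatalog: String): (String, Seq[String]) =
    nameParts match {
      // Single-part name: the only special case; it can never name a
      // catalog alone, because a table name can't be an empty name array,
      // so it is resolved in the current catalog.
      case Seq(table) => (currentCatalog, Seq(table))
      // Multi-part name: the first part is tried as a catalog first,
      // even if the later table lookup fails with "table not found".
      case head +: rest if catalogs.contains(head) => (head, rest)
      // First part is not a registered catalog: the whole name is
      // resolved inside the current catalog.
      case parts => (currentCatalog, parts)
    }
}
```

Under this sketch, `resolve(Seq("testcat", "global_temp"), "spark_catalog")` picks the catalog `testcat` with table name `global_temp`, while the single-part `resolve(Seq("global_temp"), "testcat")` stays in the current catalog — matching the behavior the two tests above exercise.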