XBaith opened a new issue, #2823:
URL: https://github.com/apache/amoro/issues/2823
### What happened?
SQL statements cannot be executed in the local terminal when the catalog is of the Glue type, because the terminal puts an incorrect catalog type into the Spark session configuration when the statement is executed.
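
From the session configuration in the log below, the terminal registers the catalog as `org.apache.iceberg.spark.SparkCatalog` and also sets `spark.sql.catalog.glue.type => glue`, which `CatalogUtil.buildIcebergCatalog` rejects. A minimal sketch of the difference, assuming the standard Iceberg AWS configuration applies here and that the `iceberg-aws` and AWS SDK bundles would be on the terminal's classpath (values other than the warehouse are illustrative):

```shell
# Properties the local terminal currently generates (taken from the log below);
# CatalogUtil.buildIcebergCatalog fails with "Unknown catalog type: glue":
spark.sql.catalog.glue               org.apache.iceberg.spark.SparkCatalog
spark.sql.catalog.glue.type          glue

# How an Iceberg SparkCatalog backed by AWS Glue is normally configured
# (per the Iceberg AWS docs); warehouse value copied from this catalog:
spark.sql.catalog.glue               org.apache.iceberg.spark.SparkCatalog
spark.sql.catalog.glue.catalog-impl  org.apache.iceberg.aws.glue.GlueCatalog
spark.sql.catalog.glue.io-impl      org.apache.iceberg.aws.s3.S3FileIO
spark.sql.catalog.glue.warehouse     s3://dummy
```

Newer Iceberg releases also seem to accept `type = glue` directly, but the runtime used by the local terminal here evidently does not, so emitting `catalog-impl` (or at least not emitting an unsupported `type`) looks like the safer choice.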
### Affects Versions
master/0.7.x
### What engines are you seeing the problem on?
AMS
### How to reproduce
1. Create a Glue server catalog.
2. Execute a SQL statement in the local terminal (a standalone sketch of the same failure is included below).
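
For reference, a hypothetical standalone reproduction outside AMS that mirrors the properties the terminal generates (this assumes an Iceberg Spark runtime on the classpath that, like the one bundled with the local terminal, has no `glue` case in `CatalogUtil.buildIcebergCatalog`; catalog name and warehouse are placeholders):

```shell
# Switching to the "glue" catalog should fail the same way as in the log below,
# because SparkCatalog.initialize() hands the unsupported type to CatalogUtil.
spark-sql \
  --conf spark.sql.catalog.glue=org.apache.iceberg.spark.SparkCatalog \
  --conf spark.sql.catalog.glue.type=glue \
  --conf spark.sql.catalog.glue.warehouse=s3://dummy \
  -e "use glue; show databases"
# expected failure: java.lang.UnsupportedOperationException: Unknown catalog type: glue
```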
### Relevant log output
```shell
2024/05/10 10:24:01 new sql script submit, current thread pool state. [Active: 1, PoolSize: 1]
2024/05/10 10:24:01 terminal session dose not exists. create session first
2024/05/10 10:24:04 create a new terminal session.
2024/05/10 10:24:04 fetch terminal session: node0zracjqdifb8112db5yiygyyb60.node0-null-null-glue
setup session, session factory: org.apache.amoro.server.terminal.local.LocalSessionFactory
spark.sql.catalog.glue.table.self-optimizing.group default
spark.sql.catalog.glue.table-formats ICEBERG
spark.sql.catalog.glue.warehouse s3://dummy
spark.sql.catalog.glue.type glue
spark.sql.catalog.glue org.apache.iceberg.spark.SparkCatalog
spark.sql.arctic.refresh-catalog-before-usage true
2024/05/10 10:24:04 session configuration: catalog-url-base => thrift://127.0.0.1:1260
2024/05/10 10:24:04 session configuration: spark.sql.arctic.refresh-catalog-before-usage => true
2024/05/10 10:24:04 session configuration: session.catalog.glue.connector => iceberg
2024/05/10 10:24:04 session configuration: spark.sql.catalog.glue.type => glue
2024/05/10 10:24:04 session configuration: catalog.glue.table.self-optimizing.group => default
2024/05/10 10:24:04 session configuration: session.catalogs => glue
2024/05/10 10:24:04 session configuration: catalog.glue.warehouse => s3://dummy
2024/05/10 10:24:04 session configuration: spark.sql.catalog.glue => org.apache.iceberg.spark.SparkCatalog
2024/05/10 10:24:04 session configuration: catalog.glue.table-formats => ICEBERG
2024/05/10 10:24:04 session configuration: catalog.glue.type => glue
2024/05/10 10:24:04 session configuration: spark.sql.catalog.glue.warehouse => s3://dummy
2024/05/10 10:24:04 session configuration: session.fetch-size => 1000
2024/05/10 10:24:04 session configuration: spark.sql.catalog.glue.table.self-optimizing.group => default
2024/05/10 10:24:04 session configuration: spark.sql.catalog.glue.table-formats => ICEBERG
2024/05/10 10:24:04
2024/05/10 10:24:04 prepare execute statement, line:1
2024/05/10 10:24:04 show databases
2024/05/10 10:24:04 meet exception during execution.
2024/05/10 10:24:04 java.lang.UnsupportedOperationException: Unknown catalog type: glue
    at org.apache.iceberg.CatalogUtil.buildIcebergCatalog(CatalogUtil.java:272)
    at org.apache.iceberg.spark.SparkCatalog.buildIcebergCatalog(SparkCatalog.java:144)
    at org.apache.iceberg.spark.SparkCatalog.initialize(SparkCatalog.java:552)
    at org.apache.spark.sql.connector.catalog.Catalogs$.load(Catalogs.scala:60)
    at org.apache.spark.sql.connector.catalog.CatalogManager.$anonfun$catalog$1(CatalogManager.scala:53)
    at scala.collection.mutable.HashMap.getOrElseUpdate(HashMap.scala:86)
    at org.apache.spark.sql.connector.catalog.CatalogManager.catalog(CatalogManager.scala:53)
    at org.apache.spark.sql.connector.catalog.LookupCatalog$CatalogAndNamespace$.unapply(LookupCatalog.scala:86)
    at org.apache.spark.sql.catalyst.analysis.ResolveCatalogs$$anonfun$apply$1.applyOrElse(ResolveCatalogs.scala:33)
    at org.apache.spark.sql.catalyst.analysis.ResolveCatalogs$$anonfun$apply$1.applyOrElse(ResolveCatalogs.scala:32)
    at org.apache.spark.sql.catalyst.plans.logical.AnalysisHelper.$anonfun$resolveOperatorsDownWithPruning$2(AnalysisHelper.scala:170)
    at org.apache.spark.sql.catalyst.trees.CurrentOrigin$.withOrigin(TreeNode.scala:176)
    at org.apache.spark.sql.catalyst.plans.logical.AnalysisHelper.$anonfun$resolveOperatorsDownWithPruning$1(AnalysisHelper.scala:170)
    at org.apache.spark.sql.catalyst.plans.logical.AnalysisHelper$.allowInvokingTransformsInAnalyzer(AnalysisHelper.scala:323)
    at org.apache.spark.sql.catalyst.plans.logical.AnalysisHelper.resolveOperatorsDownWithPruning(AnalysisHelper.scala:168)
    at org.apache.spark.sql.catalyst.plans.logical.AnalysisHelper.resolveOperatorsDownWithPruning$(AnalysisHelper.scala:164)
    at org.apache.spark.sql.catalyst.plans.logical.LogicalPlan.resolveOperatorsDownWithPruning(LogicalPlan.scala:30)
    at org.apache.spark.sql.catalyst.plans.logical.AnalysisHelper.$anonfun$resolveOperatorsDownWithPruning$4(AnalysisHelper.scala:175)
    at org.apache.spark.sql.catalyst.trees.UnaryLike.mapChildren(TreeNode.scala:1228)
    at org.apache.spark.sql.catalyst.trees.UnaryLike.mapChildren$(TreeNode.scala:1227)
    at org.apache.spark.sql.catalyst.plans.logical.SetCatalogAndNamespace.mapChildren(v2Commands.scala:752)
    at org.apache.spark.sql.catalyst.plans.logical.AnalysisHelper.$anonfun$resolveOperatorsDownWithPruning$1(AnalysisHelper.scala:175)
    at org.apache.spark.sql.catalyst.plans.logical.AnalysisHelper$.allowInvokingTransformsInAnalyzer(AnalysisHelper.scala:323)
    at org.apache.spark.sql.catalyst.plans.logical.AnalysisHelper.resolveOperatorsDownWithPruning(AnalysisHelper.scala:168)
    at org.apache.spark.sql.catalyst.plans.logical.AnalysisHelper.resolveOperatorsDownWithPruning$(AnalysisHelper.scala:164)
    at org.apache.spark.sql.catalyst.plans.logical.LogicalPlan.resolveOperatorsDownWithPruning(LogicalPlan.scala:30)
    at org.apache.spark.sql.catalyst.plans.logical.AnalysisHelper.resolveOperatorsWithPruning(AnalysisHelper.scala:99)
    at org.apache.spark.sql.catalyst.plans.logical.AnalysisHelper.resolveOperatorsWithPruning$(AnalysisHelper.scala:96)
    at org.apache.spark.sql.catalyst.plans.logical.LogicalPlan.resolveOperatorsWithPruning(LogicalPlan.scala:30)
    at org.apache.spark.sql.catalyst.plans.logical.AnalysisHelper.resolveOperators(AnalysisHelper.scala:76)
    at org.apache.spark.sql.catalyst.plans.logical.AnalysisHelper.resolveOperators$(AnalysisHelper.scala:75)
    at org.apache.spark.sql.catalyst.plans.logical.LogicalPlan.resolveOperators(LogicalPlan.scala:30)
    at org.apache.spark.sql.catalyst.analysis.ResolveCatalogs.apply(ResolveCatalogs.scala:32)
    at org.apache.spark.sql.catalyst.analysis.ResolveCatalogs.apply(ResolveCatalogs.scala:28)
    at org.apache.spark.sql.catalyst.rules.RuleExecutor.$anonfun$execute$2(RuleExecutor.scala:211)
    at scala.collection.LinearSeqOptimized.foldLeft(LinearSeqOptimized.scala:126)
    at scala.collection.LinearSeqOptimized.foldLeft$(LinearSeqOptimized.scala:122)
    at scala.collection.immutable.List.foldLeft(List.scala:91)
    at org.apache.spark.sql.catalyst.rules.RuleExecutor.$anonfun$execute$1(RuleExecutor.scala:208)
    at org.apache.spark.sql.catalyst.rules.RuleExecutor.$anonfun$execute$1$adapted(RuleExecutor.scala:200)
    at scala.collection.immutable.List.foreach(List.scala:431)
    at org.apache.spark.sql.catalyst.rules.RuleExecutor.execute(RuleExecutor.scala:200)
    at org.apache.spark.sql.catalyst.analysis.Analyzer.org$apache$spark$sql$catalyst$analysis$Analyzer$$executeSameContext(Analyzer.scala:231)
    at org.apache.spark.sql.catalyst.analysis.Analyzer.$anonfun$execute$1(Analyzer.scala:227)
    at org.apache.spark.sql.catalyst.analysis.AnalysisContext$.withNewAnalysisContext(Analyzer.scala:173)
    at org.apache.spark.sql.catalyst.analysis.Analyzer.execute(Analyzer.scala:227)
    at org.apache.spark.sql.catalyst.analysis.Analyzer.execute(Analyzer.scala:188)
    at org.apache.spark.sql.catalyst.rules.RuleExecutor.$anonfun$executeAndTrack$1(RuleExecutor.scala:179)
    at org.apache.spark.sql.catalyst.QueryPlanningTracker$.withTracker(QueryPlanningTracker.scala:88)
    at org.apache.spark.sql.catalyst.rules.RuleExecutor.executeAndTrack(RuleExecutor.scala:179)
    at org.apache.spark.sql.catalyst.analysis.Analyzer.$anonfun$executeAndCheck$1(Analyzer.scala:212)
    at org.apache.spark.sql.catalyst.plans.logical.AnalysisHelper$.markInAnalyzer(AnalysisHelper.scala:330)
    at org.apache.spark.sql.catalyst.analysis.Analyzer.executeAndCheck(Analyzer.scala:211)
    at org.apache.spark.sql.execution.QueryExecution.$anonfun$analyzed$1(QueryExecution.scala:76)
    at org.apache.spark.sql.catalyst.QueryPlanningTracker.measurePhase(QueryPlanningTracker.scala:111)
    at org.apache.spark.sql.execution.QueryExecution.$anonfun$executePhase$2(QueryExecution.scala:185)
    at org.apache.spark.sql.execution.QueryExecution$.withInternalError(QueryExecution.scala:510)
    at org.apache.spark.sql.execution.QueryExecution.$anonfun$executePhase$1(QueryExecution.scala:185)
    at org.apache.spark.sql.SparkSession.withActive(SparkSession.scala:779)
    at org.apache.spark.sql.execution.QueryExecution.executePhase(QueryExecution.scala:184)
    at org.apache.spark.sql.execution.QueryExecution.analyzed$lzycompute(QueryExecution.scala:76)
    at org.apache.spark.sql.execution.QueryExecution.analyzed(QueryExecution.scala:74)
    at org.apache.spark.sql.execution.QueryExecution.assertAnalyzed(QueryExecution.scala:66)
    at org.apache.spark.sql.Dataset$.$anonfun$ofRows$2(Dataset.scala:99)
    at org.apache.spark.sql.SparkSession.withActive(SparkSession.scala:779)
    at org.apache.spark.sql.Dataset$.ofRows(Dataset.scala:97)
    at org.apache.spark.sql.SparkSession.$anonfun$sql$1(SparkSession.scala:622)
    at org.apache.spark.sql.SparkSession.withActive(SparkSession.scala:779)
    at org.apache.spark.sql.SparkSession.sql(SparkSession.scala:617)
    at org.apache.amoro.server.terminal.local.LocalTerminalSession.executeStatement(LocalTerminalSession.java:72)
    at org.apache.amoro.server.terminal.TerminalSessionContext$ExecutionTask.executeStatement(TerminalSessionContext.java:281)
    at org.apache.amoro.server.terminal.TerminalSessionContext$ExecutionTask.execute(TerminalSessionContext.java:245)
    at org.apache.amoro.server.terminal.TerminalSessionContext$ExecutionTask.lambda$get$0(TerminalSessionContext.java:206)
    at org.apache.amoro.table.TableMetaStore.call(TableMetaStore.java:234)
    at org.apache.amoro.table.TableMetaStore.doAs(TableMetaStore.java:207)
    at org.apache.amoro.server.terminal.TerminalSessionContext$ExecutionTask.get(TerminalSessionContext.java:196)
    at org.apache.amoro.server.terminal.TerminalSessionContext$ExecutionTask.get(TerminalSessionContext.java:171)
    at java.util.concurrent.CompletableFuture$AsyncSupply.run(CompletableFuture.java:1604)
    at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
    at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
    at java.lang.Thread.run(Thread.java:750)
```
### Anything else

### Are you willing to submit a PR?
- [X] Yes I am willing to submit a PR!
### Code of Conduct
- [X] I agree to follow this project's Code of Conduct