TangYan-1 opened a new issue #2005:
URL: https://github.com/apache/incubator-kyuubi/issues/2005


   ### Code of Conduct
   
   - [X] I agree to follow this project's [Code of 
Conduct](https://www.apache.org/foundation/policies/conduct)
   
   
   ### Search before asking
   
   - [X] I have searched in the 
[issues](https://github.com/apache/incubator-kyuubi/issues?q=is%3Aissue) and 
found no similar issues.
   
   
   ### Describe the bug
   
   ```sql
   set spark.sql.catalog.spark_catalog = org.apache.iceberg.spark.SparkSessionCatalog;
   set spark.sql.catalog.spark_catalog.type = hive;
   set spark.sql.catalog.spark_catalog.uri = thrift://hivemestore_host:9083;
   create table spark_catalog.default.testtable(key int) using iceberg;
   ```
   
   The queries above succeed in a Spark 3 shell job, but they fail with the exception below when run through Kyuubi via beeline.
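   For comparison, the working spark-shell case passes the same settings at launch time, before the Spark session is created. A sketch of that invocation (the metastore hostname is the placeholder from the repro above):

   ```shell
   # Same catalog settings as the beeline repro, but supplied as launch-time
   # --conf flags so they are present when the session catalog is initialized.
   spark-shell \
     --conf spark.sql.catalog.spark_catalog=org.apache.iceberg.spark.SparkSessionCatalog \
     --conf spark.sql.catalog.spark_catalog.type=hive \
     --conf spark.sql.catalog.spark_catalog.uri=thrift://hivemestore_host:9083
   ```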
   ```
   Caused by: org.apache.hadoop.hive.metastore.api.MetaException: Version information not found in metastore.
        at org.apache.hadoop.hive.metastore.ObjectStore.checkSchema(ObjectStore.java:8066) ~[hive-metastore-2.1.1-cdh6.3.0.jar:2.1.1-cdh6.3.0]
        at org.apache.hadoop.hive.metastore.ObjectStore.verifySchema(ObjectStore.java:8043) ~[hive-metastore-2.1.1-cdh6.3.0.jar:2.1.1-cdh6.3.0]
        at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) ~[?:1.8.0_322]
        at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) ~[?:1.8.0_322]
        at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) ~[?:1.8.0_322]
        at java.lang.reflect.Method.invoke(Method.java:498) ~[?:1.8.0_322]
        at org.apache.hadoop.hive.metastore.RawStoreProxy.invoke(RawStoreProxy.java:101) ~[hive-metastore-2.1.1-cdh6.3.0.jar:2.1.1-cdh6.3.0]
        at com.sun.proxy.$Proxy43.verifySchema(Unknown Source) ~[?:?]
        at org.apache.hadoop.hive.metastore.HiveMetaStore$HMSHandler.getMSForConf(HiveMetaStore.java:655) ~[hive-metastore-2.1.1-cdh6.3.0.jar:2.1.1-cdh6.3.0]
        at org.apache.hadoop.hive.metastore.HiveMetaStore$HMSHandler.getMS(HiveMetaStore.java:648) ~[hive-metastore-2.1.1-cdh6.3.0.jar:2.1.1-cdh6.3.0]
        at org.apache.hadoop.hive.metastore.HiveMetaStore$HMSHandler.createDefaultDB(HiveMetaStore.java:717) ~[hive-metastore-2.1.1-cdh6.3.0.jar:2.1.1-cdh6.3.0]
        at org.apache.hadoop.hive.metastore.HiveMetaStore$HMSHandler.init(HiveMetaStore.java:420) ~[hive-metastore-2.1.1-cdh6.3.0.jar:2.1.1-cdh6.3.0]
        at org.apache.hadoop.hive.metastore.RetryingHMSHandler.<init>(RetryingHMSHandler.java:78) ~[hive-metastore-2.1.1-cdh6.3.0.jar:2.1.1-cdh6.3.0]
        at org.apache.hadoop.hive.metastore.RetryingHMSHandler.getProxy(RetryingHMSHandler.java:84) ~[hive-metastore-2.1.1-cdh6.3.0.jar:2.1.1-cdh6.3.0]
        at org.apache.hadoop.hive.metastore.HiveMetaStore.newRetryingHMSHandler(HiveMetaStore.java:7036) ~[hive-metastore-2.1.1-cdh6.3.0.jar:2.1.1-cdh6.3.0]
        at org.apache.hadoop.hive.metastore.HiveMetaStoreClient.<init>(HiveMetaStoreClient.java:254) ~[hive-metastore-2.1.1-cdh6.3.0.jar:2.1.1-cdh6.3.0]
        at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method) ~[?:1.8.0_322]
        at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62) ~[?:1.8.0_322]
        at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45) ~[?:1.8.0_322]
        at java.lang.reflect.Constructor.newInstance(Constructor.java:423) ~[?:1.8.0_322]
        at org.apache.hadoop.hive.metastore.MetaStoreUtils.newInstance(MetaStoreUtils.java:1773) ~[hive-metastore-2.1.1-cdh6.3.0.jar:2.1.1-cdh6.3.0]
        at org.apache.hadoop.hive.metastore.RetryingMetaStoreClient.<init>(RetryingMetaStoreClient.java:80) ~[hive-metastore-2.1.1-cdh6.3.0.jar:2.1.1-cdh6.3.0]
        at org.apache.hadoop.hive.metastore.RetryingMetaStoreClient.getProxy(RetryingMetaStoreClient.java:130) ~[hive-metastore-2.1.1-cdh6.3.0.jar:2.1.1-cdh6.3.0]
        at org.apache.hadoop.hive.metastore.RetryingMetaStoreClient.getProxy(RetryingMetaStoreClient.java:101) ~[hive-metastore-2.1.1-cdh6.3.0.jar:2.1.1-cdh6.3.0]
        at org.apache.hadoop.hive.metastore.RetryingMetaStoreClient.getProxy(RetryingMetaStoreClient.java:94) ~[hive-metastore-2.1.1-cdh6.3.0.jar:2.1.1-cdh6.3.0]
        at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) ~[?:1.8.0_322]
        at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) ~[?:1.8.0_322]
        at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) ~[?:1.8.0_322]
        at java.lang.reflect.Method.invoke(Method.java:498) ~[?:1.8.0_322]
        at org.apache.iceberg.common.DynMethods$UnboundMethod.invokeChecked(DynMethods.java:65) ~[iceberg-spark-runtime-3.1_2.12-0.13.1.jar:?]
        at org.apache.iceberg.common.DynMethods$UnboundMethod.invoke(DynMethods.java:77) ~[iceberg-spark-runtime-3.1_2.12-0.13.1.jar:?]
        at org.apache.iceberg.common.DynMethods$StaticMethod.invoke(DynMethods.java:196) ~[iceberg-spark-runtime-3.1_2.12-0.13.1.jar:?]
        at org.apache.iceberg.hive.HiveClientPool.newClient(HiveClientPool.java:55) ~[iceberg-spark-runtime-3.1_2.12-0.13.1.jar:?]
   ```
   
   ### Affects Version(s)
   
   1.4.1-incubating
   
   ### Kyuubi Server Log Output
   
   ```log
   14:14:42.253 [SparkSQLSessionManager-exec-pool: Thread-93] ERROR 
org.apache.kyuubi.engine.spark.operation.ExecuteStatement - Error operating 
EXECUTE_STATEMENT: org.apache.iceberg.hive.RuntimeMetaException: Failed to 
connect to Hive Metastore
        at 
org.apache.iceberg.hive.HiveClientPool.newClient(HiveClientPool.java:72)
        at 
org.apache.iceberg.hive.HiveClientPool.newClient(HiveClientPool.java:35)
        at org.apache.iceberg.ClientPoolImpl.get(ClientPoolImpl.java:125)
        at org.apache.iceberg.ClientPoolImpl.run(ClientPoolImpl.java:56)
        at org.apache.iceberg.ClientPoolImpl.run(ClientPoolImpl.java:51)
        at 
org.apache.iceberg.hive.CachedClientPool.run(CachedClientPool.java:76)
        at 
org.apache.iceberg.hive.HiveTableOperations.doRefresh(HiveTableOperations.java:188)
        at 
org.apache.iceberg.BaseMetastoreTableOperations.refresh(BaseMetastoreTableOperations.java:95)
        at 
org.apache.iceberg.BaseMetastoreTableOperations.current(BaseMetastoreTableOperations.java:78)
        at 
org.apache.iceberg.BaseMetastoreCatalog.loadTable(BaseMetastoreCatalog.java:42)
        at 
org.apache.iceberg.shaded.com.github.benmanes.caffeine.cache.BoundedLocalCache.lambda$doComputeIfAbsent$14(BoundedLocalCache.java:2344)
        at 
java.util.concurrent.ConcurrentHashMap.compute(ConcurrentHashMap.java:1853)
        at 
org.apache.iceberg.shaded.com.github.benmanes.caffeine.cache.BoundedLocalCache.doComputeIfAbsent(BoundedLocalCache.java:2342)
        at 
org.apache.iceberg.shaded.com.github.benmanes.caffeine.cache.BoundedLocalCache.computeIfAbsent(BoundedLocalCache.java:2325)
        at 
org.apache.iceberg.shaded.com.github.benmanes.caffeine.cache.LocalCache.computeIfAbsent(LocalCache.java:108)
        at 
org.apache.iceberg.shaded.com.github.benmanes.caffeine.cache.LocalManualCache.get(LocalManualCache.java:62)
        at org.apache.iceberg.CachingCatalog.loadTable(CachingCatalog.java:161)
        at org.apache.iceberg.spark.SparkCatalog.load(SparkCatalog.java:488)
        at 
org.apache.iceberg.spark.SparkCatalog.loadTable(SparkCatalog.java:135)
        at org.apache.iceberg.spark.SparkCatalog.loadTable(SparkCatalog.java:92)
        at 
org.apache.iceberg.spark.SparkSessionCatalog.loadTable(SparkSessionCatalog.java:118)
        at 
org.apache.spark.sql.connector.catalog.TableCatalog.tableExists(TableCatalog.java:119)
        at 
org.apache.spark.sql.execution.datasources.v2.CreateTableExec.run(CreateTableExec.scala:39)
        at 
org.apache.spark.sql.execution.datasources.v2.V2CommandExec.result$lzycompute(V2CommandExec.scala:40)
        at 
org.apache.spark.sql.execution.datasources.v2.V2CommandExec.result(V2CommandExec.scala:40)
        at 
org.apache.spark.sql.execution.datasources.v2.V2CommandExec.executeCollect(V2CommandExec.scala:46)
        at 
org.apache.spark.sql.Dataset.$anonfun$logicalPlan$1(Dataset.scala:228)
        at 
org.apache.spark.sql.Dataset.$anonfun$withAction$1(Dataset.scala:3687)
        at 
org.apache.spark.sql.execution.SQLExecution$.$anonfun$withNewExecutionId$5(SQLExecution.scala:103)
        at 
org.apache.spark.sql.execution.SQLExecution$.withSQLConfPropagated(SQLExecution.scala:163)
        at 
org.apache.spark.sql.execution.SQLExecution$.$anonfun$withNewExecutionId$1(SQLExecution.scala:90)
        at org.apache.spark.sql.SparkSession.withActive(SparkSession.scala:772)
        at 
org.apache.spark.sql.execution.SQLExecution$.withNewExecutionId(SQLExecution.scala:64)
        at org.apache.spark.sql.Dataset.withAction(Dataset.scala:3685)
        at org.apache.spark.sql.Dataset.<init>(Dataset.scala:228)
        at org.apache.spark.sql.Dataset$.$anonfun$ofRows$2(Dataset.scala:99)
        at org.apache.spark.sql.SparkSession.withActive(SparkSession.scala:772)
        at org.apache.spark.sql.Dataset$.ofRows(Dataset.scala:96)
        at 
org.apache.spark.sql.SparkSession.$anonfun$sql$1(SparkSession.scala:615)
        at org.apache.spark.sql.SparkSession.withActive(SparkSession.scala:772)
        at org.apache.spark.sql.SparkSession.sql(SparkSession.scala:610)
        at 
org.apache.kyuubi.engine.spark.operation.ExecuteStatement.$anonfun$executeStatement$1(ExecuteStatement.scala:100)
        at 
scala.runtime.java8.JFunction0$mcV$sp.apply(JFunction0$mcV$sp.java:23)
        at 
org.apache.kyuubi.engine.spark.operation.ExecuteStatement.withLocalProperties(ExecuteStatement.scala:159)
        at 
org.apache.kyuubi.engine.spark.operation.ExecuteStatement.org$apache$kyuubi$engine$spark$operation$ExecuteStatement$$executeStatement(ExecuteStatement.scala:94)
        at 
org.apache.kyuubi.engine.spark.operation.ExecuteStatement$$anon$1.run(ExecuteStatement.scala:127)
        at 
java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
        at java.util.concurrent.FutureTask.run(FutureTask.java:266)
        at 
java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
        at 
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
        at java.lang.Thread.run(Thread.java:750)
   Caused by: java.lang.RuntimeException: Unable to instantiate 
org.apache.hadoop.hive.metastore.HiveMetaStoreClient
        at 
org.apache.hadoop.hive.metastore.MetaStoreUtils.newInstance(MetaStoreUtils.java:1775)
        at 
org.apache.hadoop.hive.metastore.RetryingMetaStoreClient.<init>(RetryingMetaStoreClient.java:80)
        at 
org.apache.hadoop.hive.metastore.RetryingMetaStoreClient.getProxy(RetryingMetaStoreClient.java:130)
        at 
org.apache.hadoop.hive.metastore.RetryingMetaStoreClient.getProxy(RetryingMetaStoreClient.java:101)
        at 
org.apache.hadoop.hive.metastore.RetryingMetaStoreClient.getProxy(RetryingMetaStoreClient.java:94)
        at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
        at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
        at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
        at java.lang.reflect.Method.invoke(Method.java:498)
        at 
org.apache.iceberg.common.DynMethods$UnboundMethod.invokeChecked(DynMethods.java:65)
        at 
org.apache.iceberg.common.DynMethods$UnboundMethod.invoke(DynMethods.java:77)
        at 
org.apache.iceberg.common.DynMethods$StaticMethod.invoke(DynMethods.java:196)
        at 
org.apache.iceberg.hive.HiveClientPool.newClient(HiveClientPool.java:55)
        ... 50 more
   Caused by: java.lang.reflect.InvocationTargetException
        at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
        at 
sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62)
        at 
sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
        at java.lang.reflect.Constructor.newInstance(Constructor.java:423)
        at 
org.apache.hadoop.hive.metastore.MetaStoreUtils.newInstance(MetaStoreUtils.java:1773)
        ... 62 more
   Caused by: MetaException(message:Version information not found in metastore. 
)
        at 
org.apache.hadoop.hive.metastore.ObjectStore.checkSchema(ObjectStore.java:8066)
        at 
org.apache.hadoop.hive.metastore.ObjectStore.verifySchema(ObjectStore.java:8043)
        at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
        at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
        at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
        at java.lang.reflect.Method.invoke(Method.java:498)
        at 
org.apache.hadoop.hive.metastore.RawStoreProxy.invoke(RawStoreProxy.java:101)
        at com.sun.proxy.$Proxy43.verifySchema(Unknown Source)
        at 
org.apache.hadoop.hive.metastore.HiveMetaStore$HMSHandler.getMSForConf(HiveMetaStore.java:655)
        at 
org.apache.hadoop.hive.metastore.HiveMetaStore$HMSHandler.getMS(HiveMetaStore.java:648)
        at 
org.apache.hadoop.hive.metastore.HiveMetaStore$HMSHandler.createDefaultDB(HiveMetaStore.java:717)
        at 
org.apache.hadoop.hive.metastore.HiveMetaStore$HMSHandler.init(HiveMetaStore.java:420)
        at 
org.apache.hadoop.hive.metastore.RetryingHMSHandler.<init>(RetryingHMSHandler.java:78)
        at 
org.apache.hadoop.hive.metastore.RetryingHMSHandler.getProxy(RetryingHMSHandler.java:84)
        at 
org.apache.hadoop.hive.metastore.HiveMetaStore.newRetryingHMSHandler(HiveMetaStore.java:7036)
        at 
org.apache.hadoop.hive.metastore.HiveMetaStoreClient.<init>(HiveMetaStoreClient.java:254)
        ... 67 more
   
   org.apache.iceberg.hive.RuntimeMetaException: Failed to connect to Hive 
Metastore
        at 
org.apache.iceberg.hive.HiveClientPool.newClient(HiveClientPool.java:72) 
~[iceberg-spark-runtime-3.1_2.12-0.13.1.jar:?]
        at 
org.apache.iceberg.hive.HiveClientPool.newClient(HiveClientPool.java:35) 
~[iceberg-spark-runtime-3.1_2.12-0.13.1.jar:?]
        at org.apache.iceberg.ClientPoolImpl.get(ClientPoolImpl.java:125) 
~[iceberg-spark-runtime-3.1_2.12-0.13.1.jar:?]
        at org.apache.iceberg.ClientPoolImpl.run(ClientPoolImpl.java:56) 
~[iceberg-spark-runtime-3.1_2.12-0.13.1.jar:?]
        at org.apache.iceberg.ClientPoolImpl.run(ClientPoolImpl.java:51) 
~[iceberg-spark-runtime-3.1_2.12-0.13.1.jar:?]
        at 
org.apache.iceberg.hive.CachedClientPool.run(CachedClientPool.java:76) 
~[iceberg-spark-runtime-3.1_2.12-0.13.1.jar:?]
        at 
org.apache.iceberg.hive.HiveTableOperations.doRefresh(HiveTableOperations.java:188)
 ~[iceberg-spark-runtime-3.1_2.12-0.13.1.jar:?]
        at 
org.apache.iceberg.BaseMetastoreTableOperations.refresh(BaseMetastoreTableOperations.java:95)
 ~[iceberg-spark-runtime-3.1_2.12-0.13.1.jar:?]
        at 
org.apache.iceberg.BaseMetastoreTableOperations.current(BaseMetastoreTableOperations.java:78)
 ~[iceberg-spark-runtime-3.1_2.12-0.13.1.jar:?]
        at 
org.apache.iceberg.BaseMetastoreCatalog.loadTable(BaseMetastoreCatalog.java:42) 
~[iceberg-spark-runtime-3.1_2.12-0.13.1.jar:?]
        at 
org.apache.iceberg.shaded.com.github.benmanes.caffeine.cache.BoundedLocalCache.lambda$doComputeIfAbsent$14(BoundedLocalCache.java:2344)
 ~[iceberg-spark-runtime-3.1_2.12-0.13.1.jar:?]
        at 
java.util.concurrent.ConcurrentHashMap.compute(ConcurrentHashMap.java:1853) 
~[?:1.8.0_322]
        at 
org.apache.iceberg.shaded.com.github.benmanes.caffeine.cache.BoundedLocalCache.doComputeIfAbsent(BoundedLocalCache.java:2342)
 ~[iceberg-spark-runtime-3.1_2.12-0.13.1.jar:?]
        at 
org.apache.iceberg.shaded.com.github.benmanes.caffeine.cache.BoundedLocalCache.computeIfAbsent(BoundedLocalCache.java:2325)
 ~[iceberg-spark-runtime-3.1_2.12-0.13.1.jar:?]
        at 
org.apache.iceberg.shaded.com.github.benmanes.caffeine.cache.LocalCache.computeIfAbsent(LocalCache.java:108)
 ~[iceberg-spark-runtime-3.1_2.12-0.13.1.jar:?]
        at 
org.apache.iceberg.shaded.com.github.benmanes.caffeine.cache.LocalManualCache.get(LocalManualCache.java:62)
 ~[iceberg-spark-runtime-3.1_2.12-0.13.1.jar:?]
        at org.apache.iceberg.CachingCatalog.loadTable(CachingCatalog.java:161) 
~[iceberg-spark-runtime-3.1_2.12-0.13.1.jar:?]
        at org.apache.iceberg.spark.SparkCatalog.load(SparkCatalog.java:488) 
~[iceberg-spark-runtime-3.1_2.12-0.13.1.jar:?]
        at 
org.apache.iceberg.spark.SparkCatalog.loadTable(SparkCatalog.java:135) 
~[iceberg-spark-runtime-3.1_2.12-0.13.1.jar:?]
        at 
org.apache.iceberg.spark.SparkCatalog.loadTable(SparkCatalog.java:92) 
~[iceberg-spark-runtime-3.1_2.12-0.13.1.jar:?]
        at 
org.apache.iceberg.spark.SparkSessionCatalog.loadTable(SparkSessionCatalog.java:118)
 ~[iceberg-spark-runtime-3.1_2.12-0.13.1.jar:?]
        at 
org.apache.spark.sql.connector.catalog.TableCatalog.tableExists(TableCatalog.java:119)
 ~[spark-catalyst_2.12-3.1.1.jar:3.1.1]
        at 
org.apache.spark.sql.execution.datasources.v2.CreateTableExec.run(CreateTableExec.scala:39)
 ~[spark-sql_2.12-3.1.1.jar:3.1.1]
        at 
org.apache.spark.sql.execution.datasources.v2.V2CommandExec.result$lzycompute(V2CommandExec.scala:40)
 ~[spark-sql_2.12-3.1.1.jar:3.1.1]
        at 
org.apache.spark.sql.execution.datasources.v2.V2CommandExec.result(V2CommandExec.scala:40)
 ~[spark-sql_2.12-3.1.1.jar:3.1.1]
        at 
org.apache.spark.sql.execution.datasources.v2.V2CommandExec.executeCollect(V2CommandExec.scala:46)
 ~[spark-sql_2.12-3.1.1.jar:3.1.1]
        at 
org.apache.spark.sql.Dataset.$anonfun$logicalPlan$1(Dataset.scala:228) 
~[spark-sql_2.12-3.1.1.jar:3.1.1]
        at 
org.apache.spark.sql.Dataset.$anonfun$withAction$1(Dataset.scala:3687) 
~[spark-sql_2.12-3.1.1.jar:3.1.1]
        at 
org.apache.spark.sql.execution.SQLExecution$.$anonfun$withNewExecutionId$5(SQLExecution.scala:103)
 ~[spark-sql_2.12-3.1.1.jar:3.1.1]
        at 
org.apache.spark.sql.execution.SQLExecution$.withSQLConfPropagated(SQLExecution.scala:163)
 ~[spark-sql_2.12-3.1.1.jar:3.1.1]
        at 
org.apache.spark.sql.execution.SQLExecution$.$anonfun$withNewExecutionId$1(SQLExecution.scala:90)
 ~[spark-sql_2.12-3.1.1.jar:3.1.1]
        at org.apache.spark.sql.SparkSession.withActive(SparkSession.scala:772) 
~[spark-sql_2.12-3.1.1.jar:3.1.1]
        at 
org.apache.spark.sql.execution.SQLExecution$.withNewExecutionId(SQLExecution.scala:64)
 ~[spark-sql_2.12-3.1.1.jar:3.1.1]
        at org.apache.spark.sql.Dataset.withAction(Dataset.scala:3685) 
~[spark-sql_2.12-3.1.1.jar:3.1.1]
        at org.apache.spark.sql.Dataset.<init>(Dataset.scala:228) 
~[spark-sql_2.12-3.1.1.jar:3.1.1]
        at org.apache.spark.sql.Dataset$.$anonfun$ofRows$2(Dataset.scala:99) 
~[spark-sql_2.12-3.1.1.jar:3.1.1]
        at org.apache.spark.sql.SparkSession.withActive(SparkSession.scala:772) 
~[spark-sql_2.12-3.1.1.jar:3.1.1]
        at org.apache.spark.sql.Dataset$.ofRows(Dataset.scala:96) 
~[spark-sql_2.12-3.1.1.jar:3.1.1]
        at 
org.apache.spark.sql.SparkSession.$anonfun$sql$1(SparkSession.scala:615) 
~[spark-sql_2.12-3.1.1.jar:3.1.1]
        at org.apache.spark.sql.SparkSession.withActive(SparkSession.scala:772) 
~[spark-sql_2.12-3.1.1.jar:3.1.1]
        at org.apache.spark.sql.SparkSession.sql(SparkSession.scala:610) 
~[spark-sql_2.12-3.1.1.jar:3.1.1]
        at 
org.apache.kyuubi.engine.spark.operation.ExecuteStatement.$anonfun$executeStatement$1(ExecuteStatement.scala:100)
 ~[kyuubi-spark-sql-engine_2.12-1.4.1-incubating.jar:1.4.1-incubating]
        at 
scala.runtime.java8.JFunction0$mcV$sp.apply(JFunction0$mcV$sp.java:23) 
[scala-library-2.12.10.jar:?]
        at 
org.apache.kyuubi.engine.spark.operation.ExecuteStatement.withLocalProperties(ExecuteStatement.scala:159)
 [kyuubi-spark-sql-engine_2.12-1.4.1-incubating.jar:1.4.1-incubating]
        at 
org.apache.kyuubi.engine.spark.operation.ExecuteStatement.org$apache$kyuubi$engine$spark$operation$ExecuteStatement$$executeStatement(ExecuteStatement.scala:94)
 [kyuubi-spark-sql-engine_2.12-1.4.1-incubating.jar:1.4.1-incubating]
        at 
org.apache.kyuubi.engine.spark.operation.ExecuteStatement$$anon$1.run(ExecuteStatement.scala:127)
 [kyuubi-spark-sql-engine_2.12-1.4.1-incubating.jar:1.4.1-incubating]
        at 
java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511) 
[?:1.8.0_322]
        at java.util.concurrent.FutureTask.run(FutureTask.java:266) 
[?:1.8.0_322]
        at 
java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149) 
[?:1.8.0_322]
        at 
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) 
[?:1.8.0_322]
        at java.lang.Thread.run(Thread.java:750) [?:1.8.0_322]
   Caused by: java.lang.RuntimeException: Unable to instantiate 
org.apache.hadoop.hive.metastore.HiveMetaStoreClient
        at 
org.apache.hadoop.hive.metastore.MetaStoreUtils.newInstance(MetaStoreUtils.java:1775)
 ~[hive-metastore-2.1.1-cdh6.3.0.jar:2.1.1-cdh6.3.0]
        at 
org.apache.hadoop.hive.metastore.RetryingMetaStoreClient.<init>(RetryingMetaStoreClient.java:80)
 ~[hive-metastore-2.1.1-cdh6.3.0.jar:2.1.1-cdh6.3.0]
        at 
org.apache.hadoop.hive.metastore.RetryingMetaStoreClient.getProxy(RetryingMetaStoreClient.java:130)
 ~[hive-metastore-2.1.1-cdh6.3.0.jar:2.1.1-cdh6.3.0]
        at 
org.apache.hadoop.hive.metastore.RetryingMetaStoreClient.getProxy(RetryingMetaStoreClient.java:101)
 ~[hive-metastore-2.1.1-cdh6.3.0.jar:2.1.1-cdh6.3.0]
        at 
org.apache.hadoop.hive.metastore.RetryingMetaStoreClient.getProxy(RetryingMetaStoreClient.java:94)
 ~[hive-metastore-2.1.1-cdh6.3.0.jar:2.1.1-cdh6.3.0]
        at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) 
~[?:1.8.0_322]
        at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) 
~[?:1.8.0_322]
        at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
 ~[?:1.8.0_322]
        at java.lang.reflect.Method.invoke(Method.java:498) ~[?:1.8.0_322]
        at 
org.apache.iceberg.common.DynMethods$UnboundMethod.invokeChecked(DynMethods.java:65)
 ~[iceberg-spark-runtime-3.1_2.12-0.13.1.jar:?]
        at 
org.apache.iceberg.common.DynMethods$UnboundMethod.invoke(DynMethods.java:77) 
~[iceberg-spark-runtime-3.1_2.12-0.13.1.jar:?]
        at 
org.apache.iceberg.common.DynMethods$StaticMethod.invoke(DynMethods.java:196) 
~[iceberg-spark-runtime-3.1_2.12-0.13.1.jar:?]
        at 
org.apache.iceberg.hive.HiveClientPool.newClient(HiveClientPool.java:55) 
~[iceberg-spark-runtime-3.1_2.12-0.13.1.jar:?]
        ... 50 more
   Caused by: java.lang.reflect.InvocationTargetException
        at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native 
Method) ~[?:1.8.0_322]
        at 
sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62)
 ~[?:1.8.0_322]
        at 
sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
 ~[?:1.8.0_322]
        at java.lang.reflect.Constructor.newInstance(Constructor.java:423) 
~[?:1.8.0_322]
        at 
org.apache.hadoop.hive.metastore.MetaStoreUtils.newInstance(MetaStoreUtils.java:1773)
 ~[hive-metastore-2.1.1-cdh6.3.0.jar:2.1.1-cdh6.3.0]
        at 
org.apache.hadoop.hive.metastore.RetryingMetaStoreClient.<init>(RetryingMetaStoreClient.java:80)
 ~[hive-metastore-2.1.1-cdh6.3.0.jar:2.1.1-cdh6.3.0]
        at 
org.apache.hadoop.hive.metastore.RetryingMetaStoreClient.getProxy(RetryingMetaStoreClient.java:130)
 ~[hive-metastore-2.1.1-cdh6.3.0.jar:2.1.1-cdh6.3.0]
        at 
org.apache.hadoop.hive.metastore.RetryingMetaStoreClient.getProxy(RetryingMetaStoreClient.java:101)
 ~[hive-metastore-2.1.1-cdh6.3.0.jar:2.1.1-cdh6.3.0]
        at 
org.apache.hadoop.hive.metastore.RetryingMetaStoreClient.getProxy(RetryingMetaStoreClient.java:94)
 ~[hive-metastore-2.1.1-cdh6.3.0.jar:2.1.1-cdh6.3.0]
        at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) 
~[?:1.8.0_322]
        at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) 
~[?:1.8.0_322]
        at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
 ~[?:1.8.0_322]
        at java.lang.reflect.Method.invoke(Method.java:498) ~[?:1.8.0_322]
        at 
org.apache.iceberg.common.DynMethods$UnboundMethod.invokeChecked(DynMethods.java:65)
 ~[iceberg-spark-runtime-3.1_2.12-0.13.1.jar:?]
        at 
org.apache.iceberg.common.DynMethods$UnboundMethod.invoke(DynMethods.java:77) 
~[iceberg-spark-runtime-3.1_2.12-0.13.1.jar:?]
        at 
org.apache.iceberg.common.DynMethods$StaticMethod.invoke(DynMethods.java:196) 
~[iceberg-spark-runtime-3.1_2.12-0.13.1.jar:?]
        at 
org.apache.iceberg.hive.HiveClientPool.newClient(HiveClientPool.java:55) 
~[iceberg-spark-runtime-3.1_2.12-0.13.1.jar:?]
        ... 50 more
   Caused by: org.apache.hadoop.hive.metastore.api.MetaException: Version 
information not found in metastore. 
        at 
org.apache.hadoop.hive.metastore.ObjectStore.checkSchema(ObjectStore.java:8066) 
~[hive-metastore-2.1.1-cdh6.3.0.jar:2.1.1-cdh6.3.0]
        at 
org.apache.hadoop.hive.metastore.ObjectStore.verifySchema(ObjectStore.java:8043)
 ~[hive-metastore-2.1.1-cdh6.3.0.jar:2.1.1-cdh6.3.0]
        at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) 
~[?:1.8.0_322]
        at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) 
~[?:1.8.0_322]
        at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
 ~[?:1.8.0_322]
        at java.lang.reflect.Method.invoke(Method.java:498) ~[?:1.8.0_322]
        at 
org.apache.hadoop.hive.metastore.RawStoreProxy.invoke(RawStoreProxy.java:101) 
~[hive-metastore-2.1.1-cdh6.3.0.jar:2.1.1-cdh6.3.0]
        at com.sun.proxy.$Proxy43.verifySchema(Unknown Source) ~[?:?]
        at 
org.apache.hadoop.hive.metastore.HiveMetaStore$HMSHandler.getMSForConf(HiveMetaStore.java:655)
 ~[hive-metastore-2.1.1-cdh6.3.0.jar:2.1.1-cdh6.3.0]
        at 
org.apache.hadoop.hive.metastore.HiveMetaStore$HMSHandler.getMS(HiveMetaStore.java:648)
 ~[hive-metastore-2.1.1-cdh6.3.0.jar:2.1.1-cdh6.3.0]
        at 
org.apache.hadoop.hive.metastore.HiveMetaStore$HMSHandler.createDefaultDB(HiveMetaStore.java:717)
 ~[hive-metastore-2.1.1-cdh6.3.0.jar:2.1.1-cdh6.3.0]
        at 
org.apache.hadoop.hive.metastore.HiveMetaStore$HMSHandler.init(HiveMetaStore.java:420)
 ~[hive-metastore-2.1.1-cdh6.3.0.jar:2.1.1-cdh6.3.0]
        at 
org.apache.hadoop.hive.metastore.RetryingHMSHandler.<init>(RetryingHMSHandler.java:78)
 ~[hive-metastore-2.1.1-cdh6.3.0.jar:2.1.1-cdh6.3.0]
        at 
org.apache.hadoop.hive.metastore.RetryingHMSHandler.getProxy(RetryingHMSHandler.java:84)
 ~[hive-metastore-2.1.1-cdh6.3.0.jar:2.1.1-cdh6.3.0]
        at 
org.apache.hadoop.hive.metastore.HiveMetaStore.newRetryingHMSHandler(HiveMetaStore.java:7036)
 ~[hive-metastore-2.1.1-cdh6.3.0.jar:2.1.1-cdh6.3.0]
        at 
org.apache.hadoop.hive.metastore.HiveMetaStoreClient.<init>(HiveMetaStoreClient.java:254)
 ~[hive-metastore-2.1.1-cdh6.3.0.jar:2.1.1-cdh6.3.0]
        at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native 
Method) ~[?:1.8.0_322]
        at 
sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62)
 ~[?:1.8.0_322]
        at 
sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
 ~[?:1.8.0_322]
        at java.lang.reflect.Constructor.newInstance(Constructor.java:423) 
~[?:1.8.0_322]
        at 
org.apache.hadoop.hive.metastore.MetaStoreUtils.newInstance(MetaStoreUtils.java:1773)
 ~[hive-metastore-2.1.1-cdh6.3.0.jar:2.1.1-cdh6.3.0]
        at 
org.apache.hadoop.hive.metastore.RetryingMetaStoreClient.<init>(RetryingMetaStoreClient.java:80)
 ~[hive-metastore-2.1.1-cdh6.3.0.jar:2.1.1-cdh6.3.0]
        at 
org.apache.hadoop.hive.metastore.RetryingMetaStoreClient.getProxy(RetryingMetaStoreClient.java:130)
 ~[hive-metastore-2.1.1-cdh6.3.0.jar:2.1.1-cdh6.3.0]
        at 
org.apache.hadoop.hive.metastore.RetryingMetaStoreClient.getProxy(RetryingMetaStoreClient.java:101)
 ~[hive-metastore-2.1.1-cdh6.3.0.jar:2.1.1-cdh6.3.0]
        at 
org.apache.hadoop.hive.metastore.RetryingMetaStoreClient.getProxy(RetryingMetaStoreClient.java:94)
 ~[hive-metastore-2.1.1-cdh6.3.0.jar:2.1.1-cdh6.3.0]
        at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) 
~[?:1.8.0_322]
        at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) 
~[?:1.8.0_322]
        at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
 ~[?:1.8.0_322]
        at java.lang.reflect.Method.invoke(Method.java:498) ~[?:1.8.0_322]
        at 
org.apache.iceberg.common.DynMethods$UnboundMethod.invokeChecked(DynMethods.java:65)
 ~[iceberg-spark-runtime-3.1_2.12-0.13.1.jar:?]
        at 
org.apache.iceberg.common.DynMethods$UnboundMethod.invoke(DynMethods.java:77) 
~[iceberg-spark-runtime-3.1_2.12-0.13.1.jar:?]
        at 
org.apache.iceberg.common.DynMethods$StaticMethod.invoke(DynMethods.java:196) 
~[iceberg-spark-runtime-3.1_2.12-0.13.1.jar:?]
        at 
org.apache.iceberg.hive.HiveClientPool.newClient(HiveClientPool.java:55) 
~[iceberg-spark-runtime-3.1_2.12-0.13.1.jar:?]
        ... 50 more
   14:16:10.565 [SparkThriftBinaryFrontendServiceHandler-Pool: Thread-85] ERROR 
org.apache.kyuubi.engine.spark.SparkThriftBinaryFrontendService - Error closing 
operation: 
   org.apache.kyuubi.KyuubiSQLException: Invalid OperationHandle 
[type=EXECUTE_STATEMENT, identifier: d2569705-5367-4533-851e-19ee9f920dfa]
        at 
org.apache.kyuubi.KyuubiSQLException$.apply(KyuubiSQLException.scala:69) 
~[kyuubi-spark-sql-engine_2.12-1.4.1-incubating.jar:1.4.1-incubating]
        at 
org.apache.kyuubi.operation.OperationManager.getOperation(OperationManager.scala:81)
 ~[kyuubi-spark-sql-engine_2.12-1.4.1-incubating.jar:1.4.1-incubating]
        at 
org.apache.kyuubi.service.AbstractBackendService.closeOperation(AbstractBackendService.scala:148)
 ~[kyuubi-spark-sql-engine_2.12-1.4.1-incubating.jar:1.4.1-incubating]
        at 
org.apache.kyuubi.service.ThriftBinaryFrontendService.CloseOperation(ThriftBinaryFrontendService.scala:450)
 [kyuubi-spark-sql-engine_2.12-1.4.1-incubating.jar:1.4.1-incubating]
        at 
org.apache.hive.service.rpc.thrift.TCLIService$Processor$CloseOperation.getResult(TCLIService.java:1797)
 [hive-service-rpc-3.1.2.jar:3.1.2]
        at 
org.apache.hive.service.rpc.thrift.TCLIService$Processor$CloseOperation.getResult(TCLIService.java:1782)
 [hive-service-rpc-3.1.2.jar:3.1.2]
        at org.apache.thrift.ProcessFunction.process(ProcessFunction.java:38) 
[libthrift-0.12.0.jar:0.12.0]
        at org.apache.thrift.TBaseProcessor.process(TBaseProcessor.java:39) 
[libthrift-0.12.0.jar:0.12.0]
        at 
org.apache.kyuubi.service.authentication.TSetIpAddressProcessor.process(TSetIpAddressProcessor.scala:36)
 [kyuubi-spark-sql-engine_2.12-1.4.1-incubating.jar:1.4.1-incubating]
        at 
org.apache.thrift.server.TThreadPoolServer$WorkerProcess.run(TThreadPoolServer.java:310)
 [libthrift-0.12.0.jar:0.12.0]
        at 
java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149) 
[?:1.8.0_322]
        at 
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) 
[?:1.8.0_322]
        at java.lang.Thread.run(Thread.java:750) [?:1.8.0_322]
   14:16:10.575 [SparkThriftBinaryFrontendServiceHandler-Pool: Thread-85] ERROR org.apache.kyuubi.engine.spark.SparkThriftBinaryFrontendService - Error closing operation:
   org.apache.kyuubi.KyuubiSQLException: Invalid OperationHandle [type=EXECUTE_STATEMENT, identifier: 6f202cfd-ba4b-4b74-add8-39724d4a4041]
        at org.apache.kyuubi.KyuubiSQLException$.apply(KyuubiSQLException.scala:69) ~[kyuubi-spark-sql-engine_2.12-1.4.1-incubating.jar:1.4.1-incubating]
        at org.apache.kyuubi.operation.OperationManager.getOperation(OperationManager.scala:81) ~[kyuubi-spark-sql-engine_2.12-1.4.1-incubating.jar:1.4.1-incubating]
        at org.apache.kyuubi.service.AbstractBackendService.closeOperation(AbstractBackendService.scala:148) ~[kyuubi-spark-sql-engine_2.12-1.4.1-incubating.jar:1.4.1-incubating]
        at org.apache.kyuubi.service.ThriftBinaryFrontendService.CloseOperation(ThriftBinaryFrontendService.scala:450) [kyuubi-spark-sql-engine_2.12-1.4.1-incubating.jar:1.4.1-incubating]
        at org.apache.hive.service.rpc.thrift.TCLIService$Processor$CloseOperation.getResult(TCLIService.java:1797) [hive-service-rpc-3.1.2.jar:3.1.2]
        at org.apache.hive.service.rpc.thrift.TCLIService$Processor$CloseOperation.getResult(TCLIService.java:1782) [hive-service-rpc-3.1.2.jar:3.1.2]
        at org.apache.thrift.ProcessFunction.process(ProcessFunction.java:38) [libthrift-0.12.0.jar:0.12.0]
        at org.apache.thrift.TBaseProcessor.process(TBaseProcessor.java:39) [libthrift-0.12.0.jar:0.12.0]
        at org.apache.kyuubi.service.authentication.TSetIpAddressProcessor.process(TSetIpAddressProcessor.scala:36) [kyuubi-spark-sql-engine_2.12-1.4.1-incubating.jar:1.4.1-incubating]
        at org.apache.thrift.server.TThreadPoolServer$WorkerProcess.run(TThreadPoolServer.java:310) [libthrift-0.12.0.jar:0.12.0]
        at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149) [?:1.8.0_322]
        at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) [?:1.8.0_322]
        at java.lang.Thread.run(Thread.java:750) [?:1.8.0_322]
   14:16:10.577 [SparkThriftBinaryFrontendServiceHandler-Pool: Thread-85] ERROR org.apache.kyuubi.engine.spark.SparkThriftBinaryFrontendService - Error closing operation:
   org.apache.kyuubi.KyuubiSQLException: Invalid OperationHandle [type=EXECUTE_STATEMENT, identifier: b35a382f-82a7-4cea-9187-b675cfebcae7]
        at org.apache.kyuubi.KyuubiSQLException$.apply(KyuubiSQLException.scala:69) ~[kyuubi-spark-sql-engine_2.12-1.4.1-incubating.jar:1.4.1-incubating]
        at org.apache.kyuubi.operation.OperationManager.getOperation(OperationManager.scala:81) ~[kyuubi-spark-sql-engine_2.12-1.4.1-incubating.jar:1.4.1-incubating]
        at org.apache.kyuubi.service.AbstractBackendService.closeOperation(AbstractBackendService.scala:148) ~[kyuubi-spark-sql-engine_2.12-1.4.1-incubating.jar:1.4.1-incubating]
        at org.apache.kyuubi.service.ThriftBinaryFrontendService.CloseOperation(ThriftBinaryFrontendService.scala:450) [kyuubi-spark-sql-engine_2.12-1.4.1-incubating.jar:1.4.1-incubating]
        at org.apache.hive.service.rpc.thrift.TCLIService$Processor$CloseOperation.getResult(TCLIService.java:1797) [hive-service-rpc-3.1.2.jar:3.1.2]
        at org.apache.hive.service.rpc.thrift.TCLIService$Processor$CloseOperation.getResult(TCLIService.java:1782) [hive-service-rpc-3.1.2.jar:3.1.2]
        at org.apache.thrift.ProcessFunction.process(ProcessFunction.java:38) [libthrift-0.12.0.jar:0.12.0]
        at org.apache.thrift.TBaseProcessor.process(TBaseProcessor.java:39) [libthrift-0.12.0.jar:0.12.0]
        at org.apache.kyuubi.service.authentication.TSetIpAddressProcessor.process(TSetIpAddressProcessor.scala:36) [kyuubi-spark-sql-engine_2.12-1.4.1-incubating.jar:1.4.1-incubating]
        at org.apache.thrift.server.TThreadPoolServer$WorkerProcess.run(TThreadPoolServer.java:310) [libthrift-0.12.0.jar:0.12.0]
        at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149) [?:1.8.0_322]
        at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) [?:1.8.0_322]
        at java.lang.Thread.run(Thread.java:750) [?:1.8.0_322]
   14:18:10.608 [SparkThriftBinaryFrontendServiceHandler-Pool: Thread-85] ERROR org.apache.kyuubi.engine.spark.SparkThriftBinaryFrontendService - Error closing session:
   org.apache.kyuubi.KyuubiSQLException: Invalid SessionHandle [76dafc6d-0bbd-4d80-a49a-76ffff5c9ce9]
        at org.apache.kyuubi.KyuubiSQLException$.apply(KyuubiSQLException.scala:69) ~[kyuubi-spark-sql-engine_2.12-1.4.1-incubating.jar:1.4.1-incubating]
        at org.apache.kyuubi.session.SessionManager.closeSession(SessionManager.scala:90) ~[kyuubi-spark-sql-engine_2.12-1.4.1-incubating.jar:1.4.1-incubating]
        at org.apache.kyuubi.engine.spark.session.SparkSQLSessionManager.closeSession(SparkSQLSessionManager.scala:99) ~[kyuubi-spark-sql-engine_2.12-1.4.1-incubating.jar:1.4.1-incubating]
        at org.apache.kyuubi.service.AbstractBackendService.closeSession(AbstractBackendService.scala:49) ~[kyuubi-spark-sql-engine_2.12-1.4.1-incubating.jar:1.4.1-incubating]
        at org.apache.kyuubi.service.ThriftBinaryFrontendService.CloseSession(ThriftBinaryFrontendService.scala:221) [kyuubi-spark-sql-engine_2.12-1.4.1-incubating.jar:1.4.1-incubating]
        at org.apache.hive.service.rpc.thrift.TCLIService$Processor$CloseSession.getResult(TCLIService.java:1517) [hive-service-rpc-3.1.2.jar:3.1.2]
        at org.apache.hive.service.rpc.thrift.TCLIService$Processor$CloseSession.getResult(TCLIService.java:1502) [hive-service-rpc-3.1.2.jar:3.1.2]
        at org.apache.thrift.ProcessFunction.process(ProcessFunction.java:38) [libthrift-0.12.0.jar:0.12.0]
        at org.apache.thrift.TBaseProcessor.process(TBaseProcessor.java:39) [libthrift-0.12.0.jar:0.12.0]
        at org.apache.kyuubi.service.authentication.TSetIpAddressProcessor.process(TSetIpAddressProcessor.scala:36) [kyuubi-spark-sql-engine_2.12-1.4.1-incubating.jar:1.4.1-incubating]
        at org.apache.thrift.server.TThreadPoolServer$WorkerProcess.run(TThreadPoolServer.java:310) [libthrift-0.12.0.jar:0.12.0]
        at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149) [?:1.8.0_322]
        at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) [?:1.8.0_322]
        at java.lang.Thread.run(Thread.java:750) [?:1.8.0_322]
   ```
   
   
   ### Kyuubi Engine Log Output
   
   _No response_
   
   ### Kyuubi Server Configurations
   
   _No response_
   
   ### Kyuubi Engine Configurations
   
   _No response_
   
   ### Additional context
   
   _No response_
   
   ### Are you willing to submit PR?
   
   - [ ] Yes I am willing to submit a PR!

