nerstak commented on issue #10227:
URL: https://github.com/apache/iceberg/issues/10227#issuecomment-2457944896
Hello!
With the following use case, changing a catalog's configuration at runtime does not seem to be feasible. Is there an alternative?
```scala
scala> import org.apache.spark.sql.SparkSession

scala> val sc = SparkSession.builder()
         .config("spark.sql.extensions",
           "org.apache.iceberg.spark.extensions.IcebergSparkSessionExtensions")
         .config("spark.sql.catalog.myCatalog",
           "org.apache.iceberg.spark.SparkCatalog")
         .config("spark.sql.catalog.myCatalog.warehouse",
           "s3a://path/to/warehouse")
         .config("spark.sql.defaultCatalog", "myCatalog")
         .config("spark.sql.catalog.myCatalog.type", "hive")
         .config("spark.sql.catalog.myCatalog.uri", "thrift://1.2.3.4:9083")
         .getOrCreate()

scala> sc.conf.isModifiable("spark.sql.catalog.myCatalog.uri")
res2: Boolean = false

scala> sc.sql("show schemas")
// Whatever results

scala> sc.conf.set("spark.sql.catalog.myCatalog.uri", "thrift://5.6.7.8:9083")
// Changes from here are not applied to the already initialized catalog
// "myCatalog": it will still use the previous conf.
```
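For the record, the only workarounds I can think of are sketched below. Both rely on my assumption that Spark instantiates a catalog lazily, on its first reference, and the catalog name `myCatalogV2` is just a placeholder:
```scala
// Sketch of a possible workaround, assuming Spark loads a catalog lazily
// on its first reference: register the new URI under a catalog name that
// has never been resolved in this session. "myCatalogV2" is a placeholder.
sc.conf.set("spark.sql.catalog.myCatalogV2",
  "org.apache.iceberg.spark.SparkCatalog")
sc.conf.set("spark.sql.catalog.myCatalogV2.type", "hive")
sc.conf.set("spark.sql.catalog.myCatalogV2.uri", "thrift://5.6.7.8:9083")
sc.conf.set("spark.sql.catalog.myCatalogV2.warehouse", "s3a://path/to/warehouse")

// The first reference should instantiate the catalog from the conf above.
sc.sql("SHOW NAMESPACES IN myCatalogV2").show()

// Alternative sketch: an isolated session rebuilds its session state,
// so "myCatalog" would be re-created there from the conf that session sees.
val sc2 = sc.newSession()
sc2.conf.set("spark.sql.catalog.myCatalog.uri", "thrift://5.6.7.8:9083")
sc2.sql("SHOW NAMESPACES").show()
```
Neither feels ideal, since the stale catalog instance stays around in the original session, so a supported way to re-initialize a catalog would still be very useful.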
The `REFRESH TABLE` statement does not seem to work either on my side (Spark 3.5, Iceberg 1.5).
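This is roughly what I tried, where `db.tbl` stands in for an actual table identifier:
```scala
// Attempted refresh; "db.tbl" is a placeholder table identifier.
sc.sql("REFRESH TABLE myCatalog.db.tbl")
```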
Regards.