nastra commented on code in PR #12892:
URL: https://github.com/apache/iceberg/pull/12892#discussion_r2189328221
##########
spark/v3.4/spark/src/test/java/org/apache/iceberg/spark/SparkCatalogConfig.java:
##########
```diff
@@ -29,22 +29,27 @@ public enum SparkCatalogConfig {
       SparkCatalog.class.getName(),
       ImmutableMap.of(
           "type", "hive",
-          "default-namespace", "default")),
+          "default-namespace", "default",
+          "unique-table-location", "true")),
   HADOOP(
       "testhadoop",
       SparkCatalog.class.getName(),
       ImmutableMap.of("type", "hadoop", "cache-enabled", "false")),
   REST(
       "testrest",
       SparkCatalog.class.getName(),
-      ImmutableMap.of("type", "rest", "cache-enabled", "false")),
+      ImmutableMap.of(
+          "type", "rest",
+          "cache-enabled", "false",
+          "unique-table-location", "true")),
   SPARK(
       "spark_catalog",
       SparkSessionCatalog.class.getName(),
       ImmutableMap.of(
           "type", "hive",
           "default-namespace", "default",
           "parquet-enabled", "true",
+          "unique-table-location", "true",
```

Review Comment:
   What I mean is that you should add a separate catalog configuration, such as:
   ```
   SPARK_WITH_UNIQUE_LOCATION(
       "spark_with_unique_location",
       SparkCatalog.class.getName(),
       ImmutableMap.of(
           "type", "rest",
           "cache-enabled", "false", // Spark will delete tables using v1, leaving the cache out of sync
           "unique-table-location", "true"))
   ```
   Then you would execute the test only with this catalog. It's probably best to create a separate test class anyway.
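A minimal sketch of what the suggested separate test class could look like. This is illustrative only: the class name `TestCreateTableUniqueLocation` and the test body are hypothetical, and the base class, `@Parameters`/`@TestTemplate` annotations, and the `catalogName()`/`implementation()`/`properties()` accessors assume Iceberg's parameterized `CatalogTestBase` test infrastructure; the actual fixtures in the `spark/v3.4` module may differ.
```java
package org.apache.iceberg.spark.sql;

import org.apache.iceberg.Parameters;
import org.apache.iceberg.spark.CatalogTestBase;
import org.apache.iceberg.spark.SparkCatalogConfig;
import org.junit.jupiter.api.TestTemplate;

// Hypothetical test class name; assumes the SPARK_WITH_UNIQUE_LOCATION
// enum entry from the review comment has been added to SparkCatalogConfig.
public class TestCreateTableUniqueLocation extends CatalogTestBase {

  // Restrict the parameterized run to the new catalog configuration only,
  // so unique-table-location behavior does not leak into the shared
  // HIVE/HADOOP/REST/SPARK suites.
  @Parameters(name = "catalogName = {0}, implementation = {1}, config = {2}")
  protected static Object[][] parameters() {
    return new Object[][] {
      {
        SparkCatalogConfig.SPARK_WITH_UNIQUE_LOCATION.catalogName(),
        SparkCatalogConfig.SPARK_WITH_UNIQUE_LOCATION.implementation(),
        SparkCatalogConfig.SPARK_WITH_UNIQUE_LOCATION.properties()
      }
    };
  }

  @TestTemplate
  public void testCreateTableUsesUniqueLocation() {
    sql("CREATE TABLE %s (id BIGINT) USING iceberg", tableName);
    // ... assert here that the table's location carries a unique suffix ...
    sql("DROP TABLE IF EXISTS %s", tableName);
  }
}
```
Limiting `parameters()` to the single new enum entry is the point of the reviewer's suggestion: the existing catalogs keep their current behavior, while the unique-location semantics are exercised in an isolated class.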