dramaticlly commented on issue #11741:
URL: https://github.com/apache/iceberg/issues/11741#issuecomment-2533365254
I think we can use the following to reproduce the problem in a unit test in
`org.apache.iceberg.spark.sql.TestCreateTable`, since that class supports all of the
Hive/Spark/Hadoop/REST catalogs. I expect the REST and Hadoop catalogs to fail while
Hive and Spark pass:
```java
@TestTemplate
public void testCreateTable() {
  assumeThat(catalogName).isEqualTo(SparkCatalogConfig.REST.catalogName());

  assertThat(validationCatalog.tableExists(tableIdent))
      .as("Table should not already exist")
      .isFalse();

  sql("CREATE TABLE %s (id BIGINT NOT NULL, data STRING) USING iceberg", tableName);

  Table table = validationCatalog.loadTable(tableIdent);
  assertThat(table).as("Should load the new table").isNotNull();

  StructType expectedSchema =
      StructType.of(
          NestedField.required(1, "id", Types.LongType.get()),
          NestedField.optional(2, "data", Types.StringType.get()));
  assertThat(table.schema().asStruct())
      .as("Should have the expected schema")
      .isEqualTo(expectedSchema);
  assertThat(table.spec().fields()).as("Should not be partitioned").hasSize(0);
  assertThat(table.properties().get(TableProperties.DEFAULT_FILE_FORMAT))
      .as("Should not have the default format set")
      .isNull();

  spark.sessionState().catalogManager().setCurrentCatalog(catalogName);
  assertThat(spark.catalog().tableExists(tableIdent.toString())).isTrue(); // succeeds
  assertThat(spark.catalog().tableExists(tableIdent.namespace().toString(), tableIdent.name()))
      .isTrue(); // fails
}
```
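For context, the two assertions at the end exercise different overloads of Spark's `Catalog.tableExists`. A minimal sketch of the distinction, assuming an active Spark session with an Iceberg REST catalog registered under the name `rest` (the catalog, namespace, and table names here are placeholders, not values from the test suite):

```java
import org.apache.spark.sql.SparkSession;

public class TableExistsOverloadsSketch {
  public static void main(String[] args) {
    SparkSession spark = SparkSession.active();

    // Point name resolution at the non-session catalog, as the test does.
    spark.sessionState().catalogManager().setCurrentCatalog("rest");

    // Single-argument overload: the multi-part name is resolved as a whole,
    // taking the current catalog into account. This is the assertion that passes.
    boolean byQualifiedName = spark.catalog().tableExists("db.tbl");

    // Two-argument overload: database and table are passed separately.
    // This is the call that fails in the reproduction above with the REST catalog.
    boolean byDbAndName = spark.catalog().tableExists("db", "tbl");

    System.out.printf("qualified=%b, db+name=%b%n", byQualifiedName, byDbAndName);
  }
}
```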
I am wondering if anyone else has run into this when using a REST-based catalog?
CC @RussellSpitzer @flyrain