rdblue edited a comment on pull request #1505:
URL: https://github.com/apache/iceberg/pull/1505#issuecomment-700948856


   Spark is using a v1 command to drop the table: 
`org.apache.spark.sql.execution.command.DropTableCommand`. That is unexpected 
because the test should be using an Iceberg implementation of Spark's 
`TableCatalog` for all of these operations. Can you find out what catalog was 
being used? My guess is that it was the built-in 
[`spark_catalog`](https://github.com/apache/iceberg/blob/1772f4f27b8a12d3e89a7f65b8b600b717e1f09d/spark3/src/test/java/org/apache/iceberg/spark/SparkCatalogTestBase.java#L66-L71)
 and that somehow the session catalog wrapper that we injected did not 
correctly detect that this table is Iceberg and not Hive.
   
   So the question is probably why 
[`loadTable`](https://github.com/apache/iceberg/blob/1772f4f27b8a12d3e89a7f65b8b600b717e1f09d/spark3/src/main/java/org/apache/iceberg/spark/SparkSessionCatalog.java#L116-L122)
 is catching `NoSuchTableException` and falling back to returning the table through Hive.
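
   To illustrate the suspected failure mode, here is a minimal sketch of the try-Iceberg-then-fall-back-to-Hive delegation pattern. The `Table`, `TableCatalog`, and `SessionCatalogWrapper` types below are hypothetical stand-ins for Spark's catalog interfaces, not the real Spark or Iceberg classes:

```java
// Hypothetical stand-ins for Spark's catalog API, for illustration only.
class NoSuchTableException extends Exception {}

interface Table {
    String format();
}

interface TableCatalog {
    Table loadTable(String ident) throws NoSuchTableException;
}

// Sketch of the delegation pattern: try the Iceberg catalog first, and
// only fall back to the wrapped session (Hive) catalog when the table
// is not found in the Iceberg catalog.
class SessionCatalogWrapper implements TableCatalog {
    private final TableCatalog icebergCatalog;
    private final TableCatalog sessionCatalog;

    SessionCatalogWrapper(TableCatalog iceberg, TableCatalog session) {
        this.icebergCatalog = iceberg;
        this.sessionCatalog = session;
    }

    @Override
    public Table loadTable(String ident) throws NoSuchTableException {
        try {
            return icebergCatalog.loadTable(ident);
        } catch (NoSuchTableException e) {
            // If this branch is taken for a table that *is* Iceberg,
            // the table is returned through Hive and Spark resolves it
            // with v1 commands like DropTableCommand -- the suspected bug.
            return sessionCatalog.loadTable(ident);
        }
    }
}
```

   If the Iceberg catalog throws `NoSuchTableException` for a table it should own, the `catch` block silently hands the lookup to Hive, which would explain the v1 `DropTableCommand` showing up in the test.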


----------------------------------------------------------------
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
[email protected]