pvary commented on pull request #1505:
URL: https://github.com/apache/iceberg/pull/1505#issuecomment-701442789


   > Spark is using a v1 command to drop the table: 
`org.apache.spark.sql.execution.command.DropTableCommand`. That is unexpected 
because the test should be using an Iceberg implementation of Spark's 
`TableCatalog` for all of these operations. Can you find out what catalog was 
being used? My guess is that it was the built-in 
[`spark_catalog`](https://github.com/apache/iceberg/blob/1772f4f27b8a12d3e89a7f65b8b600b717e1f09d/spark3/src/test/java/org/apache/iceberg/spark/SparkCatalogTestBase.java#L66-L71)
 and that somehow the session catalog wrapper that we injected did not 
correctly detect that this table is Iceberg and not Hive.
   > 
   > So the question is probably why is 
[`loadTable`](https://github.com/apache/iceberg/blob/1772f4f27b8a12d3e89a7f65b8b600b717e1f09d/spark3/src/main/java/org/apache/iceberg/spark/SparkSessionCatalog.java#L116-L122)
 catching `NoSuchTableException` and returning the table through Hive?
   
   I have not had much time to play around with this 😢, and I have not yet been 
able to pinpoint where the `DROP TABLE` command is hijacked.
   What I have confirmed:
   - It only happens with `spark_catalog`
   - It only affects `DROP TABLE` - I have not found problems with any other SQL 
command
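   To illustrate the fallback path being discussed, here is a minimal, self-contained model of the suspected behavior in `SparkSessionCatalog#loadTable`. The `Catalog` interface and `SessionCatalogWrapper` class below are hypothetical stand-ins, not the real Spark or Iceberg API: if the Iceberg catalog throws `NoSuchTableException` for a table that actually is Iceberg, the wrapper silently returns the table through the session (Hive) catalog, and Spark then routes the statement to the v1 `DropTableCommand`.

```java
// Toy model of the session-catalog fallback (stub interfaces, not Spark's API).
public class FallbackDemo {

  static class NoSuchTableException extends Exception {}

  // Hypothetical stand-in for Spark's catalog interface.
  interface Catalog {
    String loadTable(String name) throws NoSuchTableException;
  }

  // Mirrors the suspected SparkSessionCatalog behavior:
  // try the Iceberg catalog first, fall back to the session (Hive)
  // catalog when NoSuchTableException is thrown.
  static class SessionCatalogWrapper implements Catalog {
    private final Catalog iceberg;
    private final Catalog sessionCatalog;

    SessionCatalogWrapper(Catalog iceberg, Catalog sessionCatalog) {
      this.iceberg = iceberg;
      this.sessionCatalog = sessionCatalog;
    }

    @Override
    public String loadTable(String name) throws NoSuchTableException {
      try {
        return iceberg.loadTable(name);
      } catch (NoSuchTableException e) {
        // If the Iceberg catalog wrongly throws for an Iceberg table,
        // the Hive representation is returned instead, which would
        // explain Spark choosing the v1 DropTableCommand path.
        return sessionCatalog.loadTable(name);
      }
    }
  }

  public static void main(String[] args) throws Exception {
    // Iceberg catalog that fails to detect the table.
    Catalog brokenIceberg = name -> { throw new NoSuchTableException(); };
    // Session catalog that knows the table as a Hive table.
    Catalog hive = name -> "hive:" + name;

    Catalog wrapper = new SessionCatalogWrapper(brokenIceberg, hive);
    // The table comes back through Hive even though it is Iceberg.
    System.out.println(wrapper.loadTable("db.tbl"));
  }
}
```

   If this is indeed what happens, the interesting question is why the inner `loadTable` throws `NoSuchTableException` for an Iceberg table in the first place.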
   
   What I have done:
   - Removed the dependencies
   
   Since this is unrelated to the original change, we might want to file an 
issue for it.
   
   What do you think @rdblue?


----------------------------------------------------------------
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
[email protected]


