RussellSpitzer commented on PR #5327:
URL: https://github.com/apache/iceberg/pull/5327#issuecomment-1207496070

   @rdblue The issue with those solutions for our use case is that they only 
work if the current metadata is valid. If it isn't, we cannot drop the table via 
the Catalog API (it requires loading the metadata first), which has led me to 
constantly showing folks how to access the Hive metastore via the Spark external 
client wrapper. The same issue occurs if we want to make this a table operation: 
we can't perform an operation on a table that doesn't have a valid metadata.json.
   
   Our issue is that in Spark we can't drop the table or point it at new 
metadata without using the metastore API directly. Ideally, I think the 
Iceberg catalog implementation should provide an escape valve for these 
situations so users aren't required to manually connect to the underlying 
catalog system themselves.
   
   I am open to other solutions here, but I'm not very worried about folks 
overusing this API; it tends to only come up when the user is already in a 
really bad state.
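   For reference, the manual workaround described above looks roughly like 
this. This is only a sketch: it assumes an active SparkSession backed by a Hive 
metastore, and the database/table names are placeholders, not real identifiers 
from this issue.

```scala
// Sketch of the manual escape hatch: bypass the Iceberg catalog and drop
// the table entry through Spark's external catalog wrapper, which talks to
// the Hive metastore directly and never tries to read metadata.json.
// Assumes an active SparkSession with Hive support; "db" and "broken_table"
// are placeholder names.
import org.apache.spark.sql.SparkSession

val spark = SparkSession.builder()
  .enableHiveSupport()
  .getOrCreate()

spark.sharedState.externalCatalog.dropTable(
  db = "db",
  table = "broken_table",
  ignoreIfNotExists = true,
  purge = false // leave data files in place; only remove the metastore entry
)
```

   Because this goes straight to the metastore, it works even when 
metadata.json is missing or corrupt, which is exactly the state where 
`Catalog.dropTable` fails.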


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: [email protected]

For queries about this service, please contact Infrastructure at:
[email protected]

