kbendick commented on pull request #3162:
URL: https://github.com/apache/iceberg/pull/3162#issuecomment-924270962


   This brings up a larger issue (outside the scope of this PR) that @kainoa21 recently brought up on Slack and elsewhere, and that we also hit when using `GlueCatalog` with Flink: custom catalog-impls don't necessarily fall neatly into the `Hadoop` or `Hive` mindset.
   
   In the case he brought up, he wanted to use `GlueCatalog` in an AWS managed environment without having Hadoop dependencies on the classpath. The only reason he needed them there was to satisfy references from code paths he never actually exercised.
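
   For concreteness, here's a minimal sketch of the setup he was after. This isn't code from this PR; it assumes the `CatalogUtil.loadCatalog` signature and `CatalogProperties` keys as they exist in recent Iceberg releases, which may differ by version:

```java
import java.util.HashMap;
import java.util.Map;

import org.apache.iceberg.CatalogProperties;
import org.apache.iceberg.CatalogUtil;
import org.apache.iceberg.catalog.Catalog;

public class GlueWithoutHadoop {
  public static void main(String[] args) {
    Map<String, String> props = new HashMap<>();
    props.put(CatalogProperties.WAREHOUSE_LOCATION, "s3://my-bucket/warehouse");
    // Swap in S3FileIO so data and metadata files are read and written
    // through the AWS SDK rather than through HadoopFileIO.
    props.put(CatalogProperties.FILE_IO_IMPL, "org.apache.iceberg.aws.s3.S3FileIO");

    // The last parameter is a Hadoop Configuration. GlueCatalog + S3FileIO
    // never read it, but the parameter type alone is enough to pull
    // hadoop-common onto the classpath -- the friction described above.
    Catalog glue = CatalogUtil.loadCatalog(
        "org.apache.iceberg.aws.glue.GlueCatalog", "glue", props, null);
  }
}
```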
   
   He felt the abstraction was a bit leaky in places, and suggested there should perhaps be a `GlueCatalogFileIO`, or possibly even multiple `GlueCatalogFileIO` types, depending on whether one wants to use Hive or not, etc.
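
   To make that suggestion concrete, a hypothetical `GlueCatalogFileIO` (this class does not exist in Iceberg; the name and the delegation to `S3FileIO` are purely illustrative) might be little more than a `FileIO` implementation with no Hadoop ties:

```java
import java.util.Map;

import org.apache.iceberg.aws.s3.S3FileIO;
import org.apache.iceberg.io.FileIO;
import org.apache.iceberg.io.InputFile;
import org.apache.iceberg.io.OutputFile;

// Hypothetical sketch -- not part of Iceberg. One possible shape of the
// suggested GlueCatalogFileIO: a FileIO that delegates to S3FileIO and
// therefore carries no Hadoop dependency at all.
public class GlueCatalogFileIO implements FileIO {
  private final S3FileIO delegate = new S3FileIO();

  @Override
  public void initialize(Map<String, String> properties) {
    delegate.initialize(properties);
  }

  @Override
  public InputFile newInputFile(String path) {
    return delegate.newInputFile(path);
  }

  @Override
  public OutputFile newOutputFile(String path) {
    return delegate.newOutputFile(path);
  }

  @Override
  public void deleteFile(String path) {
    delegate.deleteFile(path);
  }
}
```

   A Hive-flavored variant could delegate to `HadoopFileIO` instead, which is presumably what "multiple `GlueCatalogFileIO` types" would look like in practice.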
   
   As this seems to be an offshoot of the same problem, tagging him and linking 
the issue for future reference: https://github.com/apache/iceberg/issues/3044
   
   Again, this is outside the scope of this PR and I don't mean to block it, but it might be something to consider during V3 planning, or as part of a broader effort, as we do seem to be hitting recurring issues with what I'll call the `well known "custom" catalogs`. cc @rdblue (in case Jason doesn't come on GitHub as much) and @jackye1995, who was also working on this problem at the time.
   
   We can continue this conversation elsewhere; sorry to thread-jack / PR-jack the discussion!
   


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: [email protected]

For queries about this service, please contact Infrastructure at:
[email protected]


