Github user gatorsmile commented on the issue:

    https://github.com/apache/spark/pull/18849
  
    > If that is not the point of "Hive compatibility", then there is no point 
in creating data source tables in a Hive compatible way to start with. Just 
always create them as "not Hive compatible" because then Spark is free to do 
whatever it wants with them.
    
    In most usage scenarios, Spark native file source tables are not 
queried through Hive. Thus, breaking or maintaining Hive compatibility does 
not affect those users, and their DDL commands on the data source tables 
should not be blocked even if the Hive metastore complains. 
    
    For Hive users who do want to query Spark native file source tables, we 
can introduce a table property like `DATASOURCE_HIVE_COMPATIBLE` to ensure 
that Hive compatibility is not broken over the whole life cycle of these 
tables. This property would have to be set manually by users, instead of 
being added by Spark SQL.
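
    A minimal sketch of what such an opt-in might look like in DDL. Note 
that `DATASOURCE_HIVE_COMPATIBLE` is the property name proposed in this 
comment, not an existing Spark SQL property, and the table name and schema 
are made up for illustration:

    ```sql
    -- Hypothetical: the user opts in to Hive compatibility at creation
    -- time, so Spark would preserve it across later DDL on the table.
    CREATE TABLE sales (id BIGINT, amount DOUBLE)
    USING parquet
    TBLPROPERTIES ('DATASOURCE_HIVE_COMPATIBLE' = 'true');
    ```

    With this in place, Spark could reject (rather than silently degrade) 
any DDL on the table that the Hive metastore cannot represent.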

