Hi,

When using spark.sql() to run ALTER TABLE operations, I found that Spark
changes the table's owner property to the execution user. I dug into the
source code and found that in HiveClientImpl, the alterTable function sets
the table owner to the current execution user. Some other operations, such
as alter partition and getPartitionOption, do the same. Can someone explain
why this is done? Why not behave like Hive, which does not change the owner
for these kinds of operations?
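
For reference, here is a minimal sketch of how I observed this (the table
name `t` and the property being set are just examples; any metadata-only
ALTER TABLE seems to trigger it):

import org.apache.spark.sql.SparkSession

object AlterTableOwnerRepro {
  def main(args: Array[String]): Unit = {
    val spark = SparkSession.builder()
      .appName("alter-table-owner-repro")
      .enableHiveSupport()
      .getOrCreate()

    // A metadata-only ALTER TABLE that goes through HiveClientImpl.alterTable.
    spark.sql("ALTER TABLE t SET TBLPROPERTIES ('comment' = 'testing owner change')")

    // Inspect the table metadata; the Owner field now shows the execution user
    // instead of the original owner.
    spark.sql("DESCRIBE TABLE EXTENDED t").show(truncate = false)

    spark.stop()
  }
}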
