Github user dongjoon-hyun commented on the issue: https://github.com/apache/spark/pull/16944

Hi, @budde and @cloud-fan. I ran into the following situation on Apache master after this commit. Could you check this case? Previously, Apache Spark showed the correct result.

```scala
sql("CREATE TABLE t1(a string, b string) PARTITIONED BY (day string, hour string) STORED AS PARQUET").show
sql("INSERT INTO TABLE t1 PARTITION (day = '1', hour = '01' ) VALUES (100, 200)").show
sql("SELECT a, b FROM t1").show
+---+---+
|  a|  b|
+---+---+
|100|200|
+---+---+
```

```sql
hive> ALTER TABLE t1 ADD COLUMNS (dummy string);
```

```scala
sql("SELECT a, b FROM t1").show
org.apache.hadoop.hive.ql.metadata.HiveException: Unable to alter table. partition keys can not be changed.
...
+---+----+
|  a|   b|
+---+----+
|100|null|
+---+----+
```