[ https://issues.apache.org/jira/browse/SPARK-16946?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]
Hyukjin Kwon resolved SPARK-16946.
----------------------------------
    Resolution: Cannot Reproduce

I am resolving this per https://github.com/apache/spark/pull/14535#issuecomment-309930981, but I don't know which JIRA fixed it. Please correct the Resolution if anyone knows.

> saveAsTable[append] with different number of columns should throw Exception
> ---------------------------------------------------------------------------
>
>                 Key: SPARK-16946
>                 URL: https://issues.apache.org/jira/browse/SPARK-16946
>             Project: Spark
>          Issue Type: Bug
>          Components: SQL
>            Reporter: Huaxin Gao
>            Priority: Minor
>
> In HiveContext, if saveAsTable[append] is called with a different number of columns, Spark throws an exception, e.g.:
> {code}
> test("saveAsTable[append]: too many columns") {
>   withTable("saveAsTable_too_many_columns") {
>     Seq((1, 2)).toDF("i", "j").write.saveAsTable("saveAsTable_too_many_columns")
>     val e = intercept[AnalysisException] {
>       Seq((3, 4, 5)).toDF("i", "j", "k").write.mode("append").saveAsTable("saveAsTable_too_many_columns")
>     }
>     assert(e.getMessage.contains("doesn't match"))
>   }
> }
> {code}
> However, with SparkSession or SQLContext, the same code silently drops the extra column from the appended data, without any warning or exception. The table becomes:
> i  j
> 3  4
> 1  2
> We may want to follow the HiveContext behavior and throw an exception.
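For reference, a minimal sketch of the silent-drop behavior described above, as it could be reproduced in a Spark 2.x spark-shell session; the table name append_mismatch_demo is illustrative, not from the original report:

{code}
// Assumes a spark-shell with an active SparkSession named `spark`.
import spark.implicits._

// Create a two-column table.
Seq((1, 2)).toDF("i", "j").write.saveAsTable("append_mismatch_demo")

// Append a three-column DataFrame. Per the report, the extra column "k"
// is dropped silently here instead of raising an AnalysisException as
// HiveContext does.
Seq((3, 4, 5)).toDF("i", "j", "k").write.mode("append").saveAsTable("append_mismatch_demo")

spark.table("append_mismatch_demo").show()
// +---+---+
// |  i|  j|
// +---+---+
// |  3|  4|
// |  1|  2|
// +---+---+
{code}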