The schema merging section of the Spark SQL documentation
<http://spark.apache.org/docs/latest/sql-programming-guide.html#schema-merging>
shows an example of schema evolution in a partitioned table.
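For reference, this is roughly the example from that section as I understand
it (I'm using the 1.4+ DataFrame API in spark-shell; the paths and column
names below are my own):

    import sqlContext.implicits._

    // Write one partition directory with columns (single, double)
    val df1 = sc.makeRDD(1 to 5).map(i => (i, i * 2)).toDF("single", "double")
    df1.write.parquet("data/test_table/key=1")

    // Write a second partition directory that adds a column and drops one
    val df2 = sc.makeRDD(6 to 10).map(i => (i, i * 3)).toDF("single", "triple")
    df2.write.parquet("data/test_table/key=2")

    // Reading the root path merges the two Parquet schemas
    // (merging is on by default in some versions; set it explicitly here)
    val df3 = sqlContext.read.option("mergeSchema", "true").parquet("data/test_table")
    df3.printSchema()  // single, double, triple, plus the partition column key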
Is this functionality only available when creating a Spark SQL table? 
Calling

    dataFrameWithEvolvedSchema.saveAsTable("my_table", SaveMode.Append)

fails with:

    java.lang.RuntimeException: Relation[ ... ]
    org.apache.spark.sql.parquet.ParquetRelation2@83a73a05 requires that the
    query in the SELECT clause of the INSERT INTO/OVERWRITE statement
    generates the same number of columns as its schema.
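To make the failure concrete, here is a minimal sketch of what I'm doing
(the table and column names are placeholders; "evolved" just means the new
batch carries one extra column):

    import org.apache.spark.sql.SaveMode
    import sqlContext.implicits._

    // Table originally created from a DataFrame with columns (a, b)
    val original = sc.parallelize(Seq((1, "x"))).toDF("a", "b")
    original.saveAsTable("my_table")

    // A later batch arrives with an extra column (a, b, c)
    val dataFrameWithEvolvedSchema =
      sc.parallelize(Seq((2, "y", 3.0))).toDF("a", "b", "c")

    // Throws the RuntimeException quoted above
    dataFrameWithEvolvedSchema.saveAsTable("my_table", SaveMode.Append)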
What is the Spark SQL idiom for appending data to a table while managing
schema evolution?
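The only workaround I can think of is to bypass saveAsTable entirely: append
the new batch as Parquet files under the table's root path and rely on schema
merging at read time, roughly like this (the path is a placeholder, and I'm
not sure this is the intended idiom):

    import org.apache.spark.sql.SaveMode

    // Append the evolved batch directly as Parquet files
    dataFrameWithEvolvedSchema.write
      .mode(SaveMode.Append)
      .parquet("/warehouse/my_table")

    // Re-read the whole directory with schema merging enabled
    val merged = sqlContext.read
      .option("mergeSchema", "true")
      .parquet("/warehouse/my_table")
    merged.registerTempTable("my_table_merged")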


