Michael, is there an example anywhere that demonstrates how this works with the schema changing over time?
Must the Hive tables be set up as external tables outside of saveAsTable? In my experience, in 1.4.1, writing to a table with SaveMode.Append fails if the schemas don't match.

Thanks,
Sim

From: Michael Armbrust <mich...@databricks.com>
Date: Monday, August 10, 2015 at 2:36 PM
To: Simeon Simeonov <s...@swoop.com>
Cc: user <user@spark.apache.org>
Subject: Re: Spark inserting into parquet files with different schema

Older versions of Spark (i.e. when it was still called SchemaRDD instead of DataFrame) did not support merging different parquet schemas. However, Spark 1.4+ should.

On Sat, Aug 8, 2015 at 8:58 PM, sim <s...@swoop.com> wrote:

Adam, did you find a solution for this?
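For reference, a minimal sketch of the parquet schema merging Michael describes, written against the Spark 1.4 API. The paths, column names, and the existing SparkContext `sc` below are illustrative assumptions, not from the thread:

    // Sketch only, assuming Spark 1.4+ and an existing SparkContext `sc`.
    import org.apache.spark.sql.SQLContext

    val sqlContext = new SQLContext(sc)
    import sqlContext.implicits._

    // First batch: initial schema (id, name), written to one partition dir.
    Seq((1, "alice"), (2, "bob")).toDF("id", "name")
      .write.parquet("data/events/day=1")

    // Second batch: the schema has evolved to add a `score` column.
    Seq((3, "carol", 9.5)).toDF("id", "name", "score")
      .write.parquet("data/events/day=2")

    // Reading the parent directory reconciles the two schemas; rows from
    // the first batch come back with a null `score`. Merging is on by
    // default in 1.4 and must be requested explicitly from 1.5 onward.
    val merged = sqlContext.read
      .option("mergeSchema", "true")
      .parquet("data/events")
    merged.printSchema()

Note that this merging applies when reading parquet files directly; saveAsTable with SaveMode.Append validates the incoming DataFrame against the schema stored in the Hive metastore, which would be consistent with the 1.4.1 failure Sim describes above.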