Hi,

Kudos on Spark 1.3.x - it's a great release, and I'm loving DataFrames! One thing I noticed after upgrading is that if I use the generic save DataFrame function with SaveMode.Overwrite and a "parquet" source, it produces a much larger output parquet file.
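Roughly what the job does (paths and names below are just placeholders; I'm running this from the spark-shell, so sc is the usual SparkContext):

import org.apache.spark.sql.{SQLContext, SaveMode}

val sqlContext = new SQLContext(sc)

// Read the source JSON (~500GB) and write it out as parquet.
val df = sqlContext.jsonFile("hdfs:///data/source_json")
df.save("hdfs:///data/output_parquet", "parquet", SaveMode.Overwrite)

// Re-running the same save over the existing output is where the size
// roughly triples; running it once more brings it back down again.
df.save("hdfs:///data/output_parquet", "parquet", SaveMode.Overwrite)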
Source JSON data: ~500GB
Originally saved parquet: ~30GB in 1000 partitions
Overwritten parquet: ~90GB in 1000 partitions

Now the really strange thing is that if I overwrite that parquet again, it is back to ~30GB across the same 1000 partitions. How can I get consistent behaviour here? The overwrite mode is very useful for my use case.

Thanks,
Borislav