I believe it depends on your Spark application. To write to Hive, use dataframe.write.saveAsTable.
To write to S3, use dataframe.write.parquet("s3://<bucket>/<path>").

Hope this helps.

Richard

> On Jun 16, 2016, at 9:54 AM, Natu Lauchande <nlaucha...@gmail.com> wrote:
>
> Does