As long as all your data is being inserted by Spark, and hence uses the same
hash partitioner, what Fengdong mentioned should work.
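To make the "same hash partitioner" point concrete, here is a minimal plain-Python sketch (not Spark code) of the idea behind hash partitioning: a key is assigned to a partition by hashing it modulo the partition count, so as long as every writer uses the same function and partition count, rows with the same key always land together. The function name `hash_partition` and the sample keys are illustrative only.

```python
# Illustration of hash partitioning (plain Python, not Spark's actual
# HashPartitioner): identical keys always map to identical partitions
# as long as every writer uses the same hash function and count.

def hash_partition(key, num_partitions):
    """Assign a key to a partition: non-negative hash modulo count."""
    # Python's % already yields a non-negative result for positive moduli.
    return hash(key) % num_partitions

batch1 = [("date=2012", 1), ("date=2013", 2)]
batch2 = [("date=2012", 3), ("date=2013", 4)]  # appended later

# Both batches route equal keys to the same partition, so appended
# rows stay co-located with the rows inserted earlier.
for key, _ in batch1 + batch2:
    print(key, "->", hash_partition(key, 4))
```

If a second writer used a different partitioner (or a different partition count), equal keys could land in different partitions, which is exactly the situation the caveat above warns against.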

On Wed, Dec 2, 2015 at 9:32 AM, Fengdong Yu <fengdo...@everstring.com>
wrote:

> Hi
> you can try:
>
> if your table is under location "/test/table/" on HDFS
> and has partitions:
>
>  "/test/table/date=2012"
>  "/test/table/date=2013"
>
> df.write.mode(SaveMode.Append).partitionBy("date").save("/test/table")
>
>
>
> On Dec 2, 2015, at 10:50 AM, Isabelle Phan <nlip...@gmail.com> wrote:
>
> df.write.partitionBy("date").insertInto("my_table")
>
>
>


-- 
Regards,
Rishitesh Mishra,
SnappyData (http://www.snappydata.io/)

https://in.linkedin.com/in/rishiteshmishra
