Hi, I have a JSON file that I can load with sqlContext.jsonFile into a table, but that table is not partitioned.
I would like to transform this table into a partitioned table, say on a field "date". What is the best approach? In Hive this is usually done by loading data into a dedicated partition directly, but I don't want to select the data out partition by partition and insert it with each partition field value. Is there a quicker way? And how would I do it in Spark SQL?

raymond
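One possible answer (a sketch, assuming Spark 1.4+, where `sqlContext.read.json` replaces the older `sqlContext.jsonFile`; the file and table names `events.json` and `events_partitioned` are illustrative): load the JSON into a DataFrame and write it back out with `DataFrameWriter.partitionBy`, which partitions dynamically on the column's values so you never have to insert each partition separately.

```scala
import org.apache.spark.SparkContext
import org.apache.spark.sql.SQLContext

val sc = new SparkContext("local[*]", "partition-example")
val sqlContext = new SQLContext(sc)

// Load the JSON file into an (unpartitioned) DataFrame.
// "events.json" is an illustrative path, not from the original post.
val df = sqlContext.read.json("events.json")

// Write it back partitioned by the "date" column. Spark creates one
// subdirectory per distinct date value (dynamic partitioning), so there
// is no need to select and insert each partition individually.
df.write
  .partitionBy("date")
  .format("parquet")
  .saveAsTable("events_partitioned")
```

The resulting table directory will contain subdirectories like `date=2015-01-01/`, and queries that filter on `date` can then prune partitions automatically.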