We can set a path; refer to the unit tests. For example:
df.saveAsTable("savedJsonTable", "org.apache.spark.sql.json", "append",
               path=tmpPath)
https://github.com/apache/spark/blob/master/python/pyspark/sql/tests.py
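Putting the test-suite call into context, a minimal sketch might look like the following. This assumes a local SparkContext and a HiveContext (the Spark 1.3-era API in use in this thread); the table name "savedJsonTable", the sample JSON records, and the /tmp path are illustrative, not taken from the thread.

```python
from pyspark import SparkContext
from pyspark.sql import HiveContext

# Local context for illustration; on a real cluster this would point at it.
sc = SparkContext("local[2]", "saveAsTable-example")
sqlCtx = HiveContext(sc)

# A tiny illustrative DataFrame built from JSON strings.
df = sqlCtx.jsonRDD(sc.parallelize(['{"a": 1}', '{"a": 2}']))

tmpPath = "/tmp/test"  # hypothetical path for illustration

# The expectation under discussion: because an explicit path is given,
# this should create an *external* Hive table whose data files live
# at tmpPath, rather than under the Hive warehouse directory.
df.saveAsTable("savedJsonTable", "org.apache.spark.sql.json", "append",
               path=tmpPath)
```

Whether the resulting table is actually registered as external depends on the metastore configuration, which is the point at issue in this thread.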
Investigating some more, I found that the table is being created at the
specified
Hi,
The behaviour is the same for me in Scala and Python, so I'm posting the
Python version here. When I use DataFrame.saveAsTable with the path option, I expect an
external Hive table to be created at the specified path. Specifically, when
I call:
df.saveAsTable(..., path="/tmp/test")
I expect an external
Another follow-up: saveAsTable works as expected when running on a Hadoop
cluster with Hive installed. It's only locally that I'm getting this
strange behaviour. Any ideas why this is happening?
Kind Regards.
Tom
On 27 March 2015 at 11:29, Tom Walwyn &lt;twal...@gmail.com&gt; wrote:
We can set a path,