The context created by spark-shell is actually an instance of 
HiveContext. To get the same behavior programmatically in your driver, you 
need to make sure the context you create is a HiveContext, not a plain SQLContext.

https://spark.apache.org/docs/latest/sql-programming-guide.html#hive-tables
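
For example, here is a minimal sketch of setting up the right context in a 
standalone driver (the app name is illustrative):

import org.apache.spark.{SparkConf, SparkContext}
import org.apache.spark.sql.hive.HiveContext

// HiveContext extends SQLContext and adds Hive support, which is what
// saveAsTable needs in order to create persistent (non-temporary) tables.
val conf = new SparkConf().setAppName("OrcWriteExample")
val sc = new SparkContext(conf)
val hiveContext = new HiveContext(sc)

Any DataFrame built through hiveContext should then be able to run your 
write ... saveAsTable call under spark-submit just as it does in spark-shell.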

Hope this helps,
Will

On June 13, 2015, at 3:36 PM, pth001 <patcharee.thong...@uni.no> wrote:

Hi,

I am using Spark 0.14. I am trying to insert data into a Hive table (in ORC 
format) from a DataFrame.

partitionedTestDF.write
  .format("org.apache.spark.sql.hive.orc.DefaultSource")
  .mode(org.apache.spark.sql.SaveMode.Append)
  .partitionBy("zone", "z", "year", "month")
  .saveAsTable("testorc")

When this job is submitted via spark-submit, I get:
Exception in thread "main" java.lang.RuntimeException: Tables created 
with SQLContext must be TEMPORARY. Use a HiveContext instead

But the job works fine in spark-shell. What could be wrong?

BR,
Patcharee
