Your earlier call stack clearly states that it fails because the Derby 
metastore has already been started by another instance. The embedded Derby 
metastore only allows a single connection at a time, so that would be 
explained by your attempt to run this concurrently.

Are you running Spark standalone? Do you have a cluster? You should be able to 
run Spark in yarn-client mode against the Hive metastore service, which should 
give you the ability to run multiple applications concurrently. Be sure to copy 
hive-site.xml to SPARK_HOME/conf.
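Roughly, the setup looks like this. This is a minimal sketch, assuming SPARK_HOME and HIVE_HOME are set, a Hive metastore service is already running, and the thrift URI (metastore-host:9083) is a placeholder for your environment:

```shell
# Make Hive's client config visible to Spark so HiveContext talks to the
# shared metastore service instead of spinning up embedded Derby:
cp $HIVE_HOME/conf/hive-site.xml $SPARK_HOME/conf/

# hive-site.xml should point at the metastore service, e.g.:
#   <property>
#     <name>hive.metastore.uris</name>
#     <value>thrift://metastore-host:9083</value>
#   </property>

# Launch each application in yarn-client mode; multiple applications can
# then use the metastore concurrently:
$SPARK_HOME/bin/spark-shell --master yarn-client
```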

--- Original Message ---

From: "Harika" <matha.har...@gmail.com>
Sent: February 12, 2015 8:22 PM
To: user@spark.apache.org
Subject: Re: HiveContext in SparkSQL - concurrency issues

Hi,

I've been reading about Spark SQL, and people suggest that using HiveContext
is better. So can anyone please suggest a solution to the above problem?
It is stopping me from moving forward with HiveContext.

Thanks
Harika



--
View this message in context: 
http://apache-spark-user-list.1001560.n3.nabble.com/HiveContext-in-SparkSQL-concurrency-issues-tp21491p21636.html
Sent from the Apache Spark User List mailing list archive at Nabble.com.

---------------------------------------------------------------------
To unsubscribe, e-mail: user-unsubscr...@spark.apache.org
For additional commands, e-mail: user-h...@spark.apache.org
