Hi,
I am trying to use Spark Jobserver
(https://github.com/spark-jobserver/spark-jobserver) for running Spark
SQL jobs.
I was able to start the server, but when I run my application (my Scala
class which extends SparkSqlJob), I am getting the following error:
In YARN cluster mode, there is no Spark master, since YARN is your
resource manager. Yes, you could force your AM somehow to run on the
same node as the RM, but why -- what do you think would be faster about
that?
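To make that concrete, here is a sketch of how a job is typically submitted in this mode (Spark 1.x syntax): no spark:// master URL appears anywhere, because YARN's ResourceManager does the scheduling and the driver runs inside the application master. The class name, jar path, and resource numbers below are placeholders.

```shell
# Submit to YARN in cluster mode (Spark 1.x era syntax).
# There is no --master spark://... URL; YARN is the cluster manager.
spark-submit \
  --master yarn-cluster \
  --num-executors 3 \
  --executor-cores 2 \
  --executor-memory 2g \
  --class com.example.MySqlJob \
  /path/to/my-app.jar
```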
On Tue, Mar 10, 2015 at 10:06 AM, Harika matha.har...@gmail.com wrote:
Hi all,
I have
of the cluster.
1. Is this the correct way of configuration? What is the architecture of
Spark on YARN?
2. Is there a way in which I can run the Spark master, YARN application
master and resource manager on a single node? (So that I can use the
three other nodes for the computation.)
Thanks
Harika
and then build Spark on it?
2. Is there a way to modify the existing Spark cluster to work with YARN?
Thanks in advance.
Harika
--
View this message in context:
http://apache-spark-user-list.1001560.n3.nabble.com/Setting-up-Spark-with-YARN-on-EC2-cluster-tp21818.html
Sent from the Apache Spark User List mailing list archive at Nabble.com
:04 AM, Harika matha.har...@gmail.com wrote:
Hi all,
I have been running a simple SQL program on Spark. To test the
concurrency, I created 10 threads inside the program, all threads using
the same SQLContext object. When I ran the program on my EC2 cluster
using spark-submit, only 3 threads were running concurrently. How can I
make all the threads run concurrently?
Thanks
Harika
--
View this message in context:
http://apache-spark-user-list.1001560.n3.nabble.com/Running-multiple-threads-with-same-Spark-Context-tp21784.html
Sent from the Apache Spark User List mailing list archive at Nabble.com
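For what it's worth, Spark's scheduler does support job submission from multiple threads sharing one context (by default, jobs from all threads queue into a single FIFO pool; setting spark.scheduler.mode=FAIR changes that). The threading pattern itself can be sketched in plain Scala; `runQuery` below is a hypothetical placeholder standing in for a real `sqlContext.sql(...)` call, since a live cluster is not assumed here.

```scala
import java.util.concurrent.{CountDownLatch, Executors, TimeUnit}
import java.util.concurrent.atomic.AtomicInteger

object ConcurrentQueries {
  // Hypothetical stand-in for a real sqlContext.sql(...) call.
  def runQuery(id: Int): String = s"result-$id"

  def main(args: Array[String]): Unit = {
    val pool = Executors.newFixedThreadPool(10)
    val done = new CountDownLatch(10)
    val completed = new AtomicInteger(0)
    for (i <- 1 to 10) {
      pool.submit(new Runnable {
        def run(): Unit = {
          runQuery(i)                // all threads share the same "context"
          completed.incrementAndGet()
          done.countDown()
        }
      })
    }
    done.await(10, TimeUnit.SECONDS) // wait for all 10 threads to finish
    pool.shutdown()
    println(completed.get())         // prints 10
  }
}
```

With a real SQLContext, the same pattern applies: each thread calls sqlContext.sql(...) directly, and whether the resulting jobs actually overlap depends on the scheduler mode and the cores available to the application.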
Hi Sreeharsha,
My data is in HDFS. I am trying to use Spark HiveContext (instead of
SQLContext) to run queries on my data, because HiveContext supports
more operations.
Sreeharsha wrote:
Change Derby to MySQL and check once; I faced the same issue.
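The suggestion above refers to the Hive metastore: HiveContext defaults to an embedded Derby database, which permits only a single active connection, so pointing the metastore at MySQL is a common fix. A minimal hive-site.xml sketch is below (placed in Spark's conf directory), assuming an existing MySQL server; the host, database name, and credentials are placeholders.

```xml
<!-- hive-site.xml: point the Hive metastore at MySQL instead of embedded
     Derby. Host, database, user and password below are placeholders. -->
<configuration>
  <property>
    <name>javax.jdo.option.ConnectionURL</name>
    <value>jdbc:mysql://metastore-host:3306/hive_metastore?createDatabaseIfNotExist=true</value>
  </property>
  <property>
    <name>javax.jdo.option.ConnectionDriverName</name>
    <value>com.mysql.jdbc.Driver</value>
  </property>
  <property>
    <name>javax.jdo.option.ConnectionUserName</name>
    <value>hiveuser</value>
  </property>
  <property>
    <name>javax.jdo.option.ConnectionPassword</name>
    <value>hivepass</value>
  </property>
</configuration>
```

The MySQL JDBC driver jar also has to be on the classpath of the process that talks to the metastore.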
I am pretty new to Spark and
Hi,
I've been reading about Spark SQL, and people suggest that using
HiveContext is better. Can anyone please suggest a solution to the
above problem? It is stopping me from moving forward with HiveContext.
Thanks
Harika
--
View this message in context:
http://apache-spark-user-list
Hi Aplysia,
Thanks for the reply.
Could you be more specific about which part of the document to look at?
I have already seen it and tried a few of the relevant settings, to no
avail.