I'm experiencing the same issue with Spark 1.3.1.
I verified that Hadoop works (i.e., by running Hadoop's pi example).
It seems the Hadoop conf is on the classpath
(/opt/test/service/hadoop/etc/hadoop).
SPARK_PRINT_LAUNCH_COMMAND=1 ./bin/spark-shell --master yarn-client
Spark Command: /usr/lib/jvm/jr
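A quick pre-flight check before launching spark-shell as above can be sketched like this; it is a minimal sketch, not part of Spark itself, and the default path is simply the conf dir mentioned in this thread. The launcher only picks up yarn-site.xml if HADOOP_CONF_DIR (or YARN_CONF_DIR) is exported in the launching shell:

```shell
#!/bin/sh
# Minimal sketch: verify the conf dir spark-shell --master yarn-client will see.
# Falls back to the path from this thread if nothing is exported (assumption).
export HADOOP_CONF_DIR=${HADOOP_CONF_DIR:-${YARN_CONF_DIR:-/opt/test/service/hadoop/etc/hadoop}}
echo "HADOOP_CONF_DIR=$HADOOP_CONF_DIR"
# yarn-site.xml is where the ResourceManager address comes from
if [ -f "$HADOOP_CONF_DIR/yarn-site.xml" ]; then
  echo "yarn-site.xml found"
else
  echo "yarn-site.xml MISSING - the YARN client will fall back to defaults"
fi
```

If yarn-site.xml is missing from that directory, the YARN client falls back to its built-in defaults, which is one way to end up retrying the wrong ResourceManager address.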
Hi Akhil,
It runs fine when launched through the Namenode (RM) but fails when launched
through the Gateway; if I add the hadoop-core jars to the hadoop
directory (/opt/cloudera/parcels/CDH-5.3.0-1.cdh5.3.0.p0.30/lib/hadoop/) it
works fine.
It's really strange that I am running the job through spark-submit an
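As a hedged alternative to copying hadoop-core jars into the parcel directory on every gateway, the jars can be shipped with the job via spark-submit's `--jars` option. The jar paths and class name below are placeholders, not values from this thread; the command is built into a variable first so it can be inspected before running:

```shell
#!/bin/sh
# Sketch: ship extra jars with the job instead of editing the gateway install.
extra_jars=/path/to/hadoop-core.jar        # placeholder path
app_jar=/path/to/sparkwordcount.jar        # placeholder path
cmd="spark-submit --master yarn-client --jars $extra_jars $app_jar"
echo "$cmd"
# eval "$cmd"   # uncomment on a gateway with a working YARN client config
```

`--master yarn-client` matches the Spark 1.x syntax used elsewhere in this thread.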
Make sure your YARN ResourceManager is running on port 8032.
Thanks
Best Regards
On Tue, Apr 14, 2015 at 12:35 PM, Vineet Mishra wrote:
> Hi Team,
>
> I am running the Spark Word Count example
> (https://github.com/sryza/simplesparkapp); if I go with master as local it
> works fine.
>
> But when I change the master to yarn, it ends with retries connecting to
> the resource manager (stack trace mentioned below).
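Akhil's port-8032 advice above can be checked from the gateway before launching anything. This is a minimal sketch; "rm-host" is a placeholder for the real ResourceManager host, and it assumes an `nc` that supports `-z` (e.g. OpenBSD netcat) is installed:

```shell
#!/bin/sh
# Sketch: is anything listening on the ResourceManager port from this machine?
# 8032 is the default port for yarn.resourcemanager.address.
rm_host=rm-host   # placeholder - use the real RM hostname
rm_port=8032
if nc -z -w 2 "$rm_host" "$rm_port" 2>/dev/null; then
  echo "RM port $rm_port reachable on $rm_host"
else
  echo "RM port $rm_port NOT reachable on $rm_host"
fi
```

If the port is reachable from the RM host but not from the gateway, the problem is network/firewall or a gateway-side config pointing at the wrong address, rather than Spark itself.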
To: user@spark.apache.org, cdh-u...@cloudera.org
Subject: Running Spark on Gateway - Connecting to Resource Manager Retries
Hi Team,
I am running the Spark Word Count example
(https://github.com/sryza/simplesparkapp); if I go with master as local it
works fine.
But when I change the master to yarn, it ends with retries connecting to
the resource manager (stack trace mentioned below):
15/04/14 11:31:57 INFO RMProxy: Connecting
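A common cause of this retry loop is a gateway whose yarn-site.xml does not name the real ResourceManager host, so the client falls back to the default 0.0.0.0:8032 and retries forever. The sketch below shows how to pull the configured address out of yarn-site.xml; it writes a sample file with a placeholder host "rm-host" purely for illustration, and on a real gateway you would point the `sed` at `$HADOOP_CONF_DIR/yarn-site.xml` instead:

```shell
#!/bin/sh
# Sketch: extract yarn.resourcemanager.address from a yarn-site.xml.
# Sample file with a placeholder host, so the command can be demonstrated.
cat > /tmp/yarn-site-sample.xml <<'EOF'
<configuration>
  <property>
    <name>yarn.resourcemanager.address</name>
    <value>rm-host:8032</value>
  </property>
</configuration>
EOF
# Naive single-property extraction; fine for a quick check, not a real parser.
rm_addr=$(sed -n 's:.*<value>\(.*\)</value>.*:\1:p' /tmp/yarn-site-sample.xml)
echo "ResourceManager address: $rm_addr"
```

If the extracted address is empty or `0.0.0.0:8032` on the gateway, that matches the symptom in this thread: the job works from the RM/Namenode host (where the local defaults happen to be right) but loops retrying from the gateway.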