I am not sure what you aim to solve. When you mention the Spark Master, I guess you
probably mean Spark standalone mode? In that case the Spark cluster is not
necessarily coupled with the Hadoop cluster. If you aim to achieve better data
locality, then yes, running Spark workers on the HDFS data nodes might
Hi,
Regarding your question:
1) When I run the above script, which jar is submitted to the YARN server?
Both the jar that the SPARK_JAR env variable points to and the one that --jar
points to are submitted to the YARN server.
2) It seems like the spark-assembly-0.8.1-incubating-hadoop2.0.5-alpha.jar plays the
role of
Not found in which part of the code? If in the SparkContext thread, say on the AM,
--addJars should work.
If on the tasks, then --addJars won't work; you need to use --file=local://xxx etc.
(I am not sure whether that is available in 0.8.1). Bundling everything into a single
jar should also work; if it does not, something might be wrong.
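To make the first point concrete, here is a hedged sketch of a submission where both jars end up on the YARN cluster. All paths, the app jar name, and the class name are placeholders, and --addJars availability depends on your exact 0.8.x build; the Client class name follows the 0.8 example later in this thread.

```shell
# Sketch only: jar paths and class names below are placeholders.
# SPARK_JAR (the Spark assembly) and --jar (your application jar) are
# both shipped to the YARN cluster; --addJars adds extra jars for the
# SparkContext running in the AM environment.
SPARK_JAR=./assembly/target/spark-assembly.jar ./run \
  spark.deploy.yarn.Client \
  --jar ./my-app.jar \
  --class myapp.Main \
  --addJars ./extra-lib.jar
```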
Hi Izhar
Is that the exact command you are running? Say with 0.8.0 instead of
0.8.1 in the cmd?
Raymond Liu
From: Izhar ul Hassan [mailto:ezh...@gmail.com]
Sent: Friday, December 27, 2013 9:40 PM
To: user@spark.incubator.apache.org
Subject: Errors with spark-0.8.1 hadoop-yarn 2.2.0
Ido, when you say add external JARs, do you mean via --addJars, which adds some
jars for the SparkContext to use in the AM env?
If so, I think you don't need it for yarn-client mode at all. In yarn-client
mode the SparkContext runs locally, so I think you just need to make sure those
jars are in the
It's what the documentation says. For yarn-standalone mode, it will be the
host where the Spark AM runs, while for yarn-client mode, it will be the local
host where you run the command.
And what's the command you use to run SparkPi? I think you actually don't need to
set spark.driver.host manually for YARN mode,
: AppMaster received a signal.
13/12/17 11:07:13 WARN yarn.ApplicationMaster: Failed to connect to driver at
null:null, retrying ...
After 'spark.yarn.applicationMaster.waitTries' retries (default 10), the job failed.
On Tue, Dec 17, 2013 at 12:07 PM, Liu, Raymond <raymond@intel.com>
-distributed?
On Tue, Dec 17, 2013 at 1:03 PM, Liu, Raymond <raymond@intel.com> wrote:
Hmm, I don't see which mode you are trying to use. Did you specify the MASTER in
the conf file?
I think in the running-on-yarn doc, the example for yarn-standalone mode mentioned
that you
YARN alpha API support is already there. If you mean the YARN stable API in Hadoop
2.2, it will probably be in 0.8.1.
Best Regards,
Raymond Liu
From: Pranay Tonpay [mailto:pranay.ton...@impetus.co.in]
Sent: Thursday, December 05, 2013 12:53 AM
To: user@spark.incubator.apache.org
Subject: Spark over
What version of the code are you using?
2.2.0 support is not yet merged into trunk. Check out
https://github.com/apache/incubator-spark/pull/199
Best Regards,
Raymond Liu
From: horia@gmail.com [mailto:horia@gmail.com] On Behalf Of Horia
Sent: Monday, December 02, 2013 3:00 PM
To:
Hi
I am encountering an issue where the executor actor could not connect to the Driver
actor, but I could not figure out the reason.
Say the Driver actor is listening on :35838
root@sr434:~# netstat -lpv
Active Internet connections (only servers)
Proto Recv-Q Send-Q Local Address
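Besides netstat on the driver host, a quick way to check reachability of that port (from the driver node itself or from an executor node) is bash's built-in /dev/tcp redirection. The host and port below are just the values from this example; substitute your own.

```shell
# Probe whether anything is accepting connections on the driver port.
# DRIVER_HOST/DRIVER_PORT are the example values from this thread.
DRIVER_HOST=localhost
DRIVER_PORT=35838
# bash opens a TCP connection when redirecting to /dev/tcp/<host>/<port>;
# the subshell fails if the connection is refused or times out.
if (exec 3<>"/dev/tcp/${DRIVER_HOST}/${DRIVER_PORT}") 2>/dev/null; then
  echo "port ${DRIVER_PORT} on ${DRIVER_HOST} is reachable"
else
  echo "port ${DRIVER_PORT} on ${DRIVER_HOST} is NOT reachable"
fi
```

Running this from an executor node rules out firewall or binding problems between the executor and the driver.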
I am also working on porting the trunk code onto 2.2.0. There seem to be quite a few
API changes, but many of them are just renames.
YARN 2.1.0-beta also adds some client APIs for easier interaction with the YARN
framework, but there are not many examples of how to use them (the API and wiki
docs are both
Hi
I could run the Spark trunk code on top of YARN 2.0.5-alpha with:
SPARK_JAR=./core/target/spark-core-assembly-0.8.0-SNAPSHOT.jar ./run
spark.deploy.yarn.Client \
--jar examples/target/scala-2.9.3/spark-examples_2.9.3-0.8.0-SNAPSHOT.jar \
--class spark.examples.SparkPi \
--args