just go ahead and add them with the --files
option to simplify things and avoid having them added for all applications.
Thanks
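A sketch of the suggestion above: ship per-job files with the YARN client's --files option rather than installing them on every node. The flag comes from the thread; every path, jar name, and class name below is a made-up placeholder.

```shell
# Placeholder paths -- substitute your own files and application jar.
files="/local/path/my.conf,/local/path/lookup.dat"
cmd="./spark-class org.apache.spark.deploy.yarn.Client \
  --jar my-app.jar --class com.example.MyApp \
  --files $files"
# The real invocation would execute $cmd; here we just show it.
echo "$cmd"
```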
On Mon, Jan 13, 2014 at 3:01 PM, Tom Graves tgraves...@yahoo.com wrote:
I'm assuming you actually installed the jar on all the YARN clusters then?
In general this isn't a good idea on YARN, as most users don't have permissions
to install things on the nodes themselves. The idea is that YARN provides a
certain set of jars, which really should be just the yarn/hadoop
The Hadoop conf dir is what controls which YARN cluster the job goes to, so it's
a matter of putting in the correct configs for the cluster you want it to go to.
You have to execute org.apache.spark.deploy.yarn.Client or your application
will not run on YARN in standalone mode. The client is
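A minimal sketch of the conf-dir point above, assuming a hypothetical layout with one conf directory per cluster, each holding that cluster's core-site.xml and yarn-site.xml. Switching clusters is then just switching this variable before launching the client.

```shell
# Placeholder path -- point at the conf dir for the target cluster.
export HADOOP_CONF_DIR=/etc/hadoop/conf.cluster-a
echo "client will read configs from $HADOOP_CONF_DIR"
```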
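A hedged sketch of the launch just described, in the style of the 0.8.x-era YARN docs. Everything below except the org.apache.spark.deploy.yarn.Client class name (jar paths, application class, resource flags) is a placeholder.

```shell
# Placeholder assembly jar path -- adjust for your build.
export SPARK_JAR=./assembly/target/spark-assembly.jar
main_class=org.apache.spark.deploy.yarn.Client
# The actual submission (not run here) would look something like:
#   ./spark-class "$main_class" --jar my-app.jar --class com.example.MyApp \
#     --num-workers 2 --worker-memory 2g --worker-cores 1
echo "would launch via $main_class"
```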
at 11:50 AM, Tom Graves tgraves...@yahoo.com wrote:
Sorry for the delay. What is the default filesystem on your HDFS setup? It
looks like it's set to file: rather than hdfs://. That is the only reason I can
think of that it would list the directory as
file:/home/work/.sparkStaging
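A quick way to see what the diagnosis above is about: the default filesystem lives in core-site.xml. The property name fs.defaultFS (fs.default.name in older Hadoop) is real; the throwaway conf dir and namenode address below are illustrations only.

```shell
# Build a sample core-site.xml in a temp dir for illustration.
conf_dir=$(mktemp -d)
cat > "$conf_dir/core-site.xml" <<'EOF'
<configuration>
  <property>
    <name>fs.defaultFS</name>
    <value>hdfs://namenode:8020</value>
  </property>
</configuration>
EOF
# If this grep finds no hdfs:// URI (i.e. the value is file:///), staging
# directories end up on the local filesystem, as in the error above.
grep -o 'hdfs://[^<]*' "$conf_dir/core-site.xml"
```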
Hey Jiacheng Guo,
do you have SPARK_EXAMPLES_JAR env variable set? If you do, you have to add
the --addJars parameter to the yarn client and point to the spark examples jar.
Or just unset SPARK_EXAMPLES_JAR env variable.
You should only have to set SPARK_JAR env variable.
If that isn't
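The two fixes above, sketched as shell commands. SPARK_EXAMPLES_JAR and SPARK_JAR are the env variables named in the thread; the jar paths are placeholders.

```shell
# Simplest fix: make sure the examples jar isn't pulled in implicitly.
unset SPARK_EXAMPLES_JAR
# The only variable that should be required (placeholder path):
export SPARK_JAR=./assembly/target/spark-assembly.jar
# Or, if SPARK_EXAMPLES_JAR must stay set, ship the examples jar explicitly:
#   ... org.apache.spark.deploy.yarn.Client --addJars file:///path/to/spark-examples.jar
[ -z "${SPARK_EXAMPLES_JAR:-}" ] && echo "SPARK_EXAMPLES_JAR is unset"
```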
Hey Bill,
Currently Spark on YARN only supports batch mode, where you submit your job
via the yarn Client. Note that this will hook the Spark UI up to the YARN
ResourceManager web UI. Is there something more you were looking for than just
finding the Spark web UI for various jobs?
There
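To make the UI hookup above concrete: each application in the ResourceManager web UI has a "Tracking UI" link that points at that job's Spark web UI. The `yarn application -list` command is the standard Hadoop 2.x CLI for listing applications; the hostname and port below are placeholders (8088 is the usual RM web UI default).

```shell
# Placeholder RM address -- substitute your cluster's ResourceManager host.
rm_ui="http://resourcemanager:8088/cluster"
# From a shell, the same tracking URLs show up via:
#   yarn application -list
echo "browse $rm_ui and follow the application's Tracking UI link"
```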
? A quick scan of the Spark Streaming documentation makes
no mention of Yarn, but I thought that this should be possible.
Thanks,
Philip
On 11/15/2013 7:15 AM, Tom Graves wrote:
15, 2013, at 12:51 PM, Tom Graves tgraves...@yahoo.com wrote:
Shark is not currently supported on YARN. Two ways this could be done come to
mind. One would be to run Shark as the application itself that
gets started on the application master in the current yarn-standalone mode
Do I do something here, or will the client pick up the YARN configuration from
the Hadoop config?
Vipul
On Fri, Sep 6, 2013 at 4:30 PM, Tom Graves tgraves...@yahoo.com wrote:
Which spark branch are you building off of?
If using master branch follow the directions here:
https