Well, those are only the logs of the slaves at the Mesos level. I'm not sure
from your reply whether you can SSH into a specific slave or not; if you can,
you should look at the actual output of the application (Spark in this case)
on a slave, e.g. in
Thanks hbogert. There it is, plain as day; it can't find my Spark binaries.
I thought it was enough to set SPARK_EXECUTOR_URI in my spark-env.sh, since
that is all that's necessary to run spark-shell.sh against a Mesos master,
but I also had to set spark.executor.uri in my spark-defaults.conf (or
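A minimal sketch of the two settings involved (the tarball URL below is a placeholder, not a value from this thread): spark-shell picks up the environment variable, while spark-submit also reads spark-defaults.conf:

```
# spark-env.sh -- enough for spark-shell
export SPARK_EXECUTOR_URI=hdfs://namenode/dist/spark-1.3.0-bin-hadoop2.4.tgz

# spark-defaults.conf -- also needed for spark-submit
spark.executor.uri   hdfs://namenode/dist/spark-1.3.0-bin-hadoop2.4.tgz
```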
Thanks for the response. I'll admit I'm rather new to Mesos. Due to the
nature of my setup I can't use the Mesos web portal effectively, because I'm
not connected by VPN, so the local-network links from the mesos-master
dashboard I SSH-tunnelled to aren't working.
Anyway, I was able to dig up some
I left a comment on your Stack Overflow post earlier. Can you share the output
in the stderr log from your Mesos task? It
can be found in your Mesos UI by going to the task's sandbox.
Tim
Sent from my iPhone
On Mar 29, 2015, at 12:14 PM, seglo wla...@gmail.com wrote:
Mesosphere did a great job of simplifying the process of running Spark on
Mesos. I am using this guide to set up a development Mesos cluster on Google
Cloud Compute.
https://mesosphere.com/docs/tutorials/run-spark-on-mesos/
I can run the example that's in the guide by using spark-shell (finding
The latter part of this question where I try to submit the application by
referring to it on HDFS is very similar to the recent question
Spark-submit not working when application jar is in hdfs
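For concreteness, this is the shape of the submission being attempted; the host names, paths, and class below are placeholders, not values from this thread. The snippet only composes the command, with the application jar referenced on HDFS so every slave can fetch it:

```shell
# Hypothetical names throughout -- substitute your own master, namenode,
# jar path, and main class.
MASTER="mesos://mesos-master:5050"
APP_JAR="hdfs://namenode:8020/user/seglo/myapp-assembly.jar"
CMD="spark-submit --master $MASTER --class com.example.Main $APP_JAR"
echo "$CMD"
```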
Hi,
What do the Mesos slave logs say? Usually this gives a clear-cut error; they
are probably local on a slave node.
I'm not sure about your config, so I can't point you to a specific path; it
might look something like:
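As a rough illustration of where a task's sandbox typically lives on an agent (all IDs below are hypothetical; on a real cluster they come from the Mesos UI or the master's state endpoint, and packaged installs often use /var/lib/mesos instead of the default work_dir):

```shell
# Hypothetical IDs throughout -- replace with the slave, framework, and
# executor IDs of the failing task. Assumes the default Mesos work_dir.
WORK_DIR=/tmp/mesos
SLAVE_ID=20150329-000000-1234-5050-S1
FRAMEWORK_ID=20150329-000000-1234-5050-0000
EXECUTOR_ID=0
SANDBOX="$WORK_DIR/slaves/$SLAVE_ID/frameworks/$FRAMEWORK_ID/executors/$EXECUTOR_ID/runs/latest"
echo "Spark executor output: $SANDBOX/stderr"
```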