I've been working with Spark 1.2 and Mesos 0.21.0, and while I have set
spark.executor.uri in spark-env.sh (and exported it directly in my shell as
well), the Mesos slaves do not seem to be able to fetch the Spark tgz over
HTTP or HDFS, per the message below.
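
For reference, by "setting the URI" I mean roughly the following (the HDFS
path below is a placeholder, not my actual location):

    # conf/spark-env.sh
    export SPARK_EXECUTOR_URI=hdfs://<namenode>:8020/spark/spark-1.2.0-bin-hadoop2.4.tgz

    # conf/spark-defaults.conf
    spark.executor.uri  hdfs://<namenode>:8020/spark/spark-1.2.0-bin-hadoop2.4.tgz

(I've tried the equivalent with an http:// URL as well.)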


14/12/30 15:57:35 INFO SparkILoop: Created spark context..
Spark context available as sc.

scala> 14/12/30 15:57:38 INFO CoarseMesosSchedulerBackend: Mesos task 0 is
now TASK_FAILED
14/12/30 15:57:38 INFO CoarseMesosSchedulerBackend: Mesos task 1 is now
TASK_FAILED
14/12/30 15:57:39 INFO CoarseMesosSchedulerBackend: Mesos task 2 is now
TASK_FAILED
14/12/30 15:57:41 INFO CoarseMesosSchedulerBackend: Mesos task 3 is now
TASK_FAILED
14/12/30 15:57:41 INFO CoarseMesosSchedulerBackend: Blacklisting Mesos
slave value: "20141228-183059-3045950474-5050-2788-S1"
 due to too many failures; is Spark installed on it?


I've verified that the Mesos slaves can reach both the HTTP and HDFS
locations.  I'll start digging into the Mesos slave logs, but I was wondering
if anyone had run into this issue before.  I was able to get this running
successfully with Spark 1.1 on GCP; the environment I'm currently
experimenting with is Digital Ocean, so perhaps that's a factor?
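
(By "verified" I mean checks along these lines, run from the slave boxes, with
placeholder URLs/paths rather than my real ones:

    curl -I http://<http-host>/spark/spark-1.2.0-bin-hadoop2.4.tgz
    hadoop fs -ls hdfs://<namenode>:8020/spark/spark-1.2.0-bin-hadoop2.4.tgz

On the Mesos side I plan to look at stderr in the failed tasks' sandboxes
under the slave work directory, since the fetcher output should land there.)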

Thanks!
Denny
