I think the problem is the use of the loopback address:
export SPARK_LOCAL_IP=127.0.0.1
In the stack trace from the slave, you see this:
... Reason: Connection refused: localhost/127.0.0.1:51849
akka.actor.ActorNotFound: Actor not found for:
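If so, the slaves are being told to connect back to the driver's loopback
interface, which refuses the connection. Setting SPARK_LOCAL_IP to an address
the slaves can actually route to should fix it; a minimal sketch, where the
address below is a hypothetical stand-in for your driver host's
cluster-facing IP:

# in spark-env.sh: advertise a routable address, not the loopback
export SPARK_LOCAL_IP=192.168.1.10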
That's a very old page; try this instead:
http://spark.apache.org/docs/latest/running-on-mesos.html
When you run your Spark job on Mesos, tasks will be started on the slave
nodes as needed, since fine-grained mode is the default.
For a job like your example, very few tasks will be needed.
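If you ever want Spark to hold onto each machine for the whole job instead,
the Mesos docs of that era describe a coarse-grained mode toggled by the
spark.mesos.coarse property; a minimal sketch, assuming you set it before
launching the shell:

# in spark-env.sh: run on Mesos in coarse-grained rather than fine-grained mode
export SPARK_JAVA_OPTS="-Dspark.mesos.coarse=true"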
I have a Mesos cluster, which I deploy Spark to using the instructions at
http://spark.apache.org/docs/0.7.2/running-on-mesos.html
After that, the Spark shell starts up fine.
Then I try the following in the shell:
val data = 1 to 1
val distData = sc.parallelize(data)
distData.filter(_
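(That last line was truncated in the email; a complete sequence of the same
shape might look like the following, where the range and the filter predicate
are guesses on my part:)

val data = 1 to 1000                  // hypothetical range; the original was cut off
val distData = sc.parallelize(data)
// hypothetical predicate; the original message ends at "filter(_"
distData.filter(_ % 2 == 0).count()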
My bad there; I was in fact using the correct link for the docs. The Spark
shell runs correctly, and the framework is registered fine on Mesos.
Is there some setting I am missing?
This is my spark-env.sh:
export MESOS_NATIVE_LIBRARY=/usr/local/lib/libmesos.so
export SPARK_LOCAL_IP=127.0.0.1