In spark-defaults.conf:
spark.executor.uri    hdfs://????:9000/user/????/spark-1.1.0-bin-hadoop2.4.tgz
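The same property can also be passed straight to spark-submit instead of editing spark-defaults.conf; something like the following should work (the Mesos master host, HDFS namenode, user directory, and application names below are placeholders, not values from this thread):

spark-submit \
  --master mesos://<mesos-master-host>:5050 \
  --conf spark.executor.uri=hdfs://<namenode>:9000/user/<user>/spark-1.0.2-bin-hadoop2.tgz \
  --class <your.main.Class> \
  <path/to/your-app.jar>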



From: Bijoy Deb [mailto:bijoy.comput...@gmail.com]
Sent: 10 October 2014 11:59
To: user@spark.apache.org
Subject: Spark on Mesos Issue - Do I need to install Spark on Mesos slaves

Hi,
I am trying to submit a Spark job on Mesos using spark-submit from my 
Mesos-Master machine.
My SPARK_HOME = /vol1/spark/spark-1.0.2-bin-hadoop2

I have uploaded spark-1.0.2-bin-hadoop2.tgz to HDFS so that the Mesos
slaves can download it to launch the Spark backend executor.
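For reference, the upload was done with the usual HDFS commands, roughly along these lines (the target directory and user name are just examples, not my exact paths):

  hadoop fs -mkdir -p /user/<user>
  hadoop fs -put spark-1.0.2-bin-hadoop2.tgz /user/<user>/
  hadoop fs -ls /user/<user>/spark-1.0.2-bin-hadoop2.tgz   # verify the tarball is in place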


But on submitting the job, I can see the following error in the stderr logs on the
Mesos slave machine:


sh: /vol1/spark/spark-1.0.2-bin-hadoop2/sbin/spark-executor: No such file or 
directory

Based on the documentation, I understand that if I keep the Spark binary package
in HDFS, I don't need to install Spark separately on the slave nodes. So the
SPARK_HOME path /vol1/spark/spark-1.0.2-bin-hadoop2/ does not exist on any of my
slave machines, hence the error.
Now, my question is:
Shouldn't the Mesos slave be looking for the spark-executor command in the
temporary directory where it is supposed to extract
spark-1.0.2-bin-hadoop2.tgz from HDFS, instead of in the SPARK_HOME directory? What
am I doing wrong here?
Any help would be really appreciated.

Thanks,
Bijoy
