I am trying to set up Apache Spark on a small standalone cluster (1 master
node and 8 slave nodes). I have installed the "pre-built" version of Spark
1.1.0 for Hadoop 2.4. I have set up passwordless ssh between the nodes and
exported a few necessary environment variables. One of these variables
(probably the most relevant one) is:

export SPARK_LOCAL_DIRS=/scratch/spark/
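
My understanding is that this setting normally goes into conf/spark-env.sh on
every node so the workers pick it up, and that the directory has to exist
there; this is roughly what I have in mind (node1 through node8 are just
placeholders for my slave hostnames):

# put the setting in conf/spark-env.sh on the head node
echo 'export SPARK_LOCAL_DIRS=/scratch/spark/' >> $SPARK_HOME/conf/spark-env.sh

# create the scratch directory on every slave (node1..node8 are placeholders)
for host in node1 node2 node3 node4 node5 node6 node7 node8; do
    ssh $host 'mkdir -p /scratch/spark && chmod 777 /scratch/spark'
done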

I have a small piece of Python code which I know works with Spark. I can run
it locally--on my desktop, not the cluster--with:

$SPARK_HOME/bin/spark-submit ~/My_code.py

I copied the code to the cluster. Then, I start all the processes from the
head node:

$SPARK_HOME/sbin/start-all.sh
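
To check that the workers actually come up, I have been assuming something
like this is the right way to look at each node (node1 through node8 are
placeholders for my slave hostnames):

# on the head node, jps should show a Master process;
# the master web UI (port 8080 by default) also lists registered workers
jps

# on each slave, jps should show a Worker process
for host in node1 node2 node3 node4 node5 node6 node7 node8; do
    ssh $host jps
done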

And each of the slaves is listed as running as process xxxxx. If I then
attempt to run my code with the same command as above:

$SPARK_HOME/bin/spark-submit ~/My_code.py

I get the following error:

14/10/27 14:19:02 ERROR util.Utils: Failed to create local root dir in
/scratch/spark/.  Ignoring this directory.
14/10/27 14:19:02 ERROR storage.DiskBlockManager: Failed to create any local
dir.

I have the permissions on /scratch and /scratch/spark set to 777. Any help is
greatly appreciated.
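
In case it helps, this is roughly what I was planning to run next to confirm
that the directory really is there and writable on every node (node1 through
node8 are again placeholders for my slave hostnames):

# check that /scratch/spark exists, is writable, and has free space on each node
for host in node1 node2 node3 node4 node5 node6 node7 node8; do
    echo "== $host =="
    ssh $host 'ls -ld /scratch /scratch/spark; touch /scratch/spark/test_write && rm /scratch/spark/test_write && echo writable; df -h /scratch'
done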

Also, as an aside, my degree is in Mathematics and I am now working as a
postdoc in a CS department--so very explicit help is useful, as I am somewhat
new to using clusters (and Linux in general). Thanks again.


