Thanks, TD. Yeah, I understand that. The reason for this setup is that we have
a lot of desktops which do not form a commodity cluster but can be used
together to accomplish a common computational task.

Following your suggestion yesterday, I ran into trouble again. On the slave
machine, I got the following exception:

java.io.IOException: Cannot run program
"/home1/ghyan/Software/spark-0.9.0-incubating-bin-hadoop2/bin/compute-classpath.sh"
(in directory "."): error=2, No such file or directory

The path "/home1/ghyan/Software/spark-0.9.0-incubating-bin-hadoop2" exists on
the master machine but not on the slave, so the master's SPARK_HOME is
evidently being passed to the slave machine.
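
If it helps, here is my understanding of what is happening (a simplified
sketch of the launch logic, not the actual start-slaves.sh script; the master
URL below is a placeholder):

  #!/usr/bin/env bash
  # Simplified stand-in for sbin/start-slaves.sh: SPARK_HOME is expanded
  # on the master before the command is sent over ssh, so each slave is
  # asked to run a path that only exists on the master.
  SPARK_HOME=/home1/ghyan/Software/spark-0.9.0-incubating-bin-hadoop2
  MASTER_URL=spark://master-host:7077   # placeholder

  while read -r slave; do
    ssh "$slave" "$SPARK_HOME/sbin/start-slave.sh 1 $MASTER_URL"
  done < "$SPARK_HOME/conf/slaves"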

Best regards,
- Guanhua



From:  Tathagata Das <tathagata.das1...@gmail.com>
Reply-To:  <user@spark.incubator.apache.org>
Date:  Thu, 13 Feb 2014 17:39:53 -0800
To:  <user@spark.incubator.apache.org>
Subject:  Re: Cluster launch

I am not entirely sure that was the intended behavior of the scripts, but that
is probably how it currently works, since the most common configuration has
the same SPARK_HOME on all machines.
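
If the layouts must differ, one possible workaround (just a sketch; the local
path below is a placeholder) is to make the master's SPARK_HOME path resolve
on each slave, e.g. via a symlink:

  # Run once on each slave:
  mkdir -p /home1/ghyan/Software
  ln -s /path/to/slave-local/spark \
      /home1/ghyan/Software/spark-0.9.0-incubating-bin-hadoop2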

TD


On Thu, Feb 13, 2014 at 1:53 PM, Guanhua Yan <gh...@lanl.gov> wrote:
> Thanks, TD. It seems that in order to use the magic start-*.sh scripts to
> launch a cluster, all the nodes should have the same SPARK_HOME setting.
> 
> Best regards,
> - Guanhua
> 
> From:  Tathagata Das <tathagata.das1...@gmail.com>
> Reply-To:  <user@spark.incubator.apache.org>
> Date:  Thu, 13 Feb 2014 13:12:21 -0800
> To:  <user@spark.incubator.apache.org>
> Subject:  Re: Cluster launch
> 
> You could use sbin/start-slave.sh on the slave machine itself to launch the
> slave. That way the worker is launched with the slave's own local SPARK_HOME
> and should start correctly.
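> 
> Something along these lines, run on the slave itself (the worker number and
> master host/port are placeholders; check sbin/start-slave.sh in your version
> for the exact arguments):
> 
>   # From the slave's own Spark installation:
>   cd /path/to/slave-local/spark
>   ./sbin/start-slave.sh 1 spark://master-host:7077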
> 
> TD
> 
> 
> On Thu, Feb 13, 2014 at 1:09 PM, Guanhua Yan <gh...@lanl.gov> wrote:
>> Hi all:
>> 
>> I was trying to run sbin/start-master.sh and sbin/start-slaves.sh to launch
>> a standalone cluster consisting of a Linux workstation and a Mac desktop. On
>> these two computers, the SPARK_HOME directories point to different places.
>> When running ./sbin/start-slaves.sh, I got an error saying that the Spark
>> directory doesn't exist on the slave machine. I guess that the
>> start-slaves.sh script used the master machine's SPARK_HOME configuration
>> when launching Spark on the slave machine.
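>> 
>> For context, this is roughly how I launched things from the master (the
>> hostnames in conf/slaves are placeholders for the two machines):
>> 
>>   # conf/slaves on the master lists one worker host per line, e.g.
>>   #   linux-workstation
>>   #   mac-desktop
>>   ./sbin/start-master.sh
>>   ./sbin/start-slaves.sh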
>> 
>> Any clues about how to fix this?
>> 
>> Thank you,
>> - Guanhua
>> 
> 


