I am not entirely sure whether that was the intended behavior of the
scripts, but that is probably how they currently work, since the most
common configuration has the same SPARK_HOME on all machines.
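
For now, a workaround is to start the daemons by hand on each machine
from that machine's own Spark install, roughly like this (just a
sketch; the exact arguments to start-slave.sh differ between Spark
versions, some of which also expect a worker instance number as the
first argument, and spark://master-host:7077 is a placeholder for
your actual master URL):

  # on the master machine, from its own Spark directory
  ./sbin/start-master.sh

  # on each slave machine, from that machine's own Spark directory,
  # pointing the worker at the master's URL
  ./sbin/start-slave.sh spark://master-host:7077

That way start-slaves.sh never has to push the master's SPARK_HOME
path out to the slaves.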

TD


On Thu, Feb 13, 2014 at 1:53 PM, Guanhua Yan <gh...@lanl.gov> wrote:

> Thanks, TD. It seems that in order to use the magic start-*.sh scripts to
> launch a cluster, all the nodes need the same SPARK_HOME setting.
>
> Best regards,
> - Guanhua
>
> From: Tathagata Das <tathagata.das1...@gmail.com>
> Reply-To: <user@spark.incubator.apache.org>
> Date: Thu, 13 Feb 2014 13:12:21 -0800
> To: <user@spark.incubator.apache.org>
> Subject: Re: Cluster launch
>
> You could use sbin/start-slave.sh on the slave machine to launch the
> worker there. That should pick up the slave machine's local SPARK_HOME and
> launch the worker correctly.
>
> TD
>
>
> On Thu, Feb 13, 2014 at 1:09 PM, Guanhua Yan <gh...@lanl.gov> wrote:
>
>> Hi all:
>>
>> I was trying to run sbin/start-master.sh and sbin/start-slaves.sh to
>> launch a standalone cluster consisting of a Linux workstation and a Mac
>> desktop. On these two computers, the SPARK_HOME directories point to
>> different places. When running ./sbin/start-slaves.sh, I got an error
>> saying that the Spark directory doesn't exist on the slave machine. I
>> guess the start-slaves.sh script uses the master machine's SPARK_HOME
>> configuration when launching Spark on the slave machine.
>>
>> Any clues about how to fix this?
>>
>> Thank you,
>> - Guanhua
>>
>>
>
