Hi,

Thanks. For the record, I found a solution by passing the IP address as a
parameter:

spark-class org.apache.spark.deploy.worker.Worker $MASTER -i $1
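
For anyone following along, here is a rough sketch of how that might be wired
into a container start script (the master URL, paths and variable names below
are my assumptions, not something defined by the amplab scripts):

    #!/bin/bash
    # Hypothetical worker entrypoint: $1 carries the IP address the worker
    # should bind to and advertise, instead of the container hostname.
    MASTER=spark://172.17.0.41:7077        # assumed master URL
    $SPARK_HOME/bin/spark-class org.apache.spark.deploy.worker.Worker $MASTER -i $1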

-kr, Gerard.



On Tue, Feb 11, 2014 at 7:20 PM, Soumya Simanta <soumya.sima...@gmail.com> wrote:

> Try setting the worker/slave IPs in $SPARK_HOME/conf/slaves.
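>
> For illustration, that file just lists one worker host (or IP) per line,
> for example (these addresses are made up, not from your setup):
>
>     172.17.0.42
>     172.17.0.43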
>
>
> On Tue, Feb 11, 2014 at 1:14 PM, Gerard Maas <gerard.m...@gmail.com> wrote:
>
>> Hi,
>>
>> I'm trying to run Spark in Docker, using the amplab docker scripts (which
>> I've been modifying to support 0.9.0).
>>
>> I'm trying to use Docker's own link facility, instead of the provided DNS
>> service, for master-worker communication using plain IP addresses.
>>
>> Right now, the master is working fine, but the workers are picking up the
>> hostname when they build the remote actor address:
>>
>> INFO Remoting: Remoting started; listening on addresses
>> :[akka.tcp://sparkWorker@*devworker*:54621]
>>
>> Here 'devworker' is the name given to the Docker container, which is not
>> routable from other containers.
>>
>> For the master, setting `SPARK_MASTER_IP` in `spark-env.sh` makes this
>> work fine:
>> INFO Remoting: Remoting started; listening on addresses
>> :[akka.tcp://sparkMaster@*172.17.0.41*:7077]
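>>
>> i.e. with a line along these lines in conf/spark-env.sh (the value shown is
>> just the master address from the log above):
>>
>>     export SPARK_MASTER_IP=172.17.0.41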
>>
>> Yet, there's no SPARK_WORKER_IP option there.
>>
>> How can I instruct the Spark worker to use a given IP address in a
>> similar fashion?
>>
>> Thanks,
>>
>> Gerard.
>>
>
>
