Hi, I'm trying to run Spark in Docker, using the amplab docker scripts (which I've been modifying to support Spark 0.9.0).
I'm trying to use Docker's own link facility instead of the provided DNS service for master-worker communication, using plain IP addresses. Right now the master is working fine, but the workers are picking up the hostname when they build the remote actor address:

    INFO Remoting: Remoting started; listening on addresses :[akka.tcp://sparkWorker@*devworker*:54621]

where 'devworker' is the name given to the docker container and is not routable from other containers. For the master, setting `SPARK_MASTER_IP` in `spark_env.sh` works fine:

    INFO Remoting: Remoting started; listening on addresses :[akka.tcp://sparkMaster@*172.17.0.41*:7077]

Yet there's no SPARK_WORKER_IP option there. How can I instruct the Spark worker to use a given IP address in a similar fashion?

Thanks,
Gerard.
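For reference, the master side is just a line like the following in `spark_env.sh` (a rough sketch of my setup; the IP is whatever address the master container actually gets):

    # spark_env.sh on the master container (sketch)
    export SPARK_MASTER_IP=172.17.0.41    # the container's own IP, routable from the worker containers

    # What I was hoping for on the worker side, but this variable doesn't seem to exist:
    # export SPARK_WORKER_IP=<worker container IP>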