I have a set of VMs, and each VM instance has its own private IP and a publicly accessible IP. When I start the master with default values, it throws a bind exception saying it cannot bind to the public IP. So I set SPARK_MASTER_IP to the private IP and it starts up fine. Now how do I achieve the same for the worker nodes? If I run start-slaves.sh, I get the same bind exception. I can log in to each slave and pass the -i option to spark-class org.apache.spark.deploy.worker.Worker, but is there a more efficient way to start all the workers from the master node?
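To make the setup concrete, here is roughly what I am doing now (the IPs are placeholders for my private addresses, and script paths may differ slightly by Spark version):

    # On the master: binding to the private IP works
    export SPARK_MASTER_IP=10.0.0.1        # master's private IP (placeholder)
    ./sbin/start-master.sh

    # On each worker, started by hand with -i (what I'd like to avoid):
    ./bin/spark-class org.apache.spark.deploy.worker.Worker \
        -i 10.0.0.2 spark://10.0.0.1:7077  # this node's private IP, then the master URL

Ideally start-slaves.sh would pick up the right private IP on each node automatically, perhaps via SPARK_LOCAL_IP in each node's conf/spark-env.sh if the worker scripts honor that variable, but I haven't found a way to make that work.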