I have a set of VMs and each VM instance has its own private IP and a
publicly accessible IP. When I start the master with default values, it
throws a bind exception saying it cannot bind to the public IP. So I set
SPARK_MASTER_IP to the private IP and it starts up fine. Now how do I
achieve the same for the worker nodes? If I run start-slaves.sh, I get the
same bind exception. I can log in to each slave and pass the -i option to
spark-class org.apache.spark.deploy.worker.Worker, but is there a more
efficient way to start all the workers from the master node?
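
For reference, this is roughly what I run by hand on each slave today (the private IP and the master URL below are placeholders for my setup, not real values):

```shell
# On each slave, bind the worker explicitly to that node's private IP
# via the -i option, pointing it at the master's private address.
# 10.0.0.12 (worker) and spark://10.0.0.1:7077 (master) are placeholders.
./bin/spark-class org.apache.spark.deploy.worker.Worker \
  -i 10.0.0.12 \
  spark://10.0.0.1:7077
```

Repeating this on every slave works, but it is exactly the manual step I would like start-slaves.sh to handle for me.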
