That didn't work. I listed the private IPs of the worker nodes in the
conf/slaves file on the master node and ran sbin/start-slaves.sh, but I still
get the same error on the worker nodes:

    Exception in thread "main" org.jboss.netty.channel.ChannelException:
    Failed to bind to: slave1/<*Public IP*>:0
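
For concreteness, here is roughly what the setup on the master looks like
(the 10.0.0.x addresses are placeholders for my actual private IPs; the
spark-env.sh line is the one I mentioned below that gets the master itself
to bind correctly):

    # conf/slaves -- one private IP per worker VM
    10.0.0.11
    10.0.0.12
    10.0.0.13

    # conf/spark-env.sh -- makes the master bind to its private address
    export SPARK_MASTER_IP=10.0.0.10

    # then, from the master:
    sbin/start-slaves.sh

It is after this that each worker dies with the Netty bind exception above.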


On Sat, Feb 15, 2014 at 1:21 PM, Nan Zhu <zhunanmcg...@gmail.com> wrote:

>  you can list your private IPs in the conf/slaves file
>
> and start the daemons with sbin/start-all.sh
>
> Before that, you'll want to set up passwordless login from your master
> node to all of your worker nodes
>
> Best,
>
> --
> Nan Zhu
>
>
> On Saturday, February 15, 2014 at 3:15 PM, David Thomas wrote:
>
> I have a set of VMs, and each VM instance has its own private IP and a
> publicly accessible IP. When I start the master with default values, it
> throws a bind exception saying it cannot bind to the public IP. So I set
> SPARK_MASTER_IP to the private IP and it starts up fine. Now how do I
> achieve the same for the worker nodes? If I run start-slaves.sh, I get the
> bind exception. I can log in to each slave and pass the -i option to
> spark-class org.apache.spark.deploy.worker.Worker, but isn't there a more
> efficient way to start all the workers from the master node?
>
>
>
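
PS: the passwordless login Nan mentioned is already in place (otherwise
start-slaves.sh wouldn't reach the workers at all). It was set up the usual
OpenSSH way, roughly like this (the user name and addresses are placeholders):

    ssh-keygen -t rsa                    # on the master, once
    ssh-copy-id sparkuser@10.0.0.11      # repeat for each worker's private IP
    ssh-copy-id sparkuser@10.0.0.12
    ssh-copy-id sparkuser@10.0.0.13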
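
And the per-slave workaround mentioned above is, concretely, something like
this on each worker VM (addresses are placeholders for the worker's and
master's private IPs, 7077 being the default master port):

    # bind this worker to its own private IP and register with the master
    bin/spark-class org.apache.spark.deploy.worker.Worker \
      -i 10.0.0.11 \
      spark://10.0.0.10:7077

Doing that by hand on every node is exactly what I'm trying to avoid.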
