Hi, 

Which NIC is your default one depends on your default gateway setup.
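On Linux, a quick way to check is to look at which device owns the default route (this assumes the iproute2 tools; the sample output line below is hypothetical):

```shell
# `ip route show default` prints the default route, e.g.:
#   default via 10.0.0.1 dev eth0 proto dhcp metric 100
# The device after "dev" is the default NIC. Here is a hypothetical
# sample of that output, and how to pull the device name out of it:
route_line="default via 10.0.0.1 dev eth0 proto dhcp metric 100"

# Print the field that follows the "dev" keyword:
nic=$(echo "$route_line" | awk '{for (i = 1; i < NF; i++) if ($i == "dev") print $(i + 1)}')
echo "$nic"   # -> eth0
```

You can then check which address that device carries with `ip addr show eth0` (substituting the device name printed above).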

Best,

-- 
Nan Zhu


On Saturday, February 15, 2014 at 3:55 PM, David Thomas wrote:

> Thanks for your prompt reply.
> 
> ensure that your default NIC is the one binding with private IP
> Can you give me some pointers on how exactly to do that?
> 
> 
> On Sat, Feb 15, 2014 at 1:49 PM, Nan Zhu <zhunanmcg...@gmail.com> wrote:
> > Oh, sorry, I misunderstood your question.
> > 
> > I thought you were asking how to start the worker processes from the 
> > master node.
> > 
> > So you can actually start the processes remotely from the master node, 
> > but they throw an exception on startup? 
> > 
> > That is because, by default, Spark uses the IP address of your first NIC 
> > to start processes.
> > 
> > So you can either ensure that your default NIC is the one bound to the 
> > private IP, or set SPARK_LOCAL_IP to the private address on the worker 
> > nodes. 
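For the SPARK_LOCAL_IP route, a minimal sketch of the relevant line in conf/spark-env.sh on a worker (the address is a placeholder):

```shell
# conf/spark-env.sh on each worker node.
# 10.0.0.12 is a placeholder -- use that node's own private address.
export SPARK_LOCAL_IP=10.0.0.12
```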
> > 
> > Best, 
> > 
> > -- 
> > Nan Zhu
> > 
> > 
> > On Saturday, February 15, 2014 at 3:27 PM, David Thomas wrote:
> > 
> > > That didn't work. I listed the private IPs of the worker nodes in the 
> > > conf/slaves file on the master node and ran sbin/start-slaves.sh. But I 
> > > still get the same error on the worker nodes: Exception in thread "main" 
> > > org.jboss.netty.channel.ChannelException: Failed to bind to: 
> > > slave1/<Public IP>:0  
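The slave1/<Public IP> in the error suggests the hostname slave1 resolves to the public address on that node. One common fix, sketched here with placeholder values, is to pin each worker's own hostname to its private IP in /etc/hosts:

```shell
# /etc/hosts on the worker named slave1 (10.0.0.11 is a placeholder):
#   10.0.0.11   slave1
# With this mapping in place, lookups of slave1 on that node return the
# private IP, so services binding by hostname pick the private address.
```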
> > > 
> > > 
> > > On Sat, Feb 15, 2014 at 1:21 PM, Nan Zhu <zhunanmcg...@gmail.com> wrote:
> > > > you can list your private IPs in the conf/slaves file  
> > > > 
> > > > and start the daemons with sbin/start-all.sh 
> > > > 
> > > > before that, you will want to set up passwordless SSH login from your 
> > > > master node to all of your worker nodes 
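Put together, the steps above might look like the following sketch (the user name and addresses are placeholders, and it assumes ssh-copy-id is installed):

```shell
# conf/slaves on the master node: one worker private IP per line, e.g.
#   10.0.0.11
#   10.0.0.12

# One-time passwordless SSH setup from the master node:
ssh-keygen -t rsa -N "" -f ~/.ssh/id_rsa    # skip if a key already exists
ssh-copy-id user@10.0.0.11
ssh-copy-id user@10.0.0.12

# Then start the master and all listed workers from the master node:
sbin/start-all.sh
```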
> > > > 
> > > > Best, 
> > > > 
> > > > -- 
> > > > Nan Zhu
> > > > 
> > > > 
> > > > 
> > > > On Saturday, February 15, 2014 at 3:15 PM, David Thomas wrote:
> > > > 
> > > > > I have a set of VMs, and each VM instance has its own private IP and 
> > > > > a publicly accessible IP. When I start the master with default 
> > > > > values, it throws a bind exception saying it cannot bind to the 
> > > > > public IP. So I set SPARK_MASTER_IP to the private IP and it starts 
> > > > > up fine. Now how do I achieve the same for the worker nodes? If I run 
> > > > > start-slaves.sh, I get the bind exception. I can log in to each slave 
> > > > > and pass the -i option to spark-class 
> > > > > org.apache.spark.deploy.worker.Worker, but isn't there a more 
> > > > > efficient way to start all workers from the master node?
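The two approaches described here might look like the following sketch (addresses are placeholders; 7077 is Spark's default standalone master port):

```shell
# conf/spark-env.sh on the master (10.0.0.10 is a placeholder private IP):
export SPARK_MASTER_IP=10.0.0.10

# The manual per-worker alternative: log in to a worker and start it bound
# to its own private IP via the -i option (placeholder addresses):
bin/spark-class org.apache.spark.deploy.worker.Worker -i 10.0.0.12 spark://10.0.0.10:7077
```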
> > > > 
> > > 
> > 
> 
