A few of our nodes had (for inexplicable reasons) bound to
localhost.localdomain for a while. For map-reduce this definitely causes
problems (not sure about HDFS): jobs were failing saying they could not find
'localhost.localdomain' (I think this was in the reduce copy phase, trying to
contact the map outputs). I am not terribly sure of the details, but there
are issues with this.
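
One plausible cause (an assumption on my part; I never confirmed it on our
nodes) is the stock /etc/hosts that some distros ship, which lists
localhost.localdomain first on the loopback line, so a reverse lookup of
127.0.0.1 returns localhost.localdomain as the machine's name:

    # /etc/hosts as shipped on some distros
    127.0.0.1   localhost.localdomain localhost

    # adding the node's routable address and name helps lookups return
    # the real name (node1 / 192.168.1.10 are made-up placeholders)
    192.168.1.10   node1.example.com   node1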


-----Original Message-----
From: Raghu Angadi [mailto:[EMAIL PROTECTED]]
Sent: Wed 2/27/2008 10:36 AM
To: core-user@hadoop.apache.org
Subject: Re: Local testing and DHCP
 

It is doable. What was the exact config you used? And what IP address do the
DataNodes show up with on the namenode front page when it is running fine?

I think the trick is to make all the servers bind to the localhost interface
(lo on Linux). E.g., all datanodes should have a 127.0.0.x address.
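
As a sketch, the overrides in conf/hadoop-site.xml could look like this
(assuming 0.16-era property names and the default ports; adjust to your
setup):

<configuration>
  <property>
    <name>fs.default.name</name>   <!-- namenode on loopback -->
    <value>127.0.0.1:9000</value>
  </property>
  <property>
    <name>mapred.job.tracker</name>   <!-- jobtracker on loopback -->
    <value>127.0.0.1:9001</value>
  </property>
  <property>
    <!-- address the datanode binds for data transfer (default port 50010) -->
    <name>dfs.datanode.address</name>
    <value>127.0.0.1:50010</value>
  </property>
</configuration>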

Raghu.

Steve Sapovits wrote:
> 
> When running in Pseudo-Distributed mode as outlined in the Quickstart, I
> see that the DFS is, at some level, identified by the IP address it was
> created under. I'm doing this on a laptop, and when I take it to another
> network, the daemons come up okay but they can't find the DFS. It looks
> like it's because the IP is different from when the DFS was first created.
> Is there a way around this so I can run on the same box and see the same
> DFS regardless of what its IP is?
> 

