Turns out, it does cause problems later on.
I think the problem is that the slaves have, in their hosts files:
127.0.0.1 localhost.localdomain localhost
127.0.0.1 machinename.cse.sc.edu machinename
The reduce phase fails because the reducer cannot get data from the
mappers as it tries to open a
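A quick way to spot this misconfiguration is to check whether any name other than localhost is bound to a loopback address in /etc/hosts. A minimal sketch (the helper name is made up; the sample input mirrors the hosts entries quoted above):

```shell
# Detect the bad pattern: a machine's real hostname mapped to a
# loopback address, which makes DataNodes advertise themselves as
# "localhost". Reads a hosts file on stdin.
find_loopback_fqdn() {
  awk '$1 ~ /^127\./ {
    for (i = 2; i <= NF; i++)
      if ($i != "localhost" && $i !~ /^localhost\./)
        print $i
  }'
}

# Example input mirroring the slaves' hosts files described above:
find_loopback_fqdn <<'EOF'
127.0.0.1 localhost.localdomain localhost
127.0.0.1 machinename.cse.sc.edu machinename
EOF
# prints: machinename.cse.sc.edu and machinename
```

Any name printed is one the DataNode may resolve to 127.0.0.1 instead of the machine's real address.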
Thanks! That worked. I was able to run dfs and put some files in it.
However, when I go to my namenode at http://namenode:50070 I see that
all the datanodes have a name of localhost.
Will this cause bigger problems later on? Or should I just ignore it?
Jose
On Tue, Jul 22, 2008 at 6:48 PM,
That's good. :)
Will this cause bigger problems later on? Or should I just ignore it?
I'm not sure, but I guess there is no problem.
Does anyone have some experience with that?
Regards, Edward J. Yoon
On Wed, Jul 23, 2008 at 11:05 PM, Jose Vidal [EMAIL PROTECTED] wrote:
Thanks! That worked.
I'm trying to install hadoop on our linux machine but after
start-all.sh none of the slaves can connect:
2008-07-22 16:35:27,534 INFO org.apache.hadoop.dfs.DataNode: STARTUP_MSG:
/************************************************************
STARTUP_MSG: Starting DataNode
STARTUP_MSG: host =
In the first instance make sure that all the relevant ports are actually
open. I would also check that your conf files are ok. Looking at the
example below, it seems that /work has a permissions problem.
(Note that telnet has nothing to do with Hadoop as far as I'm aware -- a
better test would
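For checking that the relevant ports are open, one portable sketch uses bash's /dev/tcp pseudo-device. The host name "namenode" and the ports below are assumptions for illustration (9000 was a common fs.default.name port and 50070 the NameNode web UI port in Hadoop of this era); substitute your own values:

```shell
# Probe whether a TCP port on the master is reachable from this slave.
# Uses bash's /dev/tcp redirection, so it needs no extra tools beyond
# coreutils' timeout.
check_port() {
  local host=$1 port=$2
  if timeout 3 bash -c "exec 3<>/dev/tcp/${host}/${port}" 2>/dev/null; then
    echo "OPEN   ${host}:${port}"
  else
    echo "CLOSED ${host}:${port}"
  fi
}

check_port namenode 9000    # HDFS RPC port (assumed value)
check_port namenode 50070   # NameNode web UI (assumed value)
```

If a port shows CLOSED from a slave but OPEN on the master itself, look at firewall rules or at which interface the daemon bound to.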
If you have a static address for the machine, make sure that your
hosts file is pointing to the static address for the namenode host
name as opposed to the 127.0.0.1 address. It should look something
like this with the values replaced with your values.
127.0.0.1
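For reference, a typical layout might be as follows -- the 192.168.0.10 address and example.com names are made-up placeholders, not values from this thread:

```
127.0.0.1      localhost.localdomain  localhost
192.168.0.10   namenode.example.com   namenode
```

The key point is that the namenode's real hostname appears only on the line with its static address, never on the 127.0.0.1 line.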