You should have the conf/slaves file on the master node list the master and
all of the slave nodes (master, node1, node2, and so on, one hostname per
line), and the conf/masters file on the master set to master.
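For example, using the node names from your /etc/hosts below (listing the
master in conf/slaves assumes you also want it to run a datanode and
tasktracker):

conf/slaves on the master:

    master
    node1
    node2
    node3
    node4
    node5
    node6

conf/masters on the master:

    master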
Also, in the /etc/hosts file on all of your nodes, get rid of 'node6' in the
line '127.0.0.1  localhost.localdomain  localhost node6'. Ensure that the
/etc/hosts file contains the same information on every node.
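That is, the loopback line should map only to the localhost names:

    127.0.0.1    localhost.localdomain    localhost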
The hadoop-site.xml files on all nodes should also point at master:portno for
both HDFS (fs.default.name) and the jobtracker (mapred.job.tracker), so that
the datanodes and tasktrackers know where to connect.
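A minimal sketch of the relevant hadoop-site.xml entries; the ports 54310 and
54311 are just commonly used examples, substitute whatever ports you already
have configured:

    <configuration>
      <!-- HDFS: where the namenode listens; must be identical on all nodes -->
      <property>
        <name>fs.default.name</name>
        <value>hdfs://master:54310</value>
      </property>
      <!-- MapReduce: where the jobtracker listens -->
      <property>
        <name>mapred.job.tracker</name>
        <value>master:54311</value>
      </property>
    </configuration>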
Once you have done this, restart Hadoop.
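For example, from the top of the Hadoop install directory on the master
(these scripts ship with the 0.x releases):

    bin/stop-all.sh
    bin/start-all.sh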

On Fri, Apr 17, 2009 at 10:04 AM, jpe30 <jpotte...@gmail.com> wrote:

>
>
>
> Mithila Nagendra wrote:
> >
> > You have to make sure that you can ssh between the nodes. Also check the
> > hosts file in the /etc folder. Both the master and the slave must have
> > each other's machines defined in it. Refer to my previous mail.
> > Mithila
> >
> >
>
>
> I have SSH set up correctly, and here is the /etc/hosts file on node6, one
> of the datanodes.
>
> #<ip-address>   <hostname.domain.org>   <hostname>
> 127.0.0.1               localhost.localdomain   localhost node6
> 192.168.1.10    master
> 192.168.1.1     node1
> 192.168.1.2     node2
> 192.168.1.3     node3
> 192.168.1.4     node4
> 192.168.1.5     node5
> 192.168.1.6     node6
>
> I have the slaves file on each machine listing node1 through node6, and
> each masters file set to master, except for the master itself.  Still, I
> keep getting that same error on the datanodes...
