Oh, I fixed the problem. For another application I had changed the hosts files
to include localhost at 127.0.0.1, and that seems to have broken
everything.
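
For reference, a hosts layout that typically keeps Hadoop happy looks roughly
like the sketch below (the IPs and the slave hostname are placeholders; only
fusemaster comes from this thread). The key point is that each node's own
hostname resolves to its real network IP, and only "localhost" maps to
127.0.0.1:

127.0.0.1       localhost
192.168.0.10    fusemaster.cs.columbia.edu    fusemaster
192.168.0.11    slave1.cs.columbia.edu        slave1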

Thanks for the quick responses, guys.

2012/6/9 shashwat shriparv <dwivedishash...@gmail.com>

> Please send the contents of the hosts file from all the machines, and the
> masters and slaves files from the master and the slave machines.
>
> On Sun, Jun 10, 2012 at 1:39 AM, Joey Krabacher <jkrabac...@gmail.com> wrote:
>
> > Not sure, but I did notice that safe mode is still on. I would investigate
> > that and see if the other nodes show up.
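> >
> > (A quick way to check that from the master, assuming the stock Hadoop 1.x
> > CLI, is:
> >
> > bin/hadoop dfsadmin -safemode get
> >
> > Once the missing DataNodes report in it should leave safe mode on its own;
> > bin/hadoop dfsadmin -safemode leave forces it out.)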
> >
> > /* Joey */
> > On Jun 9, 2012 2:52 PM, "Pierre Antoine DuBoDeNa" <pad...@gmail.com>
> > wrote:
> >
> > > Hello everyone,
> > >
> > > I have a cluster of 5 VMs: one acts as both master and slave, and the rest
> > > are slaves. I run bin/start-all.sh and everything seems OK; I get no errors.
> > >
> > > I checked with jps on every server, and the daemons are running:
> > >
> > > master:
> > > 22418 Jps
> > > 21497 NameNode
> > > 21886 SecondaryNameNode
> > > 21981 JobTracker
> > > 22175 TaskTracker
> > > 21688 DataNode
> > >
> > > slave:
> > > 3161 Jps
> > > 2953 DataNode
> > > 3105 TaskTracker
> > >
> > > But in the web interface I see only 1 server connected; it's as if the
> > > others are ignored. Any clue why this can happen? Where should I look for
> > > errors?
> > >
> > > The HDFS web interface shows:
> > >
> > > Live Nodes (http://fusemaster.cs.columbia.edu:50070/dfsnodelist.jsp?whatNodes=LIVE): 1
> > > Dead Nodes (http://fusemaster.cs.columbia.edu:50070/dfsnodelist.jsp?whatNodes=DEAD): 0
> > >
> > > It doesn't even show the rest of the slaves as dead.
> > >
> > > Can it be a networking issue? (But I start all the processes from the
> > > master, and that starts the processes on all the other nodes.)
> > >
> > > best,
> > > PA
> > >
> >
>
>
>
> --
>
>
> ∞
> Shashwat Shriparv
>
