It looks to me like you didn't install Hadoop consistently across all nodes.

xxx.xx.xx.251: bash:
/home/utdhadoop1/Hadoop/hadoop-0.18.3/bin/hadoop-daemon.sh: No such file or
directory

The above makes me suspect that xxx.xx.xx.251 has Hadoop installed
differently.  Can you try to locate hadoop-daemon.sh on xxx.xx.xx.251 and
adjust its location accordingly?
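One way to check, sketched under the assumption that the install lives
somewhere under /home (adjust the paths to your actual layout):

```shell
# On xxx.xx.xx.251, search for the daemon script (search root is an assumption):
find /home -name hadoop-daemon.sh 2>/dev/null

# If it turns up under a different user's home, a symlink on .251 can make the
# path the master expects resolve there too (hypothetical paths shown):
# ln -s /home/utdhadoop/Hadoop/hadoop-0.18.3 /home/utdhadoop1/Hadoop/hadoop-0.18.3
```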

Alex

On Mon, May 25, 2009 at 10:25 PM, Pankil Doshi <forpan...@gmail.com> wrote:

> Hello,
>
> I tried adding "usern...@hostname" for each entry in the slaves file.
>
> My slaves file has 2 data nodes. It looks like this:
>
> localhost
> utdhado...@xxx.xx.xx.229
> utdhad...@xxx.xx.xx.251
>
>
> The error I get when I start dfs is as below:
>
> starting namenode, logging to
>
> /home/utdhadoop1/Hadoop/hadoop-0.18.3/bin/../logs/hadoop-utdhadoop1-namenode-opencirrus-992.hpl.hp.com.out
> xxx.xx.xx.229: starting datanode, logging to
>
> /home/utdhadoop1/Hadoop/hadoop-0.18.3/bin/../logs/hadoop-utdhadoop1-datanode-opencirrus-992.hpl.hp.com.out
> xxx.xx.xx.251: bash: line 0: cd:
> /home/utdhadoop1/Hadoop/hadoop-0.18.3/bin/..: No such file or directory
> xxx.xx.xx.251: bash:
> /home/utdhadoop1/Hadoop/hadoop-0.18.3/bin/hadoop-daemon.sh: No such file or
> directory
> localhost: datanode running as process 25814. Stop it first.
> xxx.xx.xx.229: starting secondarynamenode, logging to
>
> /home/utdhadoop1/Hadoop/hadoop-0.18.3/bin/../logs/hadoop-utdhadoop1-secondarynamenode-opencirrus-992.hpl.hp.com.out
> localhost: secondarynamenode running as process 25959. Stop it first.
>
>
>
> Basically it looks for
> "/home/utdhadoop1/Hadoop/hadoop-0.18.3/bin/hadoop-daemon.sh"
> but instead it should look for "/home/utdhadoop/Hadoop/...." as
> xxx.xx.xx.251 has the username utdhadoop.
>
> Any inputs??
>
> Thanks
> Pankil
>
> On Wed, May 20, 2009 at 6:30 PM, Todd Lipcon <t...@cloudera.com> wrote:
>
> > On Wed, May 20, 2009 at 4:14 PM, Alex Loddengaard <a...@cloudera.com>
> > wrote:
> >
> > > First of all, if you can get all machines to have the same user, that
> > would
> > > greatly simplify things.
> > >
> > > If, for whatever reason, you absolutely can't get the same user on all
> > > machines, then you could do either of the following:
> > >
> > > 1) Change the *-all.sh scripts to read from a slaves file that has two
> > > fields: a host and a user
> >
> >
> > To add to what Alex said, you should actually already be able to do this
> > with the existing scripts by simply using the format "usern...@hostname"
> > for
> > each entry in the slaves file.
> >
> > -Todd
> >
>