Sonal,

Can I ask why you're sleeping between starting HDFS and MapReduce? I've
never needed this in my own code. In general, Hadoop is pretty tolerant
of starting daemons "out of order."

If you need to wait for HDFS to be ready and come out of safe mode before
launching a job, that's another story, but you can accomplish that with:

$HADOOP_HOME/bin/hadoop dfsadmin -safemode wait

... which will block until HDFS is ready for user commands in read/write
mode.
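
If you'd rather keep the start-all.sh approach from your earlier mail, you
could swap the fixed sleep for that wait. A sketch against the stock 0.20
scripts (untested):

bin=`dirname "$0"`
bin=`cd "$bin"; pwd`

. "$bin"/hadoop-config.sh

# start dfs daemons
"$bin"/start-dfs.sh --config $HADOOP_CONF_DIR
# block until HDFS leaves safe mode instead of sleeping a fixed 60s
"$bin"/hadoop --config $HADOOP_CONF_DIR dfsadmin -safemode wait
# start mapred daemons
"$bin"/start-mapred.sh --config $HADOOP_CONF_DIR
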
- Aaron


On Fri, Feb 12, 2010 at 8:44 AM, Sonal Goyal <sonalgoy...@gmail.com> wrote:

> Hi
>
> I faced a similar issue on Ubuntu with Hadoop 0.20 and modified the
> start-all script to introduce a sleep:
>
> bin=`dirname "$0"`
> bin=`cd "$bin"; pwd`
>
> . "$bin"/hadoop-config.sh
>
> # start dfs daemons
> "$bin"/start-dfs.sh --config $HADOOP_CONF_DIR
> echo 'sleeping'
> sleep 60
> echo 'awake'
> # start mapred daemons
> "$bin"/start-mapred.sh --config $HADOOP_CONF_DIR
>
>
> This seems to work for me. Please see if it helps in your setup.
> Thanks and Regards,
> Sonal
>
>
> On Thu, Feb 11, 2010 at 3:56 AM, E. Sammer <e...@lifeless.net> wrote:
>
> > On 2/10/10 5:19 PM, Nick Klosterman wrote:
> >
> > >> @E.Sammer, no I don't *think* that it is part of another cluster. The
> > >> tutorial is for a single-node cluster, just an initial setup to see if
> > >> you can get things up and running. I have reformatted the namenode
> > >> several times in my effort to get Hadoop to work.
> >>
> >
> > What I mean is that the data node, at some point, connected to your name
> > node. If you reformat the name node, the data node must be wiped clean;
> > it's effectively trying to join a name node that no longer exists.
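> >
> > If you're on the single-node tutorial with 0.20, the recovery sequence
> > is roughly this (a sketch; /tmp/hadoop-$USER/dfs/data is just the
> > default dfs.data.dir, so check your conf if you've overridden it):
> >
> > # stop the daemons, wipe the stale data node storage, reformat, restart
> > bin/stop-all.sh
> > rm -rf /tmp/hadoop-$USER/dfs/data
> > bin/hadoop namenode -format
> > bin/start-all.sh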
> >
> >
> > --
> > Eric Sammer
> > e...@lifeless.net
> > http://esammer.blogspot.com
> >
>
