I think you should copy the namespaceID of your master, which is in the
name/current/VERSION file, to all the slaves.
Also, use ./start-dfs.sh and then ./start-mapred.sh to start the respective daemons
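
A minimal sketch of that fix is below. The directories and the sample VERSION
contents here are throwaway stand-ins; on a real cluster NAME_DIR would be the
namenode's dfs.name.dir and DATA_DIR each datanode's dfs.data.dir (e.g.
/tmp/mylocal in the log further down):

```shell
# Demo stand-ins for the real storage directories (assumption, not real paths).
NAME_DIR=$(mktemp -d)/name
DATA_DIR=$(mktemp -d)/data
mkdir -p "$NAME_DIR/current" "$DATA_DIR/current"

# Fake VERSION files with mismatched namespaceIDs, like the failing cluster.
printf 'namespaceID=123456789\ncTime=0\n' > "$NAME_DIR/current/VERSION"
printf 'namespaceID=987654321\ncTime=0\n' > "$DATA_DIR/current/VERSION"

# 1. Read the master's namespaceID from its VERSION file.
NSID=$(sed -n 's/^namespaceID=//p' "$NAME_DIR/current/VERSION")

# 2. Rewrite the datanode's VERSION to match (repeat on every slave).
sed -i "s/^namespaceID=.*/namespaceID=$NSID/" "$DATA_DIR/current/VERSION"

grep '^namespaceID=' "$DATA_DIR/current/VERSION"   # prints namespaceID=123456789
```

On a real cluster the second sed runs on each slave (or via ssh/scp), with the
datanode stopped while the file is edited.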

http://hadoop-tutorial.blogspot.com/2010/11/running-hadoop-in-distributed-mode.html

*Regards*,
Rahul Patodi
Software Engineer,
Impetus Infotech (India) Pvt Ltd,
www.impetus.com
Mob:09907074413


On Wed, Feb 9, 2011 at 11:48 AM, madhu phatak <phatak....@gmail.com> wrote:

> Don't use start-all.sh; use the datanode daemon script to start the
> datanode.
>
> On Mon, Feb 7, 2011 at 11:52 PM, ahmednagy <ahmed_said_n...@hotmail.com
> >wrote:
>
> >
> > Dear All,
> > Please Help. I have tried to start the data nodes with ./start-all.sh on
> a
> > 7
> > node cluster however I recieve incompatible namespace when i try to put
> any
> > file on the HDFS I tried the suggestions in the known issues for changing
> > the VERSION number in the hdfs however it did not work. any ideas Please
> > advise. I am attaching the error in the log file for data node
> > Regards
> >
> >
> > https://issues.apache.org/jira/browse/HDFS-107
> >
> >
> > 2011-02-07 18:52:28,691 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: STARTUP_MSG:
> > /************************************************************
> > STARTUP_MSG: Starting DataNode
> > STARTUP_MSG:   host = n01/192.168.0.1
> > STARTUP_MSG:   args = []
> > STARTUP_MSG:   version = 0.21.0
> > STARTUP_MSG:   classpath = /home/ahmednagy/HadoopStandalone/hadoop-0.21.0/bin/../conf:/usr/lib/jvm/java-6-sun/lib/tools.jar:/home/ahmednagy/HadoopStandalone$
> > STARTUP_MSG:   build = https://svn.apache.org/repos/asf/hadoop/common/branches/branch-0.21 -r 985326; compiled by 'tomwhite' on Tue Aug 17 01:02:28 EDT 2010
> > ************************************************************/
> > 2011-02-07 18:52:28,881 WARN org.apache.hadoop.hdfs.server.common.Util: Path /tmp/mylocal/ should be specified as a URI in configuration files. Please updat$
> > 2011-02-07 18:52:29,115 INFO org.apache.hadoop.security.Groups: Group mapping impl=org.apache.hadoop.security.ShellBasedUnixGroupsMapping; cacheTimeout=3000$
> > 2011-02-07 18:52:29,580 ERROR org.apache.hadoop.hdfs.server.datanode.DataNode: java.io.IOException: Incompatible namespaceIDs in /tmp/mylocal: namenode name$
> >        at org.apache.hadoop.hdfs.server.datanode.DataStorage.doTransition(DataStorage.java:237)
> >        at org.apache.hadoop.hdfs.server.datanode.DataStorage.recoverTransitionRead(DataStorage.java:152)
> >        at org.apache.hadoop.hdfs.server.datanode.DataNode.startDataNode(DataNode.java:336)
> >        at org.apache.hadoop.hdfs.server.datanode.DataNode.<init>(DataNode.java:260)
> >        at org.apache.hadoop.hdfs.server.datanode.DataNode.<init>(DataNode.java:237)
> >        at org.apache.hadoop.hdfs.server.datanode.DataNode.makeInstance(DataNode.java:1440)
> >        at org.apache.hadoop.hdfs.server.datanode.DataNode.instantiateDataNode(DataNode.java:1393)
> >        at org.apache.hadoop.hdfs.server.datanode.DataNode.createDataNode(DataNode.java:1407)
> >        at org.apache.hadoop.hdfs.server.datanode.DataNode.main(DataNode.java:1552)
> >
> > --
> > View this message in context:
> > http://old.nabble.com/Data-Nodes-do-not-start-tp30866323p30866323.html
> > Sent from the Hadoop core-user mailing list archive at Nabble.com.
> >
> >
>

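The datanode-only restart suggested in the quoted reply could look like the
sketch below, run on each slave. HADOOP_HOME here is a placeholder assumption;
in 0.21 the per-daemon script lives at bin/hadoop-daemon.sh:

```shell
# Sketch: start only the DataNode daemon on one slave (Hadoop 0.21 layout).
# HADOOP_HOME below is a placeholder -- point it at your actual install.
HADOOP_HOME=${HADOOP_HOME:-/opt/hadoop-0.21.0}
DAEMON="$HADOOP_HOME/bin/hadoop-daemon.sh"

if [ -x "$DAEMON" ]; then
  "$DAEMON" start datanode   # starts just this node's DataNode
else
  echo "hadoop-daemon.sh not found at $DAEMON" >&2
fi
```

This avoids start-all.sh bouncing every daemon on the cluster when only the
datanodes need restarting after the VERSION fix.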

