I often find myself editing src/saveVersion.sh to fake out the version
numbers when I build a Hadoop jar for the first time and have to deploy it
on an already running cluster.
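Roughly the hack I mean, sketched from memory against an 0.18/0.19-era tree
(treat the variable names in your copy of src/saveVersion.sh as the source
of truth, this is not the literal script):

  # src/saveVersion.sh normally takes the version from build.xml and the
  # revision from "svn info"; pin both so the freshly built jar claims the
  # same build as the already-running cluster:
  version="0.18.3"     # whatever the cluster's STARTUP_MSG banner reports
  revision="736250"    # the cluster's -r revision from the same banner

As I recall, the datanode/namenode build-version check boils down to
comparing these strings, which is why an unpatched fresh build refuses to
join an older cluster.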


On Mon, Jun 15, 2009 at 11:57 PM, Ian jonhson <jonhson....@gmail.com> wrote:

> If you rebuilt Hadoop, following the HowToRelease wiki page may reduce
> the trouble you run into.
>
>
> On Sat, May 16, 2009 at 7:20 AM, Pankil Doshi<forpan...@gmail.com> wrote:
> > I got the solution.
> >
> > The namespace IDs were somehow incompatible, so I had to clean the data
> > dir and temp dir, format the cluster, and make a fresh start.
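> > (For the archives: the non-destructive fix is to make the datanodes'
> > namespaceID match the namenode's instead of wiping. With the dirs from
> > the hadoop-site.xml quoted below, the check would be roughly:
> >
> >   # on the namenode (dfs.name.dir defaults under hadoop.tmp.dir)
> >   grep namespaceID /Hadoop/Temp/dfs/name/current/VERSION
> >   # on each datanode, one VERSION file per dfs.data.dir entry
> >   grep namespaceID /Hadoop/Data/current/VERSION /data/Hadoop/current/VERSION
> >
> > Editing the datanodes' VERSION files to match lets them rejoin without
> > losing the blocks; reformatting works too, it just throws the data away.)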
> >
> > Pankil
> >
> > On Fri, May 15, 2009 at 2:25 AM, jason hadoop <jason.had...@gmail.com> wrote:
> >
> >> There should be a few more lines at the end.
> >> We only want the part from the last STARTUP_MSG to the end.
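> >> One way to pull out just that chunk with standard tools:
> >>
> >>   tac hadoop-hadoop-datanode-*.log | sed '/STARTUP_MSG: Starting DataNode/q' | tac
> >>
> >> (reverse the file, keep everything up to the first STARTUP_MSG in the
> >> reversed stream, then reverse back).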
> >>
> >> On one of mine a successful start looks like this:
> >> STARTUP_MSG: Starting DataNode
> >> STARTUP_MSG:   host = at/192.168.1.119
> >> STARTUP_MSG:   args = []
> >> STARTUP_MSG:   version = 0.19.1-dev
> >> STARTUP_MSG:   build =  -r ; compiled by 'jason' on Tue Mar 17 04:03:57 PDT 2009
> >> ************************************************************/
> >> 2009-03-17 03:08:11,884 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: Registered FSDatasetStatusMBean
> >> 2009-03-17 03:08:11,886 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: Opened info server at 50010
> >> 2009-03-17 03:08:11,889 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: Balancing bandwith is 1048576 bytes/s
> >> 2009-03-17 03:08:12,142 INFO org.mortbay.http.HttpServer: Version Jetty/5.1.4
> >> 2009-03-17 03:08:12,155 INFO org.mortbay.util.Credential: Checking Resource aliases
> >> 2009-03-17 03:08:12,518 INFO org.mortbay.util.Container: Started org.mortbay.jetty.servlet.webapplicationhand...@1e184cb
> >> 2009-03-17 03:08:12,578 INFO org.mortbay.util.Container: Started WebApplicationContext[/static,/static]
> >> 2009-03-17 03:08:12,721 INFO org.mortbay.util.Container: Started org.mortbay.jetty.servlet.webapplicationhand...@1d9e282
> >> 2009-03-17 03:08:12,722 INFO org.mortbay.util.Container: Started WebApplicationContext[/logs,/logs]
> >> 2009-03-17 03:08:12,878 INFO org.mortbay.util.Container: Started org.mortbay.jetty.servlet.webapplicationhand...@14a75bb
> >> 2009-03-17 03:08:12,884 INFO org.mortbay.util.Container: Started WebApplicationContext[/,/]
> >> 2009-03-17 03:08:12,951 INFO org.mortbay.http.SocketListener: Started SocketListener on 0.0.0.0:50075
> >> 2009-03-17 03:08:12,951 INFO org.mortbay.util.Container: Started org.mortbay.jetty.ser...@1358f03
> >> 2009-03-17 03:08:12,957 INFO org.apache.hadoop.metrics.jvm.JvmMetrics: Initializing JVM Metrics with processName=DataNode, sessionId=null
> >> 2009-03-17 03:08:13,242 INFO org.apache.hadoop.ipc.metrics.RpcMetrics: Initializing RPC Metrics with hostName=DataNode, port=50020
> >> 2009-03-17 03:08:13,264 INFO org.apache.hadoop.ipc.Server: IPC Server Responder: starting
> >> 2009-03-17 03:08:13,304 INFO org.apache.hadoop.ipc.Server: IPC Server listener on 50020: starting
> >> 2009-03-17 03:08:13,343 INFO org.apache.hadoop.ipc.Server: IPC Server handler 0 on 50020: starting
> >> 2009-03-17 03:08:13,343 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: dnRegistration = DatanodeRegistration(192.168.1.119:50010, storageID=DS-540597485-192.168.1.119-50010-1237022386925, infoPort=50075, ipcPort=50020)
> >> 2009-03-17 03:08:13,344 INFO org.apache.hadoop.ipc.Server: IPC Server handler 1 on 50020: starting
> >> 2009-03-17 03:08:13,344 INFO org.apache.hadoop.ipc.Server: IPC Server handler 2 on 50020: starting
> >> 2009-03-17 03:08:13,351 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: DatanodeRegistration(192.168.1.119:50010, storageID=DS-540597485-192.168.1.119-50010-1237022386925, infoPort=50075, ipcPort=50020)In DataNode.run, data = FSDataset{dirpath='/tmp/hadoop-0.19.0-jason/dfs/data/current'}
> >> 2009-03-17 03:08:13,352 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: using BLOCKREPORT_INTERVAL of 3600000msec Initial delay: 0msec
> >> 2009-03-17 03:08:13,391 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: BlockReport of 14 blocks got processed in 27 msecs
> >> 2009-03-17 03:08:13,392 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: Starting Periodic block scanner.
> >>
> >>
> >>
> >> On Thu, May 14, 2009 at 9:51 PM, Pankil Doshi <forpan...@gmail.com> wrote:
> >>
> >> > This is the log from the datanode.
> >> >
> >> >
> >> > 2009-05-14 00:36:14,559 INFO org.apache.hadoop.dfs.DataNode: BlockReport of 82 blocks got processed in 12 msecs
> >> > 2009-05-14 01:36:15,768 INFO org.apache.hadoop.dfs.DataNode: BlockReport of 82 blocks got processed in 8 msecs
> >> > 2009-05-14 02:36:13,975 INFO org.apache.hadoop.dfs.DataNode: BlockReport of 82 blocks got processed in 9 msecs
> >> > 2009-05-14 03:36:15,189 INFO org.apache.hadoop.dfs.DataNode: BlockReport of 82 blocks got processed in 12 msecs
> >> > 2009-05-14 04:36:13,384 INFO org.apache.hadoop.dfs.DataNode: BlockReport of 82 blocks got processed in 9 msecs
> >> > 2009-05-14 05:36:14,592 INFO org.apache.hadoop.dfs.DataNode: BlockReport of 82 blocks got processed in 9 msecs
> >> > 2009-05-14 06:36:15,806 INFO org.apache.hadoop.dfs.DataNode: BlockReport of 82 blocks got processed in 12 msecs
> >> > 2009-05-14 07:36:14,008 INFO org.apache.hadoop.dfs.DataNode: BlockReport of 82 blocks got processed in 12 msecs
> >> > 2009-05-14 08:36:15,204 INFO org.apache.hadoop.dfs.DataNode: BlockReport of 82 blocks got processed in 9 msecs
> >> > 2009-05-14 09:36:13,430 INFO org.apache.hadoop.dfs.DataNode: BlockReport of 82 blocks got processed in 12 msecs
> >> > 2009-05-14 10:36:14,642 INFO org.apache.hadoop.dfs.DataNode: BlockReport of 82 blocks got processed in 12 msecs
> >> > 2009-05-14 11:36:15,850 INFO org.apache.hadoop.dfs.DataNode: BlockReport of 82 blocks got processed in 9 msecs
> >> > 2009-05-14 12:36:14,193 INFO org.apache.hadoop.dfs.DataNode: BlockReport of 82 blocks got processed in 11 msecs
> >> > 2009-05-14 13:36:15,454 INFO org.apache.hadoop.dfs.DataNode: BlockReport of 82 blocks got processed in 12 msecs
> >> > 2009-05-14 14:36:13,662 INFO org.apache.hadoop.dfs.DataNode: BlockReport of 82 blocks got processed in 9 msecs
> >> > 2009-05-14 15:36:14,930 INFO org.apache.hadoop.dfs.DataNode: BlockReport of 82 blocks got processed in 13 msecs
> >> > 2009-05-14 16:36:16,151 INFO org.apache.hadoop.dfs.DataNode: BlockReport of 82 blocks got processed in 12 msecs
> >> > 2009-05-14 17:36:14,407 INFO org.apache.hadoop.dfs.DataNode: BlockReport of 82 blocks got processed in 9 msecs
> >> > 2009-05-14 18:36:15,659 INFO org.apache.hadoop.dfs.DataNode: BlockReport of 82 blocks got processed in 10 msecs
> >> > 2009-05-14 19:27:02,188 WARN org.apache.hadoop.dfs.DataNode: java.io.IOException: Call to hadoopmaster.utdallas.edu/10.110.95.61:9000 failed on local except$
> >> >        at org.apache.hadoop.ipc.Client.wrapException(Client.java:751)
> >> >        at org.apache.hadoop.ipc.Client.call(Client.java:719)
> >> >        at org.apache.hadoop.ipc.RPC$Invoker.invoke(RPC.java:216)
> >> >        at org.apache.hadoop.dfs.$Proxy4.sendHeartbeat(Unknown Source)
> >> >        at org.apache.hadoop.dfs.DataNode.offerService(DataNode.java:690)
> >> >        at org.apache.hadoop.dfs.DataNode.run(DataNode.java:2967)
> >> >        at java.lang.Thread.run(Thread.java:619)
> >> > Caused by: java.io.EOFException
> >> >        at java.io.DataInputStream.readInt(DataInputStream.java:375)
> >> >        at org.apache.hadoop.ipc.Client$Connection.receiveResponse(Client.java:500)
> >> >        at org.apache.hadoop.ipc.Client$Connection.run(Client.java:442)
> >> >
> >> > 2009-05-14 19:27:06,198 INFO org.apache.hadoop.ipc.Client: Retrying connect to server: hadoopmaster.utdallas.edu/10.110.95.61:9000. Already tried 0 time(s).
> >> > 2009-05-14 19:27:06,436 INFO org.apache.hadoop.dfs.DataNode: SHUTDOWN_MSG:
> >> > /************************************************************
> >> > SHUTDOWN_MSG: Shutting down DataNode at Slave1/127.0.1.1
> >> > ************************************************************/
> >> > 2009-05-14 19:27:21,737 INFO org.apache.hadoop.dfs.DataNode: STARTUP_MSG:
> >> > /************************************************************
> >> > STARTUP_MSG: Starting DataNode
> >> > STARTUP_MSG:   host = Slave1/127.0.1.1
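> >> >
> >> > (Side note: when a datanode loops on "Retrying connect to server" like
> >> > this, it is worth ruling out plain connectivity before anything
> >> > Hadoop-specific -- e.g. from the slave:
> >> >
> >> >   telnet hadoopmaster.utdallas.edu 9000
> >> >
> >> > If that connects, the RPC port is reachable and the problem is at the
> >> > Hadoop level, e.g. mismatched namespaceIDs or build versions.)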
> >> >
> >> >
> >> > On Thu, May 14, 2009 at 11:43 PM, jason hadoop <jason.had...@gmail.com> wrote:
> >> >
> >> > > The datanode logs are on the datanode machines, in the log directory.
> >> > > You may wish to buy my book and read chapter 4 on HDFS management.
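> >> > >
> >> > > Your own start-dfs.sh output shows where each one logs; something
> >> > > like this (path taken from that output, with .out swapped for .log)
> >> > > will pull one back:
> >> > >
> >> > >   ssh slave1.local 'tail -100 /Hadoop/hadoop-0.18.3/logs/hadoop-hadoop-datanode-Slave1.log'
> >> > >
> >> > > The .out file is just captured stdout/stderr; the .log file has the
> >> > > log4j output you want.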
> >> > >
> >> > > > On Thu, May 14, 2009 at 9:39 PM, Pankil Doshi <forpan...@gmail.com> wrote:
> >> > >
> >> > > > Can you guide me to where I can find the datanode log files? I
> >> > > > cannot find them in $hadoop/logs.
> >> > > >
> >> > > > I can only find the following files in the logs folder:
> >> > > >
> >> > > > hadoop-hadoop-namenode-hadoopmaster.log
> >> > > > hadoop-hadoop-namenode-hadoopmaster.out
> >> > > > hadoop-hadoop-namenode-hadoopmaster.out.1
> >> > > > hadoop-hadoop-secondarynamenode-hadoopmaster.log
> >> > > > hadoop-hadoop-secondarynamenode-hadoopmaster.out
> >> > > > hadoop-hadoop-secondarynamenode-hadoopmaster.out.1
> >> > > > history
> >> > > >
> >> > > >
> >> > > > Thanks
> >> > > > Pankil
> >> > > >
> >> > > > On Thu, May 14, 2009 at 11:27 PM, jason hadoop <jason.had...@gmail.com> wrote:
> >> > > >
> >> > > > > You have to examine the datanode log files.
> >> > > > > The namenode does not start the datanodes; the start script does.
> >> > > > > The namenode passively waits for the datanodes to connect to it.
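> >> > > > >
> >> > > > > In outline, start-dfs.sh does little more than this (a simplified
> >> > > > > sketch, not the literal script):
> >> > > > >
> >> > > > >   bin/hadoop-daemon.sh start namenode
> >> > > > >   for host in $(cat conf/slaves); do
> >> > > > >     ssh $host "/Hadoop/hadoop-0.18.3/bin/hadoop-daemon.sh start datanode"
> >> > > > >   done
> >> > > > >
> >> > > > > So a clean "starting datanode" line only proves the ssh worked,
> >> > > > > not that the datanode stayed up or ever reached the namenode.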
> >> > > > >
> >> > > > > On Thu, May 14, 2009 at 6:43 PM, Pankil Doshi <forpan...@gmail.com> wrote:
> >> > > > >
> >> > > > > > Hello Everyone,
> >> > > > > >
> >> > > > > > Actually I had a cluster which was up.
> >> > > > > >
> >> > > > > > But I stopped the cluster because I wanted to format it, and
> >> > > > > > now I cannot start it back up.
> >> > > > > >
> >> > > > > > 1) When I give "start-dfs.sh" I get the following on screen:
> >> > > > > >
> >> > > > > > starting namenode, logging to /Hadoop/hadoop-0.18.3/bin/../logs/hadoop-hadoop-namenode-hadoopmaster.out
> >> > > > > > slave1.local: starting datanode, logging to /Hadoop/hadoop-0.18.3/bin/../logs/hadoop-hadoop-datanode-Slave1.out
> >> > > > > > slave3.local: starting datanode, logging to /Hadoop/hadoop-0.18.3/bin/../logs/hadoop-hadoop-datanode-Slave3.out
> >> > > > > > slave4.local: starting datanode, logging to /Hadoop/hadoop-0.18.3/bin/../logs/hadoop-hadoop-datanode-Slave4.out
> >> > > > > > slave2.local: starting datanode, logging to /Hadoop/hadoop-0.18.3/bin/../logs/hadoop-hadoop-datanode-Slave2.out
> >> > > > > > slave5.local: starting datanode, logging to /Hadoop/hadoop-0.18.3/bin/../logs/hadoop-hadoop-datanode-Slave5.out
> >> > > > > > slave6.local: starting datanode, logging to /Hadoop/hadoop-0.18.3/bin/../logs/hadoop-hadoop-datanode-Slave6.out
> >> > > > > > slave9.local: starting datanode, logging to /Hadoop/hadoop-0.18.3/bin/../logs/hadoop-hadoop-datanode-Slave9.out
> >> > > > > > slave8.local: starting datanode, logging to /Hadoop/hadoop-0.18.3/bin/../logs/hadoop-hadoop-datanode-Slave8.out
> >> > > > > > slave7.local: starting datanode, logging to /Hadoop/hadoop-0.18.3/bin/../logs/hadoop-hadoop-datanode-Slave7.out
> >> > > > > > slave10.local: starting datanode, logging to /Hadoop/hadoop-0.18.3/bin/../logs/hadoop-hadoop-datanode-Slave10.out
> >> > > > > > hadoopmaster.local: starting secondarynamenode, logging to /Hadoop/hadoop-0.18.3/bin/../logs/hadoop-hadoop-secondarynamenode-hadoopmaster.out
> >> > > > > >
> >> > > > > >
> >> > > > > > 2) From the log file named "hadoop-hadoop-namenode-hadoopmaster.log" I get the following:
> >> > > > > >
> >> > > > > >
> >> > > > > >
> >> > > > > > 2009-05-14 20:28:23,515 INFO org.apache.hadoop.dfs.NameNode: STARTUP_MSG:
> >> > > > > > /************************************************************
> >> > > > > > STARTUP_MSG: Starting NameNode
> >> > > > > > STARTUP_MSG:   host = hadoopmaster/127.0.0.1
> >> > > > > > STARTUP_MSG:   args = []
> >> > > > > > STARTUP_MSG:   version = 0.18.3
> >> > > > > > STARTUP_MSG:   build = https://svn.apache.org/repos/asf/hadoop/core/branches/branch-0.18 -r 736250; compiled by 'ndaley' on Thu Jan 22 23:12:08 UTC 2009
> >> > > > > > ************************************************************/
> >> > > > > > 2009-05-14 20:28:23,717 INFO org.apache.hadoop.ipc.metrics.RpcMetrics: Initializing RPC Metrics with hostName=NameNode, port=9000
> >> > > > > > 2009-05-14 20:28:23,728 INFO org.apache.hadoop.dfs.NameNode: Namenode up at: hadoopmaster.local/192.168.0.1:9000
> >> > > > > > 2009-05-14 20:28:23,733 INFO org.apache.hadoop.metrics.jvm.JvmMetrics: Initializing JVM Metrics with processName=NameNode, sessionId=null
> >> > > > > > 2009-05-14 20:28:23,743 INFO org.apache.hadoop.dfs.NameNodeMetrics: Initializing NameNodeMeterics using context object:org.apache.hadoop.metrics.spi.NullContext
> >> > > > > > 2009-05-14 20:28:23,856 INFO org.apache.hadoop.fs.FSNamesystem: fsOwner=hadoop,hadoop,adm,dialout,fax,cdrom,floppy,tape,audio,dip,video,plugdev,fuse,lpadmin,admin,sambashare
> >> > > > > > 2009-05-14 20:28:23,856 INFO org.apache.hadoop.fs.FSNamesystem: supergroup=supergroup
> >> > > > > > 2009-05-14 20:28:23,856 INFO org.apache.hadoop.fs.FSNamesystem: isPermissionEnabled=true
> >> > > > > > 2009-05-14 20:28:23,883 INFO org.apache.hadoop.dfs.FSNamesystemMetrics: Initializing FSNamesystemMeterics using context object:org.apache.hadoop.metrics.spi.NullContext
> >> > > > > > 2009-05-14 20:28:23,885 INFO org.apache.hadoop.fs.FSNamesystem: Registered FSNamesystemStatusMBean
> >> > > > > > 2009-05-14 20:28:23,964 INFO org.apache.hadoop.dfs.Storage: Number of files = 1
> >> > > > > > 2009-05-14 20:28:23,971 INFO org.apache.hadoop.dfs.Storage: Number of files under construction = 0
> >> > > > > > 2009-05-14 20:28:23,971 INFO org.apache.hadoop.dfs.Storage: Image file of size 80 loaded in 0 seconds.
> >> > > > > > 2009-05-14 20:28:23,972 INFO org.apache.hadoop.dfs.Storage: Edits file edits of size 4 edits # 0 loaded in 0 seconds.
> >> > > > > > 2009-05-14 20:28:23,974 INFO org.apache.hadoop.fs.FSNamesystem: Finished loading FSImage in 155 msecs
> >> > > > > > 2009-05-14 20:28:23,976 INFO org.apache.hadoop.fs.FSNamesystem: Total number of blocks = 0
> >> > > > > > 2009-05-14 20:28:23,988 INFO org.apache.hadoop.fs.FSNamesystem: Number of invalid blocks = 0
> >> > > > > > 2009-05-14 20:28:23,988 INFO org.apache.hadoop.fs.FSNamesystem: Number of under-replicated blocks = 0
> >> > > > > > 2009-05-14 20:28:23,988 INFO org.apache.hadoop.fs.FSNamesystem: Number of over-replicated blocks = 0
> >> > > > > > 2009-05-14 20:28:23,988 INFO org.apache.hadoop.dfs.StateChange: STATE* Leaving safe mode after 0 secs.
> >> > > > > > *2009-05-14 20:28:23,989 INFO org.apache.hadoop.dfs.StateChange: STATE* Network topology has 0 racks and 0 datanodes*
> >> > > > > > 2009-05-14 20:28:23,989 INFO org.apache.hadoop.dfs.StateChange: STATE* UnderReplicatedBlocks has 0 blocks
> >> > > > > > 2009-05-14 20:28:29,128 INFO org.mortbay.util.Credential: Checking Resource aliases
> >> > > > > > 2009-05-14 20:28:29,243 INFO org.mortbay.http.HttpServer: Version Jetty/5.1.4
> >> > > > > > 2009-05-14 20:28:29,244 INFO org.mortbay.util.Container: Started HttpContext[/static,/static]
> >> > > > > > 2009-05-14 20:28:29,245 INFO org.mortbay.util.Container: Started HttpContext[/logs,/logs]
> >> > > > > > 2009-05-14 20:28:29,750 INFO org.mortbay.util.Container: Started org.mortbay.jetty.servlet.webapplicationhand...@7fcebc9f
> >> > > > > > 2009-05-14 20:28:29,838 INFO org.mortbay.util.Container: Started WebApplicationContext[/,/]
> >> > > > > > 2009-05-14 20:28:29,843 INFO org.mortbay.http.SocketListener: Started SocketListener on 0.0.0.0:50070
> >> > > > > > 2009-05-14 20:28:29,843 INFO org.mortbay.util.Container: Started org.mortbay.jetty.ser...@61acfa31
> >> > > > > > 2009-05-14 20:28:29,843 INFO org.apache.hadoop.fs.FSNamesystem: Web-server up at: 0.0.0.0:50070
> >> > > > > > 2009-05-14 20:28:29,843 INFO org.apache.hadoop.ipc.Server: IPC Server Responder: starting
> >> > > > > > 2009-05-14 20:28:29,844 INFO org.apache.hadoop.ipc.Server: IPC Server listener on 9000: starting
> >> > > > > > 2009-05-14 20:28:29,865 INFO org.apache.hadoop.ipc.Server: IPC Server handler 0 on 9000: starting
> >> > > > > > 2009-05-14 20:28:29,876 INFO org.apache.hadoop.ipc.Server: IPC Server handler 1 on 9000: starting
> >> > > > > > 2009-05-14 20:28:29,877 INFO org.apache.hadoop.ipc.Server: IPC Server handler 2 on 9000: starting
> >> > > > > > 2009-05-14 20:28:29,877 INFO org.apache.hadoop.ipc.Server: IPC Server handler 3 on 9000: starting
> >> > > > > > 2009-05-14 20:28:29,878 INFO org.apache.hadoop.ipc.Server: IPC Server handler 4 on 9000: starting
> >> > > > > > 2009-05-14 20:28:29,879 INFO org.apache.hadoop.ipc.Server: IPC Server handler 5 on 9000: starting
> >> > > > > > 2009-05-14 20:28:29,879 INFO org.apache.hadoop.ipc.Server: IPC Server handler 6 on 9000: starting
> >> > > > > > 2009-05-14 20:28:29,881 INFO org.apache.hadoop.ipc.Server: IPC Server handler 7 on 9000: starting
> >> > > > > > 2009-05-14 20:28:29,881 INFO org.apache.hadoop.ipc.Server: IPC Server handler 8 on 9000: starting
> >> > > > > > 2009-05-14 20:28:29,882 INFO org.apache.hadoop.ipc.Server: IPC Server handler 9 on 9000: starting
> >> > > > > > 2009-05-14 20:33:35,774 INFO org.apache.hadoop.fs.FSNamesystem: Roll Edit Log from 192.168.0.1
> >> > > > > > 2009-05-14 20:33:35,775 INFO org.apache.hadoop.fs.FSNamesystem: Number of transactions: 0 Total time for transactions(ms): 0 Number of syncs: 0 SyncTimes(ms): 0
> >> > > > > > 2009-05-14 20:33:36,310 INFO org.apache.hadoop.fs.FSNamesystem: Roll FSImage from 192.168.0.1
> >> > > > > > 2009-05-14 20:33:36,311 INFO org.apache.hadoop.fs.FSNamesystem: Number of transactions: 0 Total time for transactions(ms): 0 Number of syncs: 0 SyncTimes(ms): 0
> >> > > > > >
> >> > > > > >
> >> > > > > >
> >> > > > > > 3) My hadoop-site.xml, for reference:
> >> > > > > >
> >> > > > > > <?xml version="1.0"?>
> >> > > > > > <?xml-stylesheet type="text/xsl" href="configuration.xsl"?>
> >> > > > > >
> >> > > > > > <!-- Put site-specific property overrides in this file. -->
> >> > > > > > <configuration>
> >> > > > > >  <property>
> >> > > > > >    <name>fs.default.name</name>
> >> > > > > >    <value>hdfs://hadoopmaster.local:9000</value>
> >> > > > > >  </property>
> >> > > > > >  <property>
> >> > > > > >    <name>mapred.job.tracker</name>
> >> > > > > >    <value>hadoopmaster.local:9001</value>
> >> > > > > >  </property>
> >> > > > > >  <property>
> >> > > > > >    <name>dfs.replication</name>
> >> > > > > >    <value>3</value>
> >> > > > > >  </property>
> >> > > > > >  <property>
> >> > > > > >    <name>mapred.child.java.opts</name>
> >> > > > > >    <value>-Xmx512m</value>
> >> > > > > >  </property>
> >> > > > > >  <property>
> >> > > > > >    <name>hadoop.tmp.dir</name>
> >> > > > > >    <value>/Hadoop/Temp</value>
> >> > > > > >    <description>A base for other temporary directories.</description>
> >> > > > > >  </property>
> >> > > > > >  <property>
> >> > > > > >    <name>dfs.data.dir</name>
> >> > > > > >    <value>/Hadoop/Data,/data/Hadoop</value>
> >> > > > > >    <description>Determines where on the local filesystem a DFS data
> >> > > > > >    node should store its blocks. If this is a comma-delimited list
> >> > > > > >    of directories, then data will be stored in all named directories,
> >> > > > > >    typically on different devices. Directories that do not exist are
> >> > > > > >    ignored.</description>
> >> > > > > >  </property>
> >> > > > > > </configuration>
> >> > > > > >
> >> > > > > >
> >> > > > > > The main thing I find in the log is the line "*2009-05-14
> >> > > > > > 20:28:23,989 INFO org.apache.hadoop.dfs.StateChange: STATE*
> >> > > > > > Network topology has 0 racks and 0 datanodes*", which means it
> >> > > > > > cannot start the datanodes. But why is that? I have all my
> >> > > > > > datanodes in my conf/slaves file, and they are detected on the
> >> > > > > > start-up screen when I give start-dfs.sh.
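> >> > > > > >
> >> > > > > > (bin/hadoop dfsadmin -report is a quick way to watch the same
> >> > > > > > fact from the shell; it should list one entry per live datanode.)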
> >> > > > > >
> >> > > > > >
> >> > > > > > Can anyone throw some light on my problem?
> >> > > > > >
> >> > > > > > Thanks
> >> > > > > > Pankil
> >> > > > > >
> >> > > > >
> >> > > > >
> >> > > > >
> >> > > > > --
> >> > > > > Alpha Chapters of my book on Hadoop are available
> >> > > > > http://www.apress.com/book/view/9781430219422
> >> > > > > www.prohadoopbook.com a community for Hadoop Professionals
> >> > > > >
> >> > > >
> >> > >
> >> > >
> >> > >
> >> > > --
> >> > > Alpha Chapters of my book on Hadoop are available
> >> > > http://www.apress.com/book/view/9781430219422
> >> > > www.prohadoopbook.com a community for Hadoop Professionals
> >> > >
> >> >
> >>
> >>
> >>
> >> --
> >> Alpha Chapters of my book on Hadoop are available
> >> http://www.apress.com/book/view/9781430219422
> >> www.prohadoopbook.com a community for Hadoop Professionals
> >>
> >
>



-- 
Pro Hadoop, a book to guide you from beginner to hadoop mastery,
http://www.amazon.com/dp/1430219424?tag=jewlerymall
www.prohadoopbook.com a community for Hadoop Professionals
