Are you able to create a file in the /tmp directory as the same user? I ask because there is this error:
/tmp/hadoop-user-namenode.pid: Permission denied
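
A quick check you could run on master.cloud (assuming the daemons run as the user "user", as the paths in your output suggest):

$ touch /tmp/hadoop-user-namenode.pid    # can this user write the PID file at all?
$ ls -ld /tmp                            # should normally show drwxrwxrwt (world-writable with sticky bit)
$ ls -l /tmp/hadoop-user-*.pid           # a stale PID file owned by root or another user would explain the error

If a stale PID file is in the way, delete it, or point the PID files at a directory the user owns by setting HADOOP_PID_DIR in conf/hadoop-env.sh (the path below is just an example):

export HADOOP_PID_DIR=/home/user/hadoop/pids   # any directory writable by "user"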
Thanks and Regards
Nishi Gupta
Tata Consultancy Services
Mailto: [EMAIL PROTECTED]
Website: http://www.tcs.com



Leeau <[EMAIL PROTECTED]>
12/04/2008 04:13 PM
Please respond to: core-user@hadoop.apache.org
To: core-user@hadoop.apache.org
Subject: an error on namenode startup


Dear all,

I want to configure a 4-node Hadoop cluster, but it fails to start. Can anyone help me
understand why, and how I can start it? Thanks.


Environment:

CentOS 5.2

hadoop-site.xml settings:
fs.default.name hdfs://master.cloud:9000
mapred.job.tracker hdfs://master.cloud:9001
hadoop.tmp.dir /home/user/hadoop/tmp/
mapred.chile.java.opts Xmls512M
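
For reference, the same settings in the usual <property> form for hadoop-site.xml. Two of the values above look off: "mapred.chile.java.opts Xmls512M" is presumably meant to be mapred.child.java.opts with -Xmx512m, and mapred.job.tracker normally takes a plain host:port rather than an hdfs:// URL:

<configuration>
  <property>
    <name>fs.default.name</name>
    <value>hdfs://master.cloud:9000</value>
  </property>
  <property>
    <name>mapred.job.tracker</name>
    <!-- host:port, not an hdfs:// URL -->
    <value>master.cloud:9001</value>
  </property>
  <property>
    <name>hadoop.tmp.dir</name>
    <value>/home/user/hadoop/tmp</value>
  </property>
  <property>
    <!-- "mapred.chile.java.opts" is presumably a typo for mapred.child.java.opts -->
    <name>mapred.child.java.opts</name>
    <!-- "Xmls512M" is not a valid JVM option; -Xmx512m gives each child task a 512 MB heap -->
    <value>-Xmx512m</value>
  </property>
</configuration>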


When I run start-dfs.sh, it shows:

$ start-dfs.sh
starting namenode, logging to
/home/user/hadoop-0.17.2.1/bin/../logs/hadoop-user-namenode-master.cloud.out
/home/user/hadoop-0.17.2.1/bin/hadoop-daemon.sh: line 117:
/tmp/hadoop-user-namenode.pid: Permission denied
slave3.cloud: starting datanode, logging to
/home/user/hadoop-0.17.2.1/bin/../logs/hadoop-user-datanode-slave3.cloud.out
slave2.cloud: starting datanode, logging to
/home/user/hadoop-0.17.2.1/bin/../logs/hadoop-user-datanode-slave2.cloud.out
slave4.cloud: starting datanode, logging to
/home/user/hadoop-0.17.2.1/bin/../logs/hadoop-user-datanode-slave4.cloud.out
master.cloud: starting secondarynamenode, logging to
/home/user/hadoop-0.17.2.1/bin/../logs/hadoop-user-secondarynamenode-master.cloud.out


the log shows:
2008-12-04 17:59:10,696 INFO org.apache.hadoop.dfs.NameNode: STARTUP_MSG:
/************************************************************
STARTUP_MSG: Starting NameNode
STARTUP_MSG: host = master.cloud/10.100.4.226
STARTUP_MSG: args = []
STARTUP_MSG: version = 0.17.2.1
STARTUP_MSG: build = https://svn.apache.org/repos/asf/hadoop/core/branches/branch-0.17 -r 684969; compiled by 'oom' on Wed Aug 20 22:29:32 UTC 2008
************************************************************/
2008-12-04 17:59:10,823 INFO org.apache.hadoop.ipc.metrics.RpcMetrics:
Initializing RPC Metrics with hostName=NameNode, port=9000
2008-12-04 17:59:10,830 INFO org.apache.hadoop.dfs.NameNode: Namenode up at: master.cloud/10.100.4.226:9000
2008-12-04 17:59:10,834 INFO org.apache.hadoop.metrics.jvm.JvmMetrics:
Initializing JVM Metrics with processName=NameNode, sessionId=null
2008-12-04 17:59:10,838 INFO org.apache.hadoop.dfs.NameNodeMetrics:
Initializing NameNodeMeterics using context
object:org.apache.hadoop.metrics.spi.NullContext
2008-12-04 17:59:10,924 INFO org.apache.hadoop.fs.FSNamesystem:
fsOwner=user,users
2008-12-04 17:59:10,924 INFO org.apache.hadoop.fs.FSNamesystem:
supergroup=supergroup
2008-12-04 17:59:10,924 INFO org.apache.hadoop.fs.FSNamesystem:
isPermissionEnabled=true
2008-12-04 17:59:10,983 INFO org.apache.hadoop.fs.FSNamesystem: Finished
loading FSImage in 102 msecs
2008-12-04 17:59:10,985 INFO org.apache.hadoop.dfs.StateChange: STATE*
Leaving safe mode after 0 secs.
2008-12-04 17:59:10,986 INFO org.apache.hadoop.dfs.StateChange: STATE*
Network topology has 0 racks and 0 datanodes
2008-12-04 17:59:10,986 INFO org.apache.hadoop.dfs.StateChange: STATE*
UnderReplicatedBlocks has 0 blocks
2008-12-04 17:59:10,993 INFO org.apache.hadoop.fs.FSNamesystem: Registered
FSNamesystemStatusMBean
2008-12-04 17:59:11,066 INFO org.mortbay.util.Credential: Checking Resource aliases
2008-12-04 17:59:11,176 INFO org.mortbay.http.HttpServer: Version Jetty/5.1.4
2008-12-04 17:59:11,178 INFO org.mortbay.util.Container: Started HttpContext[/static,/static]
2008-12-04 17:59:11,178 INFO org.mortbay.util.Container: Started HttpContext[/logs,/logs]
2008-12-04 17:59:11,557 INFO org.mortbay.util.Container: Started [EMAIL PROTECTED]
2008-12-04 17:59:11,618 INFO org.mortbay.util.Container: Started WebApplicationContext[/,/]
2008-12-04 17:59:11,619 WARN org.mortbay.util.ThreadedServer: Failed to start: [EMAIL PROTECTED]:50070
2008-12-04 17:59:11,619 WARN org.apache.hadoop.fs.FSNamesystem: ReplicationMonitor thread received InterruptedException.java.lang.InterruptedException: sleep interrupted
2008-12-04 17:59:11,620 ERROR org.apache.hadoop.fs.FSNamesystem:
java.lang.InterruptedException
at java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.reportInterruptAfterWait(AbstractQueuedSynchronizer.java:1899)
at java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:1934)
at java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:358)
at org.apache.hadoop.dfs.FSNamesystem$ResolutionMonitor.run(FSNamesystem.java:1931)
at java.lang.Thread.run(Thread.java:619)

2008-12-04 17:59:11,621 INFO org.apache.hadoop.fs.FSNamesystem: Number of
transactions: 0 Total time for transactions(ms): 0 Number of syncs: 0
SyncTimes(ms): 0
2008-12-04 17:59:11,674 INFO org.apache.hadoop.ipc.Server: Stopping server
on 9000
2008-12-04 17:59:11,730 ERROR org.apache.hadoop.dfs.NameNode:
java.net.BindException: Address already in use
at java.net.PlainSocketImpl.socketBind(Native Method)
at java.net.PlainSocketImpl.bind(PlainSocketImpl.java:359)
at java.net.ServerSocket.bind(ServerSocket.java:319)
at java.net.ServerSocket.<init>(ServerSocket.java:185)
at org.mortbay.util.ThreadedServer.newServerSocket(ThreadedServer.java:391)
at org.mortbay.util.ThreadedServer.open(ThreadedServer.java:477)
at org.mortbay.util.ThreadedServer.start(ThreadedServer.java:503)
at org.mortbay.http.SocketListener.start(SocketListener.java:203)
at org.mortbay.http.HttpServer.doStart(HttpServer.java:761)
at org.mortbay.util.Container.start(Container.java:72)
at org.apache.hadoop.mapred.StatusHttpServer.start(StatusHttpServer.java:207)
at org.apache.hadoop.dfs.FSNamesystem.initialize(FSNamesystem.java:335)
at org.apache.hadoop.dfs.FSNamesystem.<init>(FSNamesystem.java:255)
at org.apache.hadoop.dfs.NameNode.initialize(NameNode.java:133)
at org.apache.hadoop.dfs.NameNode.<init>(NameNode.java:178)
at org.apache.hadoop.dfs.NameNode.<init>(NameNode.java:164)
at org.apache.hadoop.dfs.NameNode.createNameNode(NameNode.java:846)
at org.apache.hadoop.dfs.NameNode.main(NameNode.java:855)

2008-12-04 17:59:11,733 INFO org.apache.hadoop.dfs.NameNode: SHUTDOWN_MSG:
/************************************************************
SHUTDOWN_MSG: Shutting down NameNode at master.cloud/10.100.4.226
************************************************************/


On the web UI, it shows:

Hadoop NameNode localhost:9100

Cluster Summary
1 files and directories, 0 blocks = 1 total. Heap Size is 6.9 MB / 992.31 MB (0%)
Capacity: 0 KB
DFS Remaining: 0 KB
DFS Used: 0 KB
DFS Used%: 0 %
Live Nodes: 0
Dead Nodes: 0

There are no datanodes in the cluster
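
For what it is worth, the BindException in the log usually means something is already listening on the namenode web UI port (50070) on master.cloud, for example a namenode left over from an earlier start attempt. A quick way to check, assuming lsof and netstat are available on CentOS 5.2:

$ jps                                       # any NameNode/DataNode still running from a previous attempt?
$ netstat -tlnp | grep -E ':(9000|50070)'   # what is bound to the RPC and web UI ports?
$ /usr/sbin/lsof -i :50070                  # which process holds 50070, if any
$ stop-dfs.sh                               # stop any leftovers before running start-dfs.sh again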
