This is Hadoop 2.0, but using the separate MR1 package 
(hadoop-2.0.0-mr1-cdh4.1.3), not YARN.  I formatted the namenode ("./bin/hadoop 
namenode -format") and saw no errors in the shell or in the logs/[namenode].log 
file (in fact, simply formatting the namenode doesn't even create the log file 
yet).  I believe that merely formatting the namenode shouldn't leave any 
persistent Java processes running, so I wouldn't expect "ps aux | grep java" to 
show anything, and indeed it doesn't.
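
For reference, the exact sequence was roughly the following (the [j]ava bracket 
is just the usual trick to keep grep from matching its own process):

$ ./bin/hadoop namenode -format
$ ps aux | grep [j]ava     # no output, as expected
$ ls logs/                 # no [namenode].log yet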

I then started the namenode with "./bin/hadoop-daemon.sh start namenode".  This 
produces the log file and still shows no errors.  The final entry in the log is:
2013-02-19 19:15:19,477 INFO org.apache.hadoop.ipc.Server: Starting Socket Reader #1 for port 9000

Curiously, I still don't see any Java processes running, and netstat doesn't 
show anything listening on port 9000.  I get this:
$ netstat -a -t --numeric-ports -p
(Not all processes could be identified, non-owned process info
 will not be shown, you would have to be root to see it all.)
Active Internet connections (servers and established)
Proto Recv-Q Send-Q Local Address               Foreign Address             State       PID/Program name
tcp        0      0 localhost:25                *:*                         LISTEN      -
tcp        0      0 *:22                        *:*                         LISTEN      -
tcp        0      0 ip-13-0-177-11:60765        ec2-50-19-38-112.compute:22 ESTABLISHED 23591/ssh
tcp        0      0 ip-13-0-177-11:22           13.0.177.165:56984          ESTABLISHED -
tcp        0      0 ip-13-0-177-11:22           13.0.177.165:38081          ESTABLISHED -
tcp        0      0 *:22                        *:*                         LISTEN      -
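
Just to rule out my own grepping, I also tried a couple of narrower checks 
(assuming jps from the JDK is on the PATH); both come up empty:

$ jps | grep -i namenode     # nothing
$ netstat -tln | grep 9000   # nothing listening on 9000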

Note that ip-13-0-177-11 is the current machine; it is also specified as the 
master in /etc/hosts, and fs.default.name refers to it via localhost on port 
9000 (fs.default.name = "hdfs://localhost:9000").  So, at this point, I'm 
beginning to get confused: I don't see a Java namenode process and I don't see 
a port 9000 listener, yet I still haven't seen any blatant error messages.
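
For completeness, the relevant property (in conf/core-site.xml in this layout, 
if I'm remembering the path right) is simply:

$ grep -A1 fs.default.name conf/core-site.xml
  <name>fs.default.name</name>
  <value>hdfs://localhost:9000</value>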

Next, I try "hadoop fs -ls /".  I then get the shell error I have been 
wrestling with recently:
ls: Call From ip-13-0-177-11/127.0.0.1 to localhost:9000 failed on connection exception: java.net.ConnectException: Connection refused; For more details see: http://wiki.apache.org/hadoop/ConnectionRefused
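
A raw TCP probe, independent of the Hadoop client, shows the same refusal 
(the exact output wording varies by netcat version):

$ nc -zv localhost 9000
nc: connect to localhost port 9000 (tcp) failed: Connection refused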

Furthermore, this last step adds the following entries to the namenode log file:
2013-02-19 19:15:20,434 WARN org.apache.hadoop.hdfs.server.blockmanagement.BlockManager: ReplicationMonitor thread received InterruptedException.
java.lang.InterruptedException: sleep interrupted
        at java.lang.Thread.sleep(Native Method)
        at org.apache.hadoop.hdfs.server.blockmanagement.BlockManager$ReplicationMonitor.run(BlockManager.java:3025)
        at java.lang.Thread.run(Thread.java:679)
2013-02-19 19:15:20,438 WARN org.apache.hadoop.hdfs.server.blockmanagement.DecommissionManager: Monitor interrupted: java.lang.InterruptedException: sleep interrupted
2013-02-19 19:15:20,442 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: Stopping services started for active state
2013-02-19 19:15:20,442 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: Stopping services started for standby state
2013-02-19 19:15:20,442 INFO org.apache.hadoop.ipc.Server: Stopping server on 9000
2013-02-19 19:15:20,444 INFO org.apache.hadoop.metrics2.impl.MetricsSystemImpl: Stopping NameNode metrics system...
2013-02-19 19:15:20,444 INFO org.apache.hadoop.metrics2.impl.MetricsSystemImpl: NameNode metrics system stopped.
2013-02-19 19:15:20,445 INFO org.apache.hadoop.metrics2.impl.MetricsSystemImpl: NameNode metrics system shutdown complete.
2013-02-19 19:15:20,445 FATAL org.apache.hadoop.hdfs.server.namenode.NameNode: Exception in namenode join
java.io.FileNotFoundException: webapps/hdfs not found in CLASSPATH
        at org.apache.hadoop.http.HttpServer.getWebAppsPath(HttpServer.java:560)
        at org.apache.hadoop.http.HttpServer.<init>(HttpServer.java:247)
        at org.apache.hadoop.http.HttpServer.<init>(HttpServer.java:171)
        at org.apache.hadoop.hdfs.server.namenode.NameNodeHttpServer$1.<init>(NameNodeHttpServer.java:89)
        at org.apache.hadoop.hdfs.server.namenode.NameNodeHttpServer.start(NameNodeHttpServer.java:87)
        at org.apache.hadoop.hdfs.server.namenode.NameNode.startHttpServer(NameNode.java:547)
        at org.apache.hadoop.hdfs.server.namenode.NameNode.startCommonServices(NameNode.java:480)
        at org.apache.hadoop.hdfs.server.namenode.NameNode.initialize(NameNode.java:443)
        at org.apache.hadoop.hdfs.server.namenode.NameNode.<init>(NameNode.java:608)
        at org.apache.hadoop.hdfs.server.namenode.NameNode.<init>(NameNode.java:589)
        at org.apache.hadoop.hdfs.server.namenode.NameNode.createNameNode(NameNode.java:1140)
        at org.apache.hadoop.hdfs.server.namenode.NameNode.main(NameNode.java:1204)
2013-02-19 19:15:20,447 INFO org.apache.hadoop.util.ExitUtil: Exiting with status 1
2013-02-19 19:15:20,474 INFO org.apache.hadoop.hdfs.server.namenode.NameNode: SHUTDOWN_MSG: 
/************************************************************
SHUTDOWN_MSG: Shutting down NameNode at ip-13-0-177-11/127.0.0.1
************************************************************/
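
Since webapps/hdfs is evidently looked up as a classpath resource, one way to 
see whether any directory on the launcher's classpath contains it is to walk 
the entries ("bin/hadoop classpath" prints them, if I recall correctly; this 
only inspects directory entries, not jars):

$ for p in $(./bin/hadoop classpath | tr ':' '\n'); do
>   [ -d "$p/webapps/hdfs" ] && echo "found: $p/webapps/hdfs"
> done

Given the exception above, presumably that loop finds nothing.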

This is particularly confusing because, while the hadoop-2.0.0-mr1-cdh4.1.3/ 
dir does have a webapps/ dir, there is no "hdfs" file or dir in that webapps/; 
it contains only job/, static/, and task/.
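
Searching the whole unpacked tree for webapps/ directories is quick enough to 
rule out a copy hiding somewhere else:

$ find . -type d -name webapps    # run from the hadoop-2.0.0-mr1-cdh4.1.3/ root
$ ls webapps/
job  static  task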

If I start over from a freshly formatted namenode and take a slightly different 
approach -- starting the datanode immediately after the namenode -- it fails 
again in a very similar way.  Starting the datanode has two effects: the 
namenode log still reports that it can't find webapps/hdfs, just as shown 
above, and there is now also a datanode log file, which likewise can't find 
webapps/datanode ("java.io.FileNotFoundException: webapps/datanode not found in 
CLASSPATH").  So I get two very similar errors at once, one from the namenode 
and one from the datanode.
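
Concretely, that second attempt was just the following, with both exceptions 
then visible in one grep (log file names abbreviated as before):

$ ./bin/hadoop-daemon.sh start namenode
$ ./bin/hadoop-daemon.sh start datanode
$ grep "not found in CLASSPATH" logs/*.log
logs/[namenode].log:java.io.FileNotFoundException: webapps/hdfs not found in CLASSPATH
logs/[datanode].log:java.io.FileNotFoundException: webapps/datanode not found in CLASSPATH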

This webapps/ dir business makes no sense since the files (or directories) the 
logs claim to be looking for inside webapps/ ("hdfs" and "datanode") don't 
exist!

Thoughts?

________________________________________________________________________________
Keith Wiley     kwi...@keithwiley.com     keithwiley.com    music.keithwiley.com

"It's a fine line between meticulous and obsessive-compulsive and a slippery
rope between obsessive-compulsive and debilitatingly slow."
                                           --  Keith Wiley
________________________________________________________________________________
