Hi Keith,

When you run the format command on the namenode machine it actually starts
the namenode, formats it, and then shuts it down (see:
http://hadoop.apache.org/docs/stable/commands_manual.html). Before you run
the format command, do you see any processes already listening on port 9212
via netstat -anlp | grep 9212 on the namenode?
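
For example, something along these lines on the namenode host (just a
sketch, assuming a typical Linux box where you have sudo and lsof happens
to be installed):

    # Check whether anything is already bound to the port from the error (9212)
    sudo netstat -anlp | grep 9212
    # Alternatively, if lsof is available:
    sudo lsof -i :9212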

As per the recommendations on the link in the error message
(http://wiki.apache.org/hadoop/BindException), you could try changing the
port used by the namenode.
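
If you do change it, it's worth confirming what the configuration actually
resolves to before reformatting again. A rough sketch, assuming your build
ships the hdfs getconf utility (Hadoop 2.x should):

    # Print the filesystem URI the configuration resolves to
    # (fs.defaultFS is the Hadoop 2.x name; fs.default.name is the older alias)
    hdfs getconf -confKey fs.defaultFS
    # Print the namenode RPC address(es), i.e. the host:port the namenode will bind
    hdfs getconf -nnRpcAddresses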

I'm not familiar with deploying Hadoop on EC2, so I'm not sure if this is
different for EC2 deployments; however, the namenode usually listens on port
8020 for file system metadata operations, so I guess you specified a
different port via the fs.default.name parameter in hdfs-site.xml?
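
As a rough check of where that port setting is coming from (assuming the
standard config layout; adjust HADOOP_CONF_DIR to wherever your
core-site.xml and hdfs-site.xml actually live):

    # The filesystem URI (host:port) handed to clients and the namenode;
    # the pattern matches both fs.default.name and fs.defaultFS
    grep -A1 'fs.default' $HADOOP_CONF_DIR/core-site.xml
    # Hadoop 2.x can also pin the RPC address explicitly in hdfs-site.xml
    grep -A1 'dfs.namenode.rpc-address' $HADOOP_CONF_DIR/hdfs-site.xml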

Vijay

-----Original Message-----
From: Keith Wiley [mailto:kwi...@keithwiley.com] 
Sent: 19 February 2013 15:10
To: user@hadoop.apache.org
Subject: Re: Namenode formatting problem

Hmmm, okay.  Thanks.  Umm, is this a Yarn thing?  I ask because I also tried
it with Hadoop 2.0 MR1 (which I think should behave almost exactly like
older versions of Hadoop) and it had the exact same problem.  Does H2.0MR1
use journal nodes?  I'll try to read up more on this later today.  Thanks
for the tip.

On Feb 18, 2013, at 16:32, Azuryy Yu wrote:

> Because journal nodes are also formatted during NN format, you need to start all JN daemons first.
> 
> On Feb 19, 2013 7:01 AM, "Keith Wiley" <kwi...@keithwiley.com> wrote:
> This is Hadoop 2.0.  Formatting the namenode produces no errors in the shell, but the log shows this:
> 
> 2013-02-18 22:19:46,961 FATAL org.apache.hadoop.hdfs.server.namenode.NameNode: Exception in namenode join
> java.net.BindException: Problem binding to [ip-13-0-177-110:9212] java.net.BindException: Cannot assign requested address; For more details see: http://wiki.apache.org/hadoop/BindException
>         at org.apache.hadoop.net.NetUtils.wrapException(NetUtils.java:710)
>         at org.apache.hadoop.ipc.Server.bind(Server.java:356)
>         at org.apache.hadoop.ipc.Server$Listener.<init>(Server.java:454)
>         at org.apache.hadoop.ipc.Server.<init>(Server.java:1833)
>         at org.apache.hadoop.ipc.RPC$Server.<init>(RPC.java:866)
>         at org.apache.hadoop.ipc.ProtobufRpcEngine$Server.<init>(ProtobufRpcEngine.java:375)
>         at org.apache.hadoop.ipc.ProtobufRpcEngine.getServer(ProtobufRpcEngine.java:350)
>         at org.apache.hadoop.ipc.RPC.getServer(RPC.java:695)
>         at org.apache.hadoop.ipc.RPC.getServer(RPC.java:684)
>         at org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.<init>(NameNodeRpcServer.java:238)
>         at org.apache.hadoop.hdfs.server.namenode.NameNode.createRpcServer(NameNode.java:452)
>         at org.apache.hadoop.hdfs.server.namenode.NameNode.initialize(NameNode.java:434)
>         at org.apache.hadoop.hdfs.server.namenode.NameNode.<init>(NameNode.java:608)
>         at org.apache.hadoop.hdfs.server.namenode.NameNode.<init>(NameNode.java:589)
>         at org.apache.hadoop.hdfs.server.namenode.NameNode.createNameNode(NameNode.java:1140)
>         at org.apache.hadoop.hdfs.server.namenode.NameNode.main(NameNode.java:1204)
> 2013-02-18 22:19:46,988 INFO org.apache.hadoop.util.ExitUtil: Exiting with status 1
> 2013-02-18 22:19:46,990 INFO org.apache.hadoop.hdfs.server.namenode.NameNode: SHUTDOWN_MSG:
> /************************************************************
> SHUTDOWN_MSG: Shutting down NameNode at ip-13-0-177-11/127.0.0.1
> ************************************************************/
> 
> No java processes begin (although I wouldn't expect formatting the namenode to start any processes, only starting the namenode or datanode should do that), and "hadoop fs -ls /" gives me this:
> 
> ls: Call From [CLIENT_HOST]/127.0.0.1 to [MASTER_HOST]:9000 failed on connection exception: java.net.ConnectException: Connection refused; For more details see: http://wiki.apache.org/hadoop/ConnectionRefused
> 
> My /etc/hosts looks like this:
> 127.0.0.1   localhost localhost.localdomain CLIENT_HOST
> MASTER_IP MASTER_HOST master
> SLAVE_IP SLAVE_HOST slave01
> 
> This is on EC2.  All of the nodes are in the same security group and the security group has full inbound access.  I can ssh between all three machines (client/master/slave) without a password ala authorized_keys.  I can ping the master node from the client machine (although I don't know how to ping a specific port, such as the hdfs port (9000)).  Telnet doesn't behave on EC2, which makes port testing a little difficult.
> 
> Any ideas?

________________________________________________________________________________
Keith Wiley     kwi...@keithwiley.com     keithwiley.com
music.keithwiley.com

"What I primarily learned in grad school is how much I *don't* know.
Consequently, I left grad school with a higher ignorance to knowledge ratio
than when I entered."
                                           --  Keith Wiley
________________________________________________________________________________

