Hi stack,

The namenode, jobtracker and secondary namenode are all running with no problems. The problem is when I run this command:

$ host -v -t A `hostname`
Trying "namenode"
Host namenode not found: 3(NXDOMAIN)

I don't know why, so I want to ask: do I have to set the hostname to the canonical form (namenode.example.com), or can I leave it as "namenode"? Note that my hostname is namenode.

Thanks,
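A note on the NXDOMAIN above: `host` queries DNS only, so on a cluster with no DNS server it will report NXDOMAIN even when the name resolves fine through /etc/hosts (`getent hosts namenode` exercises the full resolver path, including the hosts file). A minimal sketch of a static entry, assuming the 10.0.2.3 address that appears in the stack trace in the reply; adjust to your own network:

```
# /etc/hosts on every node -- canonical name first, short alias after
10.0.2.3    namenode.example.com    namenode
```

In general either form of hostname works, as long as forward (and ideally reverse) resolution is consistent across all nodes in the cluster.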
> Date: Mon, 14 May 2012 12:19:36 -0700
> Subject: Re: Problem with hbase master
> From: st...@duboce.net
> To: user@hbase.apache.org
>
> On Mon, May 14, 2012 at 8:20 AM, Dalia Sobhy <dalia.mohso...@hotmail.com> wrote:
> >
> > Here is error msgs i receive..
> >> 12/05/14 09:16:17 FATAL master.HMaster: Unhandled exception. Starting shutdown.
> >> java.net.ConnectException: Call to namenode/10.0.2.3:8020 failed on connection exception: java.net.ConnectException: Connection refused
> >>     at org.apache.hadoop.ipc.Client.wrapException(Client.java:1134)
> >>     at org.apache.hadoop.ipc.Client.call(Client.java:1110)
> >>     at org.apache.hadoop.ipc.RPC$Invoker.invoke(RPC.java:226)
> >>     at $Proxy6.getProtocolVersion(Unknown Source)
> >>     at org.apache.hadoop.ipc.RPC.getProxy(RPC.java:398)
> >>     at org.apache.hadoop.ipc.RPC.getProxy(RPC.java:384)
> >>     at org.apache.hadoop.hdfs.DFSClient.createRPCNamenode(DFSClient.java:123)
> >>     at org.apache.hadoop.hdfs.DFSClient.<init>(DFSClient.java:246)
> >>     at org.apache.hadoop.hdfs.DFSClient.<init>(DFSClient.java:208)
> >>     at org.apache.hadoop.hdfs.DistributedFileSystem.initialize(DistributedFileSystem.java:89)
> >>     at org.apache.hadoop.fs.FileSystem.createFileSystem(FileSystem.java:1563)
>
> The above looks pretty basic. It looks like you are not running hdfs
> or at least its not running where hbase has been told to go look for
> it.
>
> Please do not post raw log into mail messages. Please use a service
> like pastebin. Logs in mail messages with line wrapping are hard to
> parse.
>
> St.Ack
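Following up on St.Ack's point that HDFS may not be running where HBase has been told to look: a quick way to check is to probe the exact host:port from the stack trace. A minimal sketch using bash's built-in /dev/tcp (the 10.0.2.3:8020 address is an assumption taken from the trace; adjust to your fs.default.name):

```shell
#!/usr/bin/env bash
# Probe a TCP port the way HBase's RPC client would.  "closed" corresponds
# to the "Connection refused" in the stack trace above.
probe() {
  local host=$1 port=$2
  # /dev/tcp is a bash feature: opening it attempts a TCP connection.
  if timeout 3 bash -c "exec 3<>/dev/tcp/${host}/${port}" 2>/dev/null; then
    echo "open"
  else
    echo "closed"
  fi
}

# Host and port taken from the stack trace (namenode/10.0.2.3:8020) -- adjust.
probe 10.0.2.3 8020
```

If this prints "closed", start HDFS first (jps on the namenode box should list a NameNode process), or fix fs.default.name / hbase.rootdir so they point at where the namenode is actually listening, then restart the HBase master.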