Thanks Yusaku,

I am using Ambari v1.6.1. Yes, the default value it took for fs.defaultFS
is "hdfs://server_1:8020".

The output of hostname -f is: server_1

And the contents of /etc/hosts are:

127.0.0.1 localhost.localdomain localhost
::1 localhost6.localdomain6 localhost6
192.168.21.138 server_1
192.168.21.137 ambari_server

The FQDN I gave during host selection was: server_1
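A quick way to double-check resolution on each node (an illustrative Python sketch of my own, nothing Ambari-specific):

```python
import socket

def resolves(name):
    """Return the IPv4 address a name resolves to, or None if it does not."""
    try:
        return socket.gethostbyname(name)
    except socket.gaierror:
        return None

# On the cluster nodes, resolves("server_1") should return "192.168.21.138"
# (per the /etc/hosts above); here we just sanity-check the loopback entry.
print(resolves("localhost"))        # 127.0.0.1
print(resolves("no-such.invalid"))  # None
```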

As of now, the error is:

safemode: Incomplete HDFS URI, no host: hdfs://server_1:8020
2014-09-16 23:02:41,225 - Retrying after 10 seconds. Reason: Execution of
'su - hdfs -c 'hadoop dfsadmin -safemode get' | grep 'Safe mode is OFF''
returned 1. DEPRECATED: Use of this script to execute hdfs command is
deprecated.
Instead use the hdfs command for it.

safemode: Incomplete HDFS URI, no host: hdfs://server_1:8020
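For what it's worth, the underscore in server_1 may itself be the problem: '_' is not a legal character in DNS hostnames (RFC 952 / RFC 1123), and Java's URI parser, which Hadoop relies on, returns a null host for such names, which would be consistent with "no host" even though a host string is clearly present. A rough sketch of the hostname rule (my own illustrative check, not Hadoop's actual code):

```python
import re

# RFC 1123 hostname label: letters, digits, and hyphens only;
# must start and end with a letter or digit. Underscores are not allowed.
LABEL = re.compile(r"^[A-Za-z0-9]([A-Za-z0-9-]*[A-Za-z0-9])?$")

def valid_hostname(name):
    return all(LABEL.match(label) for label in name.split("."))

print(valid_hostname("host1.mycompany.com"))  # True
print(valid_hostname("server_1"))             # False: '_' is rejected
print(valid_hostname("server-1"))             # True: hyphens are fine
```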


Please advise where I am making a mistake.

-Ravi

On Wed, Sep 17, 2014 at 1:59 AM, Yusaku Sako <yus...@hortonworks.com> wrote:

> Hi Ravi,
>
> What version of Ambari did you use, and how did you install the cluster?
> Not sure if this would help, but on small test clusters, you should
> define /etc/hosts on each machine, like so:
>
> 127.0.0.1 <localhost and other default entries>
> ::1 <localhost and other default entries>
> 192.168.64.101 host1.mycompany.com host1
> 192.168.64.102 host2.mycompany.com host2
> 192.168.64.103 host3.mycompany.com host3
>
> Make sure that on each machine, "hostname -f" returns the FQDN (such
> as host1.mycompany.com) and "hostname" returns the short name (such as
> host1).  Also, make sure that you can resolve all other hosts by FQDN.
>
> fs.defaultFS is set up automatically by Ambari and you should not have
> to adjust it, provided that the networking is configured properly.
> Ambari sets it to "hdfs://<FQDN of NN host>:8020" (e.g.,
> "hdfs://host1.mycompany.com:8020").
>
> Yusaku
>
> On Tue, Sep 16, 2014 at 12:00 PM, Ravi Itha <ithar...@gmail.com> wrote:
> > All,
> >
> > My Ambari cluster setup is below:
> >
> > Server 1: Ambari Server was installed
> > Server 2: Ambari Agent was installed
> > Server 3: Ambari Agent was installed
> >
> > I created a cluster with Server 2 and Server 3 and installed the services:
> >
> > Server 2 has NameNode
> > Server 3 has SNameNode & DataNode
> >
> > When I try to start the NameNode from the UI, it does not start.
> >
> > Following are the errors:
> >
> > 1. safemode: Call From server_1/192.168.21.138 to server_1:8020 failed on
> > connection exception: java.net.ConnectException: Connection refused; For
> > more details see:  http://wiki.apache.org/hadoop/ConnectionRefused
> >
> > In this case, the value of fs.defaultFS = hdfs://192.168.21.138 (this IP
> > is server_1's IP; I gave server_1 as the FQDN).
> >
> > 2. safemode: Call From server_1/192.168.21.138 to localhost:9000 failed on
> > connection exception: java.net.ConnectException: Connection refused; For
> > more details see:  http://wiki.apache.org/hadoop/ConnectionRefused
> >
> > In this case, the value of fs.defaultFS = hdfs://localhost
> >
> > Also, I cannot leave this field blank.
> >
> > So could someone please tell me what the right value should be here, and
> > how I can fix the issue?
> >
> > ~Ravi Itha
> >
> >
> >
>
