Re: NameNode can't be started via Ambari Web UI -- Problematic Property is: fs.defaultFS

2014-09-17 Thread Ravi Itha
Thanks Yusaku,

I am using Ambari v 1.6.1. Yes, the default value it took for fs.defaultFS
is hdfs://server_1:8020

The output of hostname -f is: server_1

And, the contents of /etc/hosts is:

127.0.0.1 localhost.localdomain localhost
::1 localhost6.localdomain6 localhost6
192.168.21.138 server_1
192.168.21.137 ambari_server

the FQDN I gave during host selection was: server_1

As of now, the error is:

safemode: Incomplete HDFS URI, no host: hdfs://server_1:8020
2014-09-16 23:02:41,225 - Retrying after 10 seconds. Reason: Execution of
'su - hdfs -c 'hadoop dfsadmin -safemode get' | grep 'Safe mode is OFF''
returned 1. DEPRECATED: Use of this script to execute hdfs command is
deprecated.
Instead use the hdfs command for it.

safemode: Incomplete HDFS URI, no host: hdfs://server_1:8020


Please advise where I am making the mistake?

-Ravi

On Wed, Sep 17, 2014 at 1:59 AM, Yusaku Sako yus...@hortonworks.com wrote:

 Hi Ravi,

 What version of Ambari did you use, and how did you install the cluster?
 Not sure if this would help, but on small test clusters, you should
 define /etc/hosts on each machine, like so:

 127.0.0.1 localhost and other default entries
 ::1 localhost and other default entries
 192.168.64.101 host1.mycompany.com host1
 192.168.64.102 host2.mycompany.com host2
 192.168.64.103 host3.mycompany.com host3

 Make sure that on each machine, hostname -f returns the FQDN (such
 as host1.mycompany.com) and hostname returns the short name (such as
 host1).  Also, make sure that you can resolve all other hosts by FQDN.
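A quick sanity check along these lines can be scripted; this is only a sketch, and the host names below are the example values from the /etc/hosts snippet above, not values from a real cluster:

```shell
# check_fqdn SHORT FQDN: succeeds when FQDN is SHORT followed by a domain.
# In practice, feed it the output of `hostname -s` and `hostname -f` on each node.
check_fqdn() {
  case "$2" in
    "$1".*) return 0 ;;
    *)      return 1 ;;
  esac
}

check_fqdn host1 host1.mycompany.com && echo "host1 OK"        # FQDN form
check_fqdn server_1 server_1         || echo "server_1 is not an FQDN"
```

Running this on each node before installing catches the misconfiguration Yusaku describes early.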

 fs.defaultFS is set up automatically by Ambari and you should not have
 to adjust it, provided that the networking is configured properly.
 Ambari sets it to hdfs://FQDN of NN host:8020 (e.g.,
 hdfs://host1.mycompany.com:8020)
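For reference, the property Ambari generates lands in core-site.xml in this shape (the hostname is illustrative, matching the example above):

```xml
<property>
  <name>fs.defaultFS</name>
  <value>hdfs://host1.mycompany.com:8020</value>
</property>
```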

 Yusaku

 On Tue, Sep 16, 2014 at 12:00 PM, Ravi Itha ithar...@gmail.com wrote:
  All,
 
  My Ambari cluster setup is below:
 
  Server 1: Ambari Server was installed
  Server 2: Ambari Agent was installed
  Server 3: Ambari Agent was installed
 
  I created a cluster with Server 2 and Server 3 and installed services.
 
  Server 2 has the NameNode
  Server 3 has the SNameNode & DataNode
 
  When I try to start NameNode from UI, it does not start
 
  Following are the errors:
 
  1. safemode: Call From server_1/192.168.21.138 to server_1:8020 failed
 on
  connection exception: java.net.ConnectException: Connection refused; For
  more details see:  http://wiki.apache.org/hadoop/ConnectionRefused
 
  In this case, the value of fs.defaultFS = hdfs://192.168.21.138  (This
 ip is
  server_1's ip. I gave server_1 as the FQDN)
 
  2. safemode: Call From server_1/192.168.21.138 to localhost:9000 failed
 on
  connection exception: java.net.ConnectException: Connection refused; For
  more details see:  http://wiki.apache.org/hadoop/ConnectionRefused
 
  In this case, the value of fs.defaultFS = hdfs://localhost
 
  Also, I cannot leave this field blank.
 
  So can someone please tell me the right value to set here,
  and how I can fix the issue?
 
  ~Ravi Itha
 
 
 

 --
 CONFIDENTIALITY NOTICE
 NOTICE: This message is intended for the use of the individual or entity to
 which it is addressed and may contain information that is confidential,
 privileged and exempt from disclosure under applicable law. If the reader
 of this message is not the intended recipient, you are hereby notified that
 any printing, copying, dissemination, distribution, disclosure or
 forwarding of this communication is strictly prohibited. If you have
 received this communication in error, please contact the sender immediately
 and delete it from your system. Thank You.



DataNode not utilising all disk space

2014-09-17 Thread Charles Robertson
Hi all,

The other day I added a new slave node to my cluster using the Ambari 'add
host' functionality. The other nodes in my cluster had 8 GB drives, which
is not enough, so this new host I set up with a single 100 GB drive. I did
not do anything to change the data directory.

However, Ambari is reporting on the 'Hosts' tab that the new node is using
5 of 8 GB - why is it not making the full 100 GB available? I understood
Hadoop to work by configuring how much space *not* to use, and HDFS would
make use of the rest?

Thanks,
Charles
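For what it's worth, the "space not to use" knob Charles describes is dfs.datanode.du.reserved in hdfs-site.xml, and the symptom usually means dfs.datanode.data.dir points at a mount other than the large volume. A sketch of the two properties (paths and values are illustrative, not taken from this cluster):

```xml
<!-- hdfs-site.xml: illustrative values only -->
<property>
  <!-- Directories where the DataNode stores blocks; these must live on the big volume. -->
  <name>dfs.datanode.data.dir</name>
  <value>/hadoop/hdfs/data</value>
</property>
<property>
  <!-- Bytes per volume that HDFS leaves free for the OS and other processes. -->
  <name>dfs.datanode.du.reserved</name>
  <value>1073741824</value>
</property>
```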


Re: NameNode can't be started via Ambari Web UI -- Problematic Property is: fs.defaultFS

2014-09-17 Thread Ravi Itha
Yusaku,

I have an update on this. In the meantime, I did the following:


   - Created a brand new VM and assigned the host name
   server3.mycompany.com
   - Updated both /etc/hosts & /etc/sysconfig/network files with that
   hostname.
   - Installed both Ambari Server and Agent on the same host
   - Created a single-node cluster via the Ambari Web UI
   - Installed NameNode + SNameNode + DataNode + YARN + other services


This time NameNode started without any issue. Unsurprisingly, the
default value it took for fs.defaultFS was
hdfs://server3.mycompany.com:8020

The other difference is:

This time, when I gave the host name as server3.mycompany.com, it did not
say this was not a valid FQDN. However, it did give me a warning in my
earlier case, i.e. Server_1.

So is an FQDN like Server_1 not good practice?

Thanks for your help.

~Ravi Itha



On Wed, Sep 17, 2014 at 11:40 AM, Ravi Itha ithar...@gmail.com wrote:


 Thanks Yusaku,

 I am using Ambari v 1.6.1. Yes, the default value it took for fs.defaultFS
 is hdfs://server_1:8020

 The output of hostname -f is: server_1

 And, the contents of /etc/hosts is:

 127.0.0.1 localhost.localdomain localhost
 ::1 localhost6.localdomain6 localhost6
 192.168.21.138 server_1
 192.168.21.137 ambari_server

 the FQDN I gave during host selection was: server_1

 As of now, the error is:

 safemode: Incomplete HDFS URI, no host: hdfs://server_1:8020
 2014-09-16 23:02:41,225 - Retrying after 10 seconds. Reason: Execution of
 'su - hdfs -c 'hadoop dfsadmin -safemode get' | grep 'Safe mode is OFF''
 returned 1. DEPRECATED: Use of this script to execute hdfs command is
 deprecated.
 Instead use the hdfs command for it.

 safemode: Incomplete HDFS URI, no host: hdfs://server_1:8020


 Please advise where I am making the mistake?

 -Ravi

 On Wed, Sep 17, 2014 at 1:59 AM, Yusaku Sako yus...@hortonworks.com
 wrote:


 Hi Ravi,

 What version of Ambari did you use, and how did you install the cluster?
 Not sure if this would help, but on small test clusters, you should
 define /etc/hosts on each machine, like so:

 127.0.0.1 localhost and other default entries
 ::1 localhost and other default entries
 192.168.64.101 host1.mycompany.com host1
 192.168.64.102 host2.mycompany.com host2
 192.168.64.103 host3.mycompany.com host3

 Make sure that on each machine, hostname -f returns the FQDN (such
 as host1.mycompany.com) and hostname returns the short name (such as
 host1).  Also, make sure that you can resolve all other hosts by FQDN.

 fs.defaultFS is set up automatically by Ambari and you should not have
 to adjust it, provided that the networking is configured properly.
 Ambari sets it to hdfs://FQDN of NN host:8020 (e.g.,
 hdfs://host1.mycompany.com:8020)

 Yusaku

 On Tue, Sep 16, 2014 at 12:00 PM, Ravi Itha ithar...@gmail.com wrote:
  All,
 
  My Ambari cluster setup is below:
 
  Server 1: Ambari Server was installed
  Server 2: Ambari Agent was installed
  Server 3: Ambari Agent was installed
 
   I created a cluster with Server 2 and Server 3 and installed services.
  
   Server 2 has the NameNode
   Server 3 has the SNameNode & DataNode
 
  When I try to start NameNode from UI, it does not start
 
  Following are the errors:
 
  1. safemode: Call From server_1/192.168.21.138 to server_1:8020 failed
 on
  connection exception: java.net.ConnectException: Connection refused; For
  more details see:  http://wiki.apache.org/hadoop/ConnectionRefused
 
  In this case, the value of fs.defaultFS = hdfs://192.168.21.138  (This
 ip is
  server_1's ip. I gave server_1 as the FQDN)
 
  2. safemode: Call From server_1/192.168.21.138 to localhost:9000
 failed on
  connection exception: java.net.ConnectException: Connection refused; For
  more details see:  http://wiki.apache.org/hadoop/ConnectionRefused
 
  In this case, the value of fs.defaultFS = hdfs://localhost
 
   Also, I cannot leave this field blank.
  
   So can someone please tell me the right value to set here,
   and how I can fix the issue?
 
  ~Ravi Itha
 
 
 








Re: Can't add ganglia monitor

2014-09-17 Thread Charles Robertson
Hi Jeff,

Thanks for replying - I'm using Ambari 1.6.0. Ganglia Monitor is not
available on +Add button. The wiki page is useful - there was a failure
during adding the host, so I guess I need to decide what my 'favourite REST
tool' is :)

Thanks for your help,
Charles

On 17 September 2014 13:37, Jeff Sposetti j...@hortonworks.com wrote:

 Which version of Ambari are you using?

 Browse to Hosts and then to the Host in question. Is Ganglia Monitor an
 option on the + Add button?

 BTW, check out this wiki page. It has some info on adding components to
 hosts (as part of adding/removing hosts from a cluster).

 https://cwiki.apache.org/confluence/pages/viewpage.action?pageId=40508365



 On Wed, Sep 17, 2014 at 4:16 AM, Charles Robertson 
 charles.robert...@gmail.com wrote:

 Hi all,

 I have a node without ganglia monitor on it, and when I navigate to the
 host in Ambari it doesn't give me the option to add it to the host. This
 seems to be giving me warnings in Ganglia that it can't connect to
 certain services. This also seems to be why Ambari isn't reporting disk
 space usage for that node.

 How can I add the ganglia monitor to this node?

 Thanks,
 Charles





Re: Can't add ganglia monitor

2014-09-17 Thread Jeff Sposetti
Looks like you are missing the x-requested-by header:

-H "X-Requested-By: ambari"
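A complete form of the call, with the header Jeff mentions, might look like this sketch; the server, cluster, and host names are placeholders, not values from this thread:

```shell
# Build the host_components URL for the Ambari REST API (placeholders throughout).
AMBARI="http://ambari-server.example.com:8080"
CLUSTER="MyCluster"
HOST="slave1.example.com"
url="$AMBARI/api/v1/clusters/$CLUSTER/hosts/$HOST/host_components/GANGLIA_MONITOR"
echo "$url"   # sanity-check the URL before firing the request

# Modifying requests (POST/PUT/DELETE) must carry the X-Requested-By header:
# curl -u admin:admin -H "X-Requested-By: ambari" -i -X DELETE "$url"
```

Without the header, Ambari rejects modifying requests with exactly the 400 Bad Request Charles is seeing.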


On Wed, Sep 17, 2014 at 11:49 AM, Charles Robertson
charles.robert...@gmail.com wrote:
 Hi Jeff,

 The failure was attributable to PICNIC, so I don't think there's much the
 Ambari devs could do :)

 I've tried using the REST API, but I keep getting 400 bad request messages,
 and just can't see what's wrong:

  curl -u {user:admin} -i -X DELETE http://{ambari
 server}:8080/api/v1/clusters/{my cluster name}/hosts/{slave node
 name}/host_components/GANGLIA_MONITOR

 Any advice?

 Thanks again,
 Charles

 On 17 September 2014 13:57, Jeff Sposetti j...@hortonworks.com wrote:

 Ok. In Ambari 1.6.1, Ambari Web added the ability to add Ganglia Monitors
 to hosts. So that's why you don't see it. REST API is fine or you can
 upgrade to 1.6.1 as well.

 https://issues.apache.org/jira/browse/AMBARI-5530

 On a separate note: if you can file a JIRA on the failure you received
 during Add Host, that might be helpful in case there is a known issue or
 something that needs attention.

 https://issues.apache.org/jira/browse/AMBARI




 On Wed, Sep 17, 2014 at 8:54 AM, Charles Robertson
 charles.robert...@gmail.com wrote:

 Hi Jeff,

 Thanks for replying - I'm using Ambari 1.6.0. Ganglia Monitor is not
 available on +Add button. The wiki page is useful - there was a failure
 during adding the host, so I guess I need to decide what my 'favourite REST
 tool' is :)

 Thanks for your help,
 Charles

 On 17 September 2014 13:37, Jeff Sposetti j...@hortonworks.com wrote:

 Which version of Ambari are you using?

 Browse to Hosts and then to the Host in question. Is Ganglia Monitor an
 option on the + Add button?

  BTW, check out this wiki page. It has some info on adding components to
 hosts (as part of adding/removing hosts from a cluster).


 https://cwiki.apache.org/confluence/pages/viewpage.action?pageId=40508365



 On Wed, Sep 17, 2014 at 4:16 AM, Charles Robertson
 charles.robert...@gmail.com wrote:

 Hi all,

 I have a node without ganglia monitor on it, and when I navigate to the
 host in Ambari it doesn't give me the option to add it to the host. This
  seems to be giving me warnings in Ganglia that it can't connect to
 certain
 services. This also seems to be why Ambari isn't reporting disk space 
 usage
 for that node.

  How can I add the ganglia monitor to this node?

 Thanks,
 Charles












Re: NameNode can't be started via Ambari Web UI -- Problematic Property is: fs.defaultFS

2014-09-17 Thread Yusaku Sako
Ravi,

Yes, you should configure it so that hostname -f returns the FQDN
(with the domain name).
Also, server_1 is not a valid hostname (you cannot use an underscore
in a hostname).
The OS might actually let you do this, but it is against the
standard, so certain software may not work (for example, when parsing
with a regex that expects valid hostnames).
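The hostname rule Yusaku refers to (RFC 952/1123: letters, digits, and hyphens only, with no leading or trailing hyphen) can be sketched as a per-label check; this is an illustration, not a complete validator:

```shell
# valid_label NAME: accept a single hostname label per RFC 952/1123.
# A sketch only; full validators also enforce the 63-character label limit.
valid_label() {
  printf '%s\n' "$1" | grep -Eq '^[A-Za-z0-9]([A-Za-z0-9-]*[A-Za-z0-9])?$'
}

valid_label server1  && echo "server1 is a valid label"
valid_label server_1 || echo "server_1 is rejected (underscore)"
```

This is the kind of pattern that software parsing hostnames tends to apply, which is why server_1 trips it up even if the OS accepted the name.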

Yusaku

On Wed, Sep 17, 2014 at 1:57 AM, Ravi Itha ithar...@gmail.com wrote:
 Yusaku,

 I have an update on this. In the meantime, I did the following:


 Created a brand new VM and assigned the host name server3.mycompany.com
 Updated both /etc/hosts & /etc/sysconfig/network files with that hostname.
 Installed both Ambari Server and Agent on the same host
 Created a single node cluster via Ambari Web UI
 Installed NameNode + SNameNode + DataNode + YARN + other services


 This time NameNode started without any issue. Unsurprisingly, the
 default value it took for fs.defaultFS was
 hdfs://server3.mycompany.com:8020

 The other difference is:

 This time, when I gave the host name as server3.mycompany.com, it did not
 say this was not a valid FQDN. However, it did give me a warning in my
 earlier case, i.e. Server_1.

 So is an FQDN like Server_1 not good practice?

 Thanks for your help.

 ~Ravi Itha



 On Wed, Sep 17, 2014 at 11:40 AM, Ravi Itha ithar...@gmail.com wrote:


 Thanks Yusaku,

 I am using Ambari v 1.6.1. Yes, the default value it took for fs.defaultFS
 is hdfs://server_1:8020

 The output of hostname -f is: server_1

 And, the contents of /etc/hosts is:

 127.0.0.1 localhost.localdomain localhost
 ::1 localhost6.localdomain6 localhost6
 192.168.21.138 server_1
 192.168.21.137 ambari_server

 the FQDN I gave during host selection was: server_1

 As of now, the error is:

 safemode: Incomplete HDFS URI, no host: hdfs://server_1:8020
 2014-09-16 23:02:41,225 - Retrying after 10 seconds. Reason: Execution of
 'su - hdfs -c 'hadoop dfsadmin -safemode get' | grep 'Safe mode is OFF''
 returned 1. DEPRECATED: Use of this script to execute hdfs command is
 deprecated.
 Instead use the hdfs command for it.

 safemode: Incomplete HDFS URI, no host: hdfs://server_1:8020


 Please advise where I am making the mistake?

 -Ravi

 On Wed, Sep 17, 2014 at 1:59 AM, Yusaku Sako yus...@hortonworks.com
 wrote:


 Hi Ravi,

 What version of Ambari did you use, and how did you install the cluster?
 Not sure if this would help, but on small test clusters, you should
 define /etc/hosts on each machine, like so:

 127.0.0.1 localhost and other default entries
 ::1 localhost and other default entries
 192.168.64.101 host1.mycompany.com host1
 192.168.64.102 host2.mycompany.com host2
 192.168.64.103 host3.mycompany.com host3

 Make sure that on each machine, hostname -f returns the FQDN (such
 as host1.mycompany.com) and hostname returns the short name (such as
 host1).  Also, make sure that you can resolve all other hosts by FQDN.

 fs.defaultFS is set up automatically by Ambari and you should not have
 to adjust it, provided that the networking is configured properly.
 Ambari sets it to hdfs://FQDN of NN host:8020 (e.g.,
 hdfs://host1.mycompany.com:8020)

 Yusaku

 On Tue, Sep 16, 2014 at 12:00 PM, Ravi Itha ithar...@gmail.com wrote:
  All,
 
  My Ambari cluster setup is below:
 
  Server 1: Ambari Server was installed
  Server 2: Ambari Agent was installed
  Server 3: Ambari Agent was installed
 
   I created a cluster with Server 2 and Server 3 and installed services.
  
   Server 2 has the NameNode
   Server 3 has the SNameNode & DataNode
 
  When I try to start NameNode from UI, it does not start
 
  Following are the errors:
 
  1. safemode: Call From server_1/192.168.21.138 to server_1:8020 failed
  on
  connection exception: java.net.ConnectException: Connection refused;
  For
  more details see:  http://wiki.apache.org/hadoop/ConnectionRefused
 
  In this case, the value of fs.defaultFS = hdfs://192.168.21.138  (This
  ip is
  server_1's ip. I gave server_1 as the FQDN)
 
  2. safemode: Call From server_1/192.168.21.138 to localhost:9000 failed
  on
  connection exception: java.net.ConnectException: Connection refused;
  For
  more details see:  http://wiki.apache.org/hadoop/ConnectionRefused
 
  In this case, the value of fs.defaultFS = hdfs://localhost
 
   Also, I cannot leave this field blank.
  
   So can someone please tell me the right value to set here,
   and how I can fix the issue?
 
  ~Ravi Itha
 
 
 



Re: DataNode not utilising all disk space

2014-09-17 Thread Yusaku Sako
Hi Charles,

1. What does df -h show on this new node?

2. Can you post the output of the following API call for the host?
GET 
http://ambari-host:8080/api/v1/clusters/your-cluster-name/hosts/host-name

3. When you go to the NameNode UI (http://namenode-host:50070/), what
does it show?
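The checks in steps 1 and 2 can be run from the shell like so; the data-directory path and all host names below are placeholders, so adjust them to the actual cluster:

```shell
# Step 1: how much space the mount backing the DataNode data dir actually has.
# /hadoop/hdfs/data is a common Ambari default; fall back to / if it is absent.
df -h /hadoop/hdfs/data 2>/dev/null || df -h /

# Step 2: host details, including per-mount disk_info, from the Ambari API.
# curl -u admin:admin \
#   "http://ambari-server.example.com:8080/api/v1/clusters/MyCluster/hosts/slave1.example.com"
```

If step 1 shows the 100 GB volume mounted somewhere other than the DataNode data directory, that would explain Ambari reporting only the 8 GB root disk.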

Thanks,
Yusaku

On Wed, Sep 17, 2014 at 1:12 AM, Charles Robertson
charles.robert...@gmail.com wrote:
 Hi all,

 The other day I added a new slave node to my cluster using the Ambari 'add
 host' functionality. The other nodes in my cluster had 8 GB drives, which is
 not enough, so this new host I set up with a single 100 GB drive. I did not
 do anything to change the data directory.

 However, Ambari is reporting on the 'Hosts' tab that the new node is using 5
 of 8 GB - why is it not making the full 100 GB available? I understood
 Hadoop to work by configuring how much space *not* to use, and HDFS would
 make use of the rest?

 Thanks,
 Charles
