Fw: Hadoop 2 Namenode HA not working properly

2014-01-24 Thread Bruno Andrade


Begin forwarded message:

Date: Tue, 21 Jan 2014 09:35:23 +
From: Bruno Andrade b...@eurotux.com
To: user@hadoop.apache.org
Subject: Re: Hadoop 2 Namenode HA not working properly


Hey,

this is my hdfs-site.xml - http://pastebin.com/qpELkwH8
this is my core-site.xml:

<configuration>
  <property>
    <name>fs.defaultFS</name>
    <value>hdfs://blabla-hadoop</value>
  </property>

  <property>
    <name>hadoop.tmp.dir</name>
    <value>/opt/hadoop/hadoop/tmp</value>
  </property>
</configuration>

I killed only the namenode process and nothing happened; then I killed the
zkfc process and the transition happened, and the second namenode became active.

Thanks.
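
For reference, automatic failover will not complete if the configured fencing
method fails: the standby's ZKFC has to fence the old active namenode before it
promotes the standby. A minimal sketch of the HA-related hdfs-site.xml
properties involved, using the blabla-hadoop nameservice id from the
core-site.xml above; the namenode ids nn1/nn2, the hostnames, and the key path
are placeholders, since the real hdfs-site.xml is only available through the
pastebin link:

<configuration>
  <property>
    <name>dfs.nameservices</name>
    <value>blabla-hadoop</value>
  </property>
  <property>
    <name>dfs.ha.namenodes.blabla-hadoop</name>
    <value>nn1,nn2</value>
  </property>
  <property>
    <name>dfs.namenode.rpc-address.blabla-hadoop.nn1</name>
    <value>namenode1.example.com:8020</value>
  </property>
  <property>
    <name>dfs.namenode.rpc-address.blabla-hadoop.nn2</name>
    <value>namenode2.example.com:8020</value>
  </property>
  <!-- Automatic failover needs this set to true and a running zkfc next to each namenode. -->
  <property>
    <name>dfs.ha.automatic-failover.enabled</name>
    <value>true</value>
  </property>
  <!-- If every fencing method fails, the failover is aborted; a shell(/bin/true)
       entry is often added as a last-resort method that always reports success. -->
  <property>
    <name>dfs.ha.fencing.methods</name>
    <value>sshfence</value>
  </property>
  <property>
    <name>dfs.ha.fencing.ssh.private-key-files</name>
    <value>/home/hadoop/.ssh/id_rsa</value>
  </property>
</configuration>

ha.zookeeper.quorum (set in core-site.xml) must also point at the ZooKeeper
ensemble, and the zkfc logs on both namenodes usually say whether a failover
was attempted and, if so, why it was aborted.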



On 01/20/2014 06:44 PM, Jing Zhao wrote:
 Hi Bruno,

  Could you post your configuration? Also, when you killed one of
 the NNs, do you mean you killed only the NN process, or did you shut down
 the whole machine?

 Thanks,
 -Jing

 On Mon, Jan 20, 2014 at 4:11 AM, Bruno Andrade b...@eurotux.com
 wrote:
 Hey,

 I have configured a Hadoop v2.2.0 cluster with QJM and Zookeeper for
 HA and automatic failover.
 But I'm having a problem. If I test the automatic failover by
 killing one of the namenodes, nothing happens. But if I kill the
 zkfc of that namenode, then ZooKeeper elects the other namenode as
 active.

 What could the problem be?

 Thanks.

 --
 Bruno Andrade b...@eurotux.com
 Programador (ID)
 Eurotux Informática, S.A. | www.eurotux.com
 (t) +351 253 680 300 (m) +351 936 293 858

-- 
Bruno Andrade b...@eurotux.com
Programador (ID)
Eurotux Informática, S.A. | www.eurotux.com
(t) +351 253 680 300 (m) +351 936 293 858




  
  

Re: Hadoop 2 Namenode HA not working properly

2014-01-21 Thread Bruno Andrade

Hey,

this is my hdfs-site.xml - http://pastebin.com/qpELkwH8
this is my core-site.xml:

<configuration>
  <property>
    <name>fs.defaultFS</name>
    <value>hdfs://blabla-hadoop</value>
  </property>

  <property>
    <name>hadoop.tmp.dir</name>
    <value>/opt/hadoop/hadoop/tmp</value>
  </property>
</configuration>

I killed only the namenode process and nothing happened; then I killed the
zkfc process and the transition happened, and the second namenode became active.


Thanks.



On 01/20/2014 06:44 PM, Jing Zhao wrote:

Hi Bruno,

 Could you post your configuration? Also, when you killed one of
the NNs, do you mean you killed only the NN process, or did you shut down
the whole machine?

Thanks,
-Jing

On Mon, Jan 20, 2014 at 4:11 AM, Bruno Andrade b...@eurotux.com wrote:

Hey,

I have configured a Hadoop v2.2.0 cluster with QJM and Zookeeper for HA and
automatic failover.
But I'm having a problem. If I test the automatic failover by killing one
of the namenodes, nothing happens. But if I kill the zkfc of that namenode,
then ZooKeeper elects the other namenode as active.

What could the problem be?

Thanks.

--
Bruno Andrade b...@eurotux.com
Programador (ID)
Eurotux Informática, S.A. | www.eurotux.com
(t) +351 253 680 300 (m) +351 936 293 858


--
Bruno Andrade b...@eurotux.com
Programador (ID)
Eurotux Informática, S.A. | www.eurotux.com
(t) +351 253 680 300 (m) +351 936 293 858



Hadoop HA Namenode remote access

2013-11-15 Thread Bruno Andrade
I'm configuring the Hadoop 2.2.0 stable release with an HA namenode, but I
don't know how to configure remote access to the cluster.


I have the HA namenode configured with manual failover and I defined
dfs.nameservices. I can access HDFS using the nameservice from all the
nodes in the cluster, but not from outside.


I can perform operations on HDFS by contacting the active namenode
directly, but I don't want that; I want to contact the cluster and be
redirected to the active namenode. I think this is the normal
configuration for an HA cluster.


Does anyone know how to do that?

(thanks in advance...)
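
For reference, a remote client normally carries the same nameservice
definition in its own configuration, because the failover logic lives in the
HDFS client library rather than behind a single redirecting address. A minimal
sketch of a client-side hdfs-site.xml, where the nameservice name mycluster,
the namenode ids, and the hostnames are placeholders rather than values taken
from the actual cluster:

<configuration>
  <!-- Same logical nameservice the cluster itself uses (placeholder name here). -->
  <property>
    <name>dfs.nameservices</name>
    <value>mycluster</value>
  </property>
  <property>
    <name>dfs.ha.namenodes.mycluster</name>
    <value>nn1,nn2</value>
  </property>
  <property>
    <name>dfs.namenode.rpc-address.mycluster.nn1</name>
    <value>namenode1.example.com:8020</value>
  </property>
  <property>
    <name>dfs.namenode.rpc-address.mycluster.nn2</name>
    <value>namenode2.example.com:8020</value>
  </property>
  <!-- Lets the client try the listed namenodes and keep using the active one. -->
  <property>
    <name>dfs.client.failover.proxy.provider.mycluster</name>
    <value>org.apache.hadoop.hdfs.server.namenode.ha.ConfiguredFailoverProxyProvider</value>
  </property>
</configuration>

With fs.defaultFS set to hdfs://mycluster in the client's core-site.xml and the
namenode RPC ports reachable from outside the cluster, the client picks the
active namenode itself; as far as I know there is no server-side redirect to
the active namenode.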

--
Bruno Andrade b...@eurotux.com
Programador (ID)
Eurotux Informática, S.A. | www.eurotux.com
(t) +351 253 680 300 (m) +351 936 293 858