Hey,
this is my hdfs-site.xml -> http://pastebin.com/qpELkwH8
this is my core-site.xml:
<configuration>
  <property>
    <name>fs.defaultFS</name>
    <value>hdfs://blabla-hadoop</value>
  </property>
  <property>
    <name>hadoop.tmp.dir</name>
    <value>/opt/hadoop/hadoop/tmp</value>
  </property>
</configuration>
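For context, the properties that drive automatic failover mostly live outside this file. A minimal sketch of the settings involved, assuming a three-node ZooKeeper ensemble with placeholder hosts zk1/zk2/zk3 (dfs.ha.automatic-failover.enabled and the fencing method belong in hdfs-site.xml, ha.zookeeper.quorum in core-site.xml):

```xml
<!-- hdfs-site.xml: enable ZKFC-driven automatic failover -->
<property>
  <name>dfs.ha.automatic-failover.enabled</name>
  <value>true</value>
</property>
<!-- hdfs-site.xml: fencing method attempted during a failover -->
<property>
  <name>dfs.ha.fencing.methods</name>
  <value>sshfence</value>
</property>
<!-- core-site.xml: ZooKeeper ensemble used for leader election;
     zk1/zk2/zk3 are placeholders, not hosts from this thread -->
<property>
  <name>ha.zookeeper.quorum</name>
  <value>zk1:2181,zk2:2181,zk3:2181</value>
</property>
```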
I killed only the NameNode process and nothing happened. Then I killed the
zkfc process and the transition happened: the second NameNode became active.
Thanks.
On 01/20/2014 06:44 PM, Jing Zhao wrote:
Hi Bruno,
Could you post your configuration? Also, when you killed one of
the NN, you mean only killing the NN process or you shutdown the whole
machine?
Thanks,
-Jing
On Mon, Jan 20, 2014 at 4:11 AM, Bruno Andrade <b...@eurotux.com> wrote:
Hey,
I have configured a Hadoop v2.2.0 cluster with QJM and Zookeeper for HA and
automatic failover.
But I'm having a problem. When I test automatic failover by killing one
of the NameNodes, nothing happens. But if I kill the zkfc of that NameNode,
then ZooKeeper elects the other NameNode as active.
What could the problem be?
Thanks.
--
Bruno Andrade <b...@eurotux.com>
Programador (I&D)
Eurotux Informática, S.A. | www.eurotux.com
(t) +351 253 680 300 (m) +351 936 293 858