Post your config files and tell us which method you are following for automatic failover.
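For reference, here is a minimal sketch (not from the original thread) of how the automatic-failover settings can be inspected from the shell with hdfs getconf; the nameservice ID "mycluster" is a placeholder, not a value from this cluster:

    # Automatic failover needs ZKFC plus these keys in hdfs-site.xml /
    # core-site.xml; print what the cluster actually resolves them to.
    hdfs getconf -confKey dfs.nameservices
    hdfs getconf -confKey dfs.ha.namenodes.mycluster          # placeholder nameservice id
    hdfs getconf -confKey dfs.ha.automatic-failover.enabled   # must be true
    hdfs getconf -confKey ha.zookeeper.quorum                 # ZooKeeper ensemble used by ZKFC
    hdfs getconf -confKey dfs.ha.fencing.methods              # fencing is required for failover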
On Mon, Dec 2, 2013 at 5:34 PM, YouPeng Yang yypvsxf19870...@gmail.com wrote:
Hi,
I'm testing HA auto-failover with hadoop-2.2.0.
The cluster can be failed over manually; however, automatic failover failed. The
result should look like below:
node1        node2
---------    ---------
process1     process3
process2     process4
Can someone please help with this?
Thanks in advance.
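For what it's worth, a minimal sketch of the checks commonly used when manual failover works but automatic failover does not (the namenode IDs nn1/nn2 are placeholders; the commands are standard hadoop-2.2.0 CLI, the log path depends on HADOOP_LOG_DIR):

    # 1. The ZKFC daemon must be running next to each NameNode.
    jps | grep -E 'NameNode|DFSZKFailoverController'
    # 2. Ask each NameNode for its current HA state.
    hdfs haadmin -getServiceState nn1
    hdfs haadmin -getServiceState nn2
    # 3. If the failover controllers never initialized ZooKeeper,
    #    format the ZK state once, then restart the ZKFCs.
    hdfs zkfc -formatZK
    # 4. The ZKFC logs usually show why fencing or the election failed.
    tail -n 100 $HADOOP_HOME/logs/*zkfc*.log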
--
Pavan Kumar Polineni
node to (SNN, JT, DN, TT) and all are working. I kept the other data node as it is.
I changed the configurations to link up the NN and JT.
From there, when I try to run an MR job, it does not run.
Please help me. Thanks.
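As a hedged sketch only (assuming the stock Hadoop-1 property names fs.default.name and mapred.job.tracker, not Pavan's actual files), these are the checks usually run after moving the NN and JT and editing core-site.xml / mapred-site.xml:

    # fs.default.name (core-site.xml) must point at the NameNode host:port;
    # a simple listing proves HDFS is reachable with the new value.
    hadoop fs -ls /
    # mapred.job.tracker (mapred-site.xml) must point at the JobTracker;
    # listing jobs proves the JT is reachable from this node.
    hadoop job -list
    # After changing the config, restart the MapReduce daemons on every
    # node so the TaskTrackers re-register with the new JobTracker.
    stop-mapred.sh && start-mapred.sh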
--
Pavan Kumar Polineni
--
*From:* Pavan Kumar Polineni smartsunny...@gmail.com
*To:* user@hadoop.apache.org
*Sent:* Sunday, June 23, 2013 6:20 AM
*Subject:* MapReduce job not running - i think i keep all correct configuration.
Hi all,
First, I have a machine with all the daemons running.
I am using Hadoop-1; I don't want HA.
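For reference, a minimal sketch (not from the original mail) of confirming that every Hadoop-1 daemon is actually up on that single machine:

    # jps ships with the JDK; on one machine running everything,
    # all five Hadoop-1 daemons should appear.
    jps
    # expected process names (PIDs will differ):
    #   NameNode  SecondaryNameNode  DataNode  JobTracker  TaskTracker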
On Wed, Jun 19, 2013 at 12:20 PM, Azuryy Yu azury...@gmail.com wrote:
Hey Pavan,
Hadoop 2.x has HDFS HA. Which Hadoop version are you using?
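A quick, standard way to answer that question on the cluster itself:

    # prints the exact release, e.g. "Hadoop 1.2.1" or "Hadoop 2.2.0"
    hadoop version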
On Wed, Jun 19, 2013 at 2:46 PM, Pavan Kumar Polineni
smartsunny...@gmail.com wrote:
So you want to know both how to crash the namenode and how to recover it, and you are checking whether that can be done using the secondary namenode?
Is that correct?
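In case it helps, a hedged sketch (Hadoop-1 commands; the simulated crash and the placeholder PID are assumptions, not anyone's actual procedure) of crashing the NameNode and recovering it from the secondary namenode's last checkpoint:

    # "crash" the NameNode (or power off its host); find its PID first.
    jps | grep NameNode
    kill -9 <namenode_pid>            # <namenode_pid> is a placeholder
    # Recovery: start a NameNode with an empty dfs.name.dir and let it
    # import the latest checkpoint written by the SecondaryNameNode
    # into fs.checkpoint.dir (edits made after that checkpoint are lost).
    hadoop namenode -importCheckpoint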
On Wed, Jun 19, 2013 at 12:34 PM, Pavan Kumar Polineni
smartsunny...@gmail.com wrote:
Hi Manoj,
If we power off the host, then the secondary namenode also goes