Re: Unix script for identifying current active namenode in a HA cluster

2014-11-05 Thread Devopam Mittra
hi Nitin,
Thanks for the vital input around the Hadoop Home addition. At times such
things totally go off the radar when you have customized your own
environment.

As suggested, I have shared this on GitHub:
https://github.com/devopam/hadoopHA
Apologies if there is any problem on GitHub, as I have limited familiarity
with it :(


regards
Devopam



On Wed, Nov 5, 2014 at 12:31 PM, Nitin Pawar nitinpawar...@gmail.com
wrote:

 +1
 If you can optionally add the Hadoop home directory in the script and use
 that in the PATH, it can be used out of the box.

 Also, can you share this on GitHub?

 On Wed, Nov 5, 2014 at 10:02 AM, Devopam Mittra devo...@gmail.com wrote:

 hi All,
 Please find attached a simple shell script to dynamically determine the
 active NameNode in the HA cluster and subsequently run the Hive job / query
 via Talend OS-generated workflows.

 It was tried successfully on an HDP 2.1 cluster with 2 NameNodes and 7
 DataNodes running CentOS 6.5.
 Each ETL job in our framework invokes this script first to derive the active
 NameNode's FQDN and then runs the Hive jobs, so as to avoid failures.
 It takes a maximum of 2 seconds to execute (a small cost in our case,
 compared to dealing with a failure and then recalculating the NN to resubmit
 the job).

 Sharing it with you in case you can leverage it without spending the effort
 to code it yourself.

 Do share your feedback / fixes if you spot any.

 --
 Devopam Mittra
 Life and Relations are not binary




 --
 Nitin Pawar




-- 
Devopam Mittra
Life and Relations are not binary


findActiveNameNode.sh
Description: Bourne shell script
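
The attachment itself is not reproduced in the archive, so the block below is
only a minimal sketch of how an active-NameNode lookup like this can be done
with the standard hdfs getconf / hdfs haadmin commands; the HADOOP_HOME
default and the single-nameservice assumption are illustrative and not taken
from findActiveNameNode.sh.

  #!/bin/sh
  # Illustrative sketch only -- not the attached findActiveNameNode.sh.
  # Assumes one HA nameservice and a default HDP-style HADOOP_HOME location.
  HADOOP_HOME="${HADOOP_HOME:-/usr/hdp/current/hadoop-client}"
  PATH="${HADOOP_HOME}/bin:${PATH}"
  export PATH

  # Nameservice ID (e.g. "mycluster") and its NameNode IDs (e.g. "nn1,nn2").
  NAMESERVICE="$(hdfs getconf -confKey dfs.nameservices)"
  NN_IDS="$(hdfs getconf -confKey "dfs.ha.namenodes.${NAMESERVICE}" | tr ',' ' ')"

  # Print the FQDN of the NameNode whose HA state is reported as "active".
  for NNID in ${NN_IDS}
  do
    if [ "$(hdfs haadmin -getServiceState "${NNID}" 2>/dev/null)" = "active" ]
    then
      # rpc-address is host:port; strip the port to keep only the FQDN.
      hdfs getconf -confKey "dfs.namenode.rpc-address.${NAMESERVICE}.${NNID}" \
        | cut -d: -f1
      exit 0
    fi
  done
  echo "No active NameNode found" >&2
  exit 1

An ETL wrapper can then pick up the FQDN before submitting its Hive work,
e.g. ACTIVE_NN="$(findActiveNameNode.sh)", which is the pattern described in
the thread.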


Re: Unix script for identifying current active namenode in a HA cluster

2014-11-05 Thread Nitin Pawar
looks good to me

thanks for the share

On Wed, Nov 5, 2014 at 5:15 PM, Devopam Mittra devo...@gmail.com wrote:

 hi Nitin,
 Thanks for the vital input around the Hadoop Home addition. At times such
 things totally go off the radar when you have customized your own
 environment.

 As suggested, I have shared this on GitHub:
 https://github.com/devopam/hadoopHA
 Apologies if there is any problem on GitHub, as I have limited familiarity
 with it :(


 regards
 Devopam



 On Wed, Nov 5, 2014 at 12:31 PM, Nitin Pawar nitinpawar...@gmail.com
 wrote:

 +1
 If you can optionally add the Hadoop home directory in the script and use
 that in the PATH, it can be used out of the box.

 Also, can you share this on GitHub?

 On Wed, Nov 5, 2014 at 10:02 AM, Devopam Mittra devo...@gmail.com
 wrote:

 hi All,
 Please find attached a simple shell script to dynamically determine the
 active NameNode in the HA cluster and subsequently run the Hive job / query
 via Talend OS-generated workflows.

 It was tried successfully on an HDP 2.1 cluster with 2 NameNodes and 7
 DataNodes running CentOS 6.5.
 Each ETL job in our framework invokes this script first to derive the active
 NameNode's FQDN and then runs the Hive jobs, so as to avoid failures.
 It takes a maximum of 2 seconds to execute (a small cost in our case,
 compared to dealing with a failure and then recalculating the NN to resubmit
 the job).

 Sharing it with you in case you can leverage it without spending the effort
 to code it yourself.

 Do share your feedback / fixes if you spot any.

 --
 Devopam Mittra
 Life and Relations are not binary




 --
 Nitin Pawar




 --
 Devopam Mittra
 Life and Relations are not binary




-- 
Nitin Pawar


Re: Unix script for identifying current active namenode in a HA cluster

2014-11-04 Thread Nitin Pawar
+1
If you can optionally add the Hadoop home directory in the script and use
that in the PATH, it can be used out of the box.

Also, can you share this on GitHub?
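
One way the optional Hadoop home could be wired in, as suggested here (the
argument handling and the default path are assumptions for illustration, not
taken from the script):

  # Optional first argument overrides HADOOP_HOME; otherwise fall back to the
  # environment, and finally to a default install location, so the hdfs
  # binary is found on the PATH without per-host customization.
  HADOOP_HOME="${1:-${HADOOP_HOME:-/usr/hdp/current/hadoop-client}}"
  PATH="${HADOOP_HOME}/bin:${PATH}"
  export PATH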

On Wed, Nov 5, 2014 at 10:02 AM, Devopam Mittra devo...@gmail.com wrote:

 hi All,
 Please find attached a simple shell script to dynamically determine the
 active NameNode in the HA cluster and subsequently run the Hive job / query
 via Talend OS-generated workflows.

 It was tried successfully on an HDP 2.1 cluster with 2 NameNodes and 7
 DataNodes running CentOS 6.5.
 Each ETL job in our framework invokes this script first to derive the active
 NameNode's FQDN and then runs the Hive jobs, so as to avoid failures.
 It takes a maximum of 2 seconds to execute (a small cost in our case,
 compared to dealing with a failure and then recalculating the NN to resubmit
 the job).

 Sharing it with you in case you can leverage it without spending the effort
 to code it yourself.

 Do share your feedback / fixes if you spot any.

 --
 Devopam Mittra
 Life and Relations are not binary




-- 
Nitin Pawar


Re: Unix script for identifying current active namenode in a HA cluster

2014-11-04 Thread Muthu Pandi
Good work, Devopam Mittra.



Regards,
Muthupandi.K

 Think before you print.



On Wed, Nov 5, 2014 at 12:31 PM, Nitin Pawar nitinpawar...@gmail.com
wrote:

 +1
 If you can optionally add the Hadoop home directory in the script and use
 that in the PATH, it can be used out of the box.

 Also, can you share this on GitHub?

 On Wed, Nov 5, 2014 at 10:02 AM, Devopam Mittra devo...@gmail.com wrote:

 hi All,
 Please find attached a simple shell script to dynamically determine the
 active NameNode in the HA cluster and subsequently run the Hive job / query
 via Talend OS-generated workflows.

 It was tried successfully on an HDP 2.1 cluster with 2 NameNodes and 7
 DataNodes running CentOS 6.5.
 Each ETL job in our framework invokes this script first to derive the active
 NameNode's FQDN and then runs the Hive jobs, so as to avoid failures.
 It takes a maximum of 2 seconds to execute (a small cost in our case,
 compared to dealing with a failure and then recalculating the NN to resubmit
 the job).

 Sharing it with you in case you can leverage it without spending the effort
 to code it yourself.

 Do share your feedback / fixes if you spot any.

 --
 Devopam Mittra
 Life and Relations are not binary




 --
 Nitin Pawar