Using distcp with Hadoop HA

2013-01-29 Thread Dhaval Shah
Hello everyone. I am trying to use distcp with a Hadoop HA configuration (on
CDH 4.0.0 at the moment). Here is my problem:
- I am trying to distcp from cluster A to cluster B. Since no operations
are supported on the standby namenode, I need to either specify the active
namenode when invoking distcp or use the failover proxy provider
(dfs.client.failover.proxy.provider.clusterA), where I can specify the two
namenodes for cluster B and the failover code inside HDFS will figure out
which one is active (example invocations below).
- If I use the failover proxy provider, some of my datanodes on cluster A
connect to the namenode on cluster B and vice versa. I assume that is
because I have configured both nameservices in my hdfs-site.xml for distcp
to work. I have set dfs.nameservice.id to the right one, but the datanodes
do not seem to respect it.
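
To be concrete, the two invocation styles I am choosing between look roughly
like this (the paths are placeholders; the nameservice and namenode names are
the ones from my config below):

# pointing distcp at a specific (currently active) namenode of cluster B
hadoop distcp hdfs://clusterA/src/data hdfs://clusterBnn1:8000/dst/data

# letting the failover proxy provider resolve the nameservice
hadoop distcp hdfs://clusterA/src/data hdfs://clusterB/dst/data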

What is the best way to use distcp with a Hadoop HA configuration without
having the datanodes connect to the remote namenode? Thanks
 
Regards,
Dhaval

Re: Using distcp with Hadoop HA

2013-01-29 Thread Suresh Srinivas
Currently, as you have pointed out, client-side, configuration-based
failover is used in an HA setup. The configuration must define the namenode
addresses for the nameservices of both clusters. Are the datanodes
belonging to the two clusters running on the same set of nodes? Can you
share the configuration you are using, to diagnose the problem?
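
At a minimum, "defining the addresses for both nameservices" means entries
along these lines on the cluster running distcp (a sketch only; the remote
nameservice ID, namenode IDs, hosts, and port here are placeholders):

<property>
  <name>dfs.nameservices</name>
  <value>clusterA,clusterB</value>
</property>
<property>
  <name>dfs.ha.namenodes.clusterB</name>
  <value>nn1,nn2</value>
</property>
<property>
  <name>dfs.namenode.rpc-address.clusterB.nn1</name>
  <value>remotehost1:8020</value>
</property>
<property>
  <name>dfs.namenode.rpc-address.clusterB.nn2</name>
  <value>remotehost2:8020</value>
</property>
<property>
  <name>dfs.client.failover.proxy.provider.clusterB</name>
  <value>org.apache.hadoop.hdfs.server.namenode.ha.ConfiguredFailoverProxyProvider</value>
</property>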

> - I am trying to distcp from cluster A to cluster B. Since no operations
> are supported on the standby namenode, I need to either specify the active
> namenode when invoking distcp or use the failover proxy provider
> (dfs.client.failover.proxy.provider.clusterA), where I can specify the two
> namenodes for cluster B and the failover code inside HDFS will figure out
> which one is active.
>
> - If I use the failover proxy provider, some of my datanodes on cluster A
> connect to the namenode on cluster B and vice versa. I assume that is
> because I have configured both nameservices in my hdfs-site.xml for distcp
> to work. I have set dfs.nameservice.id to the right one, but the datanodes
> do not seem to respect it.
>
> What is the best way to use distcp with a Hadoop HA configuration without
> having the datanodes connect to the remote namenode? Thanks
>
> Regards,
> Dhaval




-- 
http://hortonworks.com/download/


Re: Using distcp with Hadoop HA

2013-01-29 Thread Dhaval Shah
No, the datanodes are running on different sets of machines. The problem is
that datanodes in clusterA are trying to connect to namenodes in clusterB
(and this seems random, as if they are randomly selecting from the four
namenodes). The configuration looks like this:

<property>
  <name>dfs.nameservices</name>
  <value>clusterA,clusterB</value>
  <description>
    Comma-separated list of nameservices.
  </description>
  <final>true</final>
</property>
<property>
  <name>dfs.nameservice.id</name>
  <value>clusterA</value>
  <description>
    The ID of this nameservice. If the nameservice ID is not
    configured, or more than one nameservice is configured for
    dfs.nameservices, it is determined automatically by
    matching the local node's address with the configured address.
  </description>
  <final>true</final>
</property>
<property>
  <name>dfs.ha.namenodes.clusterA</name>
  <value>clusterAnn1,clusterAnn2</value>
  <description>
    The prefix for a given nameservice contains a comma-separated
    list of namenodes for that nameservice (e.g. EXAMPLENAMESERVICE).
  </description>
  <final>true</final>
</property>
<property>
  <name>dfs.namenode.rpc-address.clusterA.clusterAnn1</name>
  <value>clusterAnn1:8000</value>
  <description>
    Set the full address and IPC port of the NameNode process.
  </description>
  <final>true</final>
</property>
<property>
  <name>dfs.namenode.rpc-address.clusterA.clusterAnn2</name>
  <value>clusterAnn2:8000</value>
  <description>
    Set the full address and IPC port of the NameNode process.
  </description>
  <final>true</final>
</property>
<property>
  <name>dfs.ha.namenodes.clusterB</name>
  <value>clusterBnn1,clusterBnn2</value>
  <description>
    The prefix for a given nameservice contains a comma-separated
    list of namenodes for that nameservice (e.g. EXAMPLENAMESERVICE).
  </description>
  <final>true</final>
</property>
<property>
  <name>dfs.namenode.rpc-address.clusterB.clusterBnn1</name>
  <value>clusterBnn1:8000</value>
  <description>
    Set the full address and IPC port of the NameNode process.
  </description>
  <final>true</final>
</property>
<property>
  <name>dfs.namenode.rpc-address.clusterB.clusterBnn2</name>
  <value>clusterBnn2:8000</value>
  <description>
    Set the full address and IPC port of the NameNode process.
  </description>
  <final>true</final>
</property>
<property>
  <name>dfs.client.failover.proxy.provider.clusterA</name>
  <value>org.apache.hadoop.hdfs.server.namenode.ha.ConfiguredFailoverProxyProvider</value>
  <description>
    Configure the name of the Java class which the DFS client will
    use to determine which NameNode is the current active,
    and therefore which NameNode is currently serving client requests.
  </description>
  <final>true</final>
</property>
<property>
  <name>dfs.client.failover.proxy.provider.clusterB</name>
  <value>org.apache.hadoop.hdfs.server.namenode.ha.ConfiguredFailoverProxyProvider</value>
  <description>
    Configure the name of the Java class which the DFS client will
    use to determine which NameNode is the current active,
    and therefore which NameNode is currently serving client requests.
  </description>
  <final>true</final>
</property>
 
Regards,
Dhaval


