How to use Hadoop2 HA's logical name URL?

2013-10-24 Thread Liu, Raymond
Hi

I have set up a Hadoop 2.2.0 HA cluster following: 
http://hadoop.apache.org/docs/r2.2.0/hadoop-yarn/hadoop-yarn-site/HDFSHighAvailabilityWithQJM.html#Configuration_details

I can reach both the active and the standby NameNode through the web interface.

However, it seems that the logical name cannot be used to access HDFS.

I have the following HA-related settings:

In core-site.xml:

<property>
  <name>fs.defaultFS</name>
  <value>hdfs://public-cluster</value>
</property>

<property>
  <name>dfs.ha.fencing.methods</name>
  <value>sshfence</value>
</property>

<property>
  <name>dfs.ha.fencing.ssh.private-key-files</name>
  <value>/root/.ssh/id_rsa</value>
</property>

<property>
  <name>dfs.ha.fencing.ssh.connect-timeout</name>
  <value>3</value>
</property>

And in hdfs-site.xml:

<property>
  <name>dfs.nameservices</name>
  <value>public-cluster</value>
</property>

<property>
  <name>dfs.ha.namenodes.public-cluster</name>
  <value>nn1,nn2</value>
</property>

<property>
  <name>dfs.namenode.rpc-address.public-cluster.nn1</name>
  <value>10.0.2.31:8020</value>
</property>

<property>
  <name>dfs.namenode.rpc-address.public-cluster.nn2</name>
  <value>10.0.2.32:8020</value>
</property>

<property>
  <name>dfs.namenode.http-address.public-cluster.nn1</name>
  <value>10.0.2.31:50070</value>
</property>

<property>
  <name>dfs.namenode.http-address.public-cluster.nn2</name>
  <value>10.0.2.32:50070</value>
</property>

<property>
  <name>dfs.namenode.shared.edits.dir</name>
  <value>qjournal://10.0.0.144:8485;10.0.0.145:8485;10.0.0.146:8485/public-cluster</value>
</property>

<property>
  <name>dfs.journalnode.edits.dir</name>
  <value>/mnt/DP_disk1/hadoop2/hdfs/jn</value>
</property>

<property>
  <name>dfs.client.failover.proxy.provider.mycluster</name>
  <value>org.apache.hadoop.hdfs.server.namenode.ha.ConfiguredFailoverProxyProvider</value>
</property>
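(A quick way to sanity-check which values the client side actually resolves is `hdfs getconf`, which ships with Hadoop 2.x and reads the same client configuration the shell commands use. This is a diagnostic sketch, not from the original thread; it assumes you run it from the Hadoop install directory with the config files above on the classpath.)

```shell
# Print the nameservice IDs and NameNode RPC addresses as the client resolves them.
./bin/hdfs getconf -confKey dfs.nameservices
./bin/hdfs getconf -nnRpcAddresses

# The failover proxy provider key must contain the SAME nameservice ID;
# if this lookup fails or returns nothing, the key name doesn't match.
./bin/hdfs getconf -confKey dfs.client.failover.proxy.provider.public-cluster
```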

---

And then:

./bin/hdfs dfs -fs hdfs://public-cluster -ls /
-ls: java.net.UnknownHostException: public-cluster
Usage: hadoop fs [generic options] -ls [-d] [-h] [-R] [path ...]

However, if I use the active NameNode's URL, it works:

./bin/hdfs dfs -fs hdfs://10.0.2.31:8020 -ls /
Found 1 items
drwxr-xr-x   - root supergroup  0 2013-10-24 14:30 /tmp

Shouldn't hdfs://public-cluster work, though? Is there anything I might have missed to make it work? Thanks!



Best Regards,
Raymond Liu



RE: How to use Hadoop2 HA's logical name URL?

2013-10-24 Thread Liu, Raymond
Hmm, my bad. The nameservice ID was out of sync in one of the properties (dfs.client.failover.proxy.provider was still keyed with "mycluster" instead of "public-cluster").
After the fix, it works.
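(For the archives: the mismatch is visible in the config posted above, where the failover proxy provider is keyed as dfs.client.failover.proxy.provider.mycluster while the nameservice is public-cluster. The corrected property would read:)

```xml
<property>
  <name>dfs.client.failover.proxy.provider.public-cluster</name>
  <value>org.apache.hadoop.hdfs.server.namenode.ha.ConfiguredFailoverProxyProvider</value>
</property>
```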

Best Regards,
Raymond Liu


-Original Message-
From: Liu, Raymond [mailto:raymond@intel.com] 
Sent: Thursday, October 24, 2013 3:03 PM
To: user@hadoop.apache.org
Subject: How to use Hadoop2 HA's logical name URL?
