Hmm, my bad. The NameserviceID is out of sync in one of the properties: dfs.client.failover.proxy.provider is keyed with "mycluster" instead of "public-cluster", so it does not match dfs.nameservices.
After fixing it, it works.
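For the record, the corrected property in hdfs-site.xml looks like this; the key suffix must match the nameservice ID declared in dfs.nameservices:

  <property>
    <name>dfs.client.failover.proxy.provider.public-cluster</name>
    <value>org.apache.hadoop.hdfs.server.namenode.ha.ConfiguredFailoverProxyProvider</value>
  </property>

With the key matching, clients can resolve hdfs://public-cluster through the configured failover proxy provider. A minimal sketch of programmatic access (assuming hadoop-client on the classpath and the cluster configs in $HADOOP_CONF_DIR; the class name is just an example):

  import java.net.URI;
  import org.apache.hadoop.conf.Configuration;
  import org.apache.hadoop.fs.FileStatus;
  import org.apache.hadoop.fs.FileSystem;
  import org.apache.hadoop.fs.Path;

  public class LogicalUriLs {
      public static void main(String[] args) throws Exception {
          // Loads core-site.xml / hdfs-site.xml from the classpath, including
          // the dfs.client.failover.proxy.provider.public-cluster fix above.
          Configuration conf = new Configuration();
          // The logical nameservice URI; no physical namenode address needed.
          FileSystem fs = FileSystem.get(URI.create("hdfs://public-cluster"), conf);
          for (FileStatus status : fs.listStatus(new Path("/"))) {
              System.out.println(status.getPath());
          }
      }
  }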
Best Regards,
Raymond Liu
-----Original Message-----
From: Liu, Raymond [mailto:raymond@intel.com]
Sent: Thursday, October 24, 2013 3:03 PM
To: user@hadoop.apache.org
Subject: How to use Hadoop2 HA's logical name URL?
Hi
I have set up a Hadoop 2.2.0 HA cluster following:
http://hadoop.apache.org/docs/r2.2.0/hadoop-yarn/hadoop-yarn-site/HDFSHighAvailabilityWithQJM.html#Configuration_details
And I can check both the active and standby namenodes through the web interface.
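The HA state of each namenode can also be checked from the command line (assuming the stock haadmin tool and the nn1/nn2 IDs configured below):

  ./bin/hdfs haadmin -getServiceState nn1
  ./bin/hdfs haadmin -getServiceState nn2

These report active and standby respectively.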
However, it seems that the logical name cannot be used to access HDFS.
I have the following HA-related settings:
In core-site.xml:
  <property>
    <name>fs.defaultFS</name>
    <value>hdfs://public-cluster</value>
  </property>
  <property>
    <name>dfs.ha.fencing.methods</name>
    <value>sshfence</value>
  </property>
  <property>
    <name>dfs.ha.fencing.ssh.private-key-files</name>
    <value>/root/.ssh/id_rsa</value>
  </property>
  <property>
    <name>dfs.ha.fencing.ssh.connect-timeout</name>
    <value>3</value>
  </property>
And in hdfs-site.xml:
  <property>
    <name>dfs.nameservices</name>
    <value>public-cluster</value>
  </property>
  <property>
    <name>dfs.ha.namenodes.public-cluster</name>
    <value>nn1,nn2</value>
  </property>
  <property>
    <name>dfs.namenode.rpc-address.public-cluster.nn1</name>
    <value>10.0.2.31:8020</value>
  </property>
  <property>
    <name>dfs.namenode.rpc-address.public-cluster.nn2</name>
    <value>10.0.2.32:8020</value>
  </property>
  <property>
    <name>dfs.namenode.http-address.public-cluster.nn1</name>
    <value>10.0.2.31:50070</value>
  </property>
  <property>
    <name>dfs.namenode.http-address.public-cluster.nn2</name>
    <value>10.0.2.32:50070</value>
  </property>
  <property>
    <name>dfs.namenode.shared.edits.dir</name>
    <value>qjournal://10.0.0.144:8485;10.0.0.145:8485;10.0.0.146:8485/public-cluster</value>
  </property>
  <property>
    <name>dfs.journalnode.edits.dir</name>
    <value>/mnt/DP_disk1/hadoop2/hdfs/jn</value>
  </property>
  <property>
    <name>dfs.client.failover.proxy.provider.mycluster</name>
    <value>org.apache.hadoop.hdfs.server.namenode.ha.ConfiguredFailoverProxyProvider</value>
  </property>
---
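A quick way to double-check what the client side actually sees is hdfs getconf:

  ./bin/hdfs getconf -confKey dfs.nameservices
  ./bin/hdfs getconf -confKey fs.defaultFS

These should come back as public-cluster and hdfs://public-cluster, matching the settings above.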
And then:
./bin/hdfs dfs -fs hdfs://public-cluster -ls /
-ls: java.net.UnknownHostException: public-cluster
Usage: hadoop fs [generic options] -ls [-d] [-h] [-R] [<path> ...]
Whereas if I use the active namenode's URL directly, it works:
./bin/hdfs dfs -fs hdfs://10.0.2.31:8020 -ls /
Found 1 items
drwxr-xr-x - root supergroup 0 2013-10-24 14:30 /tmp
However, shouldn't this hdfs://public-cluster kind of URL work? Is there anything I might have missed to make it work? Thanks!
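Note that fs.defaultFS is already set to hdfs://public-cluster, so I would expect the plain form to fail the same way:

  ./bin/hdfs dfs -ls /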
Best Regards,
Raymond Liu