I was more interested in a way to do it programmatically. I found out today that

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

Configuration conf = new Configuration();
conf.addResource(new Path("/etc/hadoop/conf/core-site.xml"));
conf.addResource(new Path("/etc/hadoop/conf/hdfs-site.xml"));
// With HA, fs.defaultFS is the logical nameservice URI, not a single namenode.
String ns = conf.get("fs.defaultFS");
FileSystem fs = FileSystem.get(conf);

does what I need without having to care about which namenode is active.
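The value the snippet above picks up is just the fs.defaultFS property in core-site.xml, which under HA points at the logical nameservice rather than a host. As a dependency-free illustration of what that lookup resolves to (plain DOM parsing, no Hadoop on the classpath; the hdfs://mycluster value is a made-up example), a sketch:

```java
import java.nio.file.Files;
import java.nio.file.Path;
import javax.xml.parsers.DocumentBuilderFactory;
import org.w3c.dom.Document;
import org.w3c.dom.Element;
import org.w3c.dom.NodeList;

public class DefaultFsLookup {
    // Reads the value of a named property from a Hadoop-style *-site.xml file.
    static String getProperty(Path siteXml, String name) throws Exception {
        Document doc = DocumentBuilderFactory.newInstance()
                .newDocumentBuilder().parse(siteXml.toFile());
        NodeList props = doc.getElementsByTagName("property");
        for (int i = 0; i < props.getLength(); i++) {
            Element p = (Element) props.item(i);
            String n = p.getElementsByTagName("name").item(0).getTextContent().trim();
            if (n.equals(name)) {
                return p.getElementsByTagName("value").item(0).getTextContent().trim();
            }
        }
        return null;
    }

    public static void main(String[] args) throws Exception {
        // Minimal core-site.xml with a logical HA nameservice (example value).
        Path tmp = Files.createTempFile("core-site", ".xml");
        Files.writeString(tmp,
            "<configuration>\n" +
            "  <property>\n" +
            "    <name>fs.defaultFS</name>\n" +
            "    <value>hdfs://mycluster</value>\n" +
            "  </property>\n" +
            "</configuration>\n");
        System.out.println(getProperty(tmp, "fs.defaultFS"));
    }
}
```

Hadoop's Configuration does essentially this resolution (plus variable expansion and overlays), which is why the client never needs to know which physical namenode is active.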

/Magnus

On 2014-12-08 22:18, Andras POTOCZKY wrote:
hi

# sudo -u hdfs hdfs haadmin -getServiceState nn1
active
# sudo -u hdfs hdfs haadmin -getServiceState nn2
standby

where nn1 and nn2 are the values of the dfs.ha.namenodes.mycluster property.

Is this what you need?

Andras


On 2014.12.08. 21:12, Magnus Runesson wrote:
I develop an application that will access HDFS. Is there a single API to get current active namenode?

I want it be independent of if my cluster has HA NameNode deployed or a single NameNode. The typical Hadoop-client configuration files will be installed on the host.

/Magnus
