[ https://issues.apache.org/jira/browse/HDFS-12109?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16080591#comment-16080591 ]
Allen Wittenauer commented on HDFS-12109:
-----------------------------------------

The HADOOP_CONF_DIR environment variable is how the shell scripts find where hadoop-env.sh is located. Given what I can infer from your description, Hadoop 3.x would work fine because it can automatically determine where everything is located based upon the executable's location. But Hadoop 2.x has a lot of bugs, so it needs to have (minimally) HADOOP_PREFIX defined outside of the shell script code. If that is defined, it should know where everything is located, including auto-defining HADOOP_CONF_DIR to be HADOOP_PREFIX/etc/hadoop. (A minimal sketch of this setup appears after the quoted issue below.)

> "fs" java.net.UnknownHostException when HA NameNode is used
> -----------------------------------------------------------
>
>                 Key: HDFS-12109
>                 URL: https://issues.apache.org/jira/browse/HDFS-12109
>             Project: Hadoop HDFS
>          Issue Type: Bug
>          Components: fs
>    Affects Versions: 2.8.0
>         Environment: [hadoop@namenode01 ~]$ cat /etc/redhat-release
> CentOS Linux release 7.3.1611 (Core)
> [hadoop@namenode01 ~]$ uname -a
> Linux namenode01 3.10.0-514.10.2.el7.x86_64 #1 SMP Fri Mar 3 00:04:05 UTC 2017 x86_64 x86_64 x86_64 GNU/Linux
> [hadoop@namenode01 ~]$ java -version
> java version "1.8.0_131"
> Java(TM) SE Runtime Environment (build 1.8.0_131-b11)
> Java HotSpot(TM) 64-Bit Server VM (build 25.131-b11, mixed mode)
>            Reporter: Luigi Di Fraia
>
> After setting up an HA NameNode configuration, the following invocation of "fs" fails:
>
> [hadoop@namenode01 ~]$ /usr/local/hadoop/bin/hdfs dfs -ls /
> -ls: java.net.UnknownHostException: saccluster
>
> It works if the properties are passed explicitly on the command line via -D options, as per below:
>
> /usr/local/hadoop/bin/hdfs dfs \
>     -Ddfs.nameservices=saccluster \
>     -Ddfs.client.failover.proxy.provider.saccluster=org.apache.hadoop.hdfs.server.namenode.ha.ConfiguredFailoverProxyProvider \
>     -Ddfs.ha.namenodes.saccluster=namenode01,namenode02 \
>     -Ddfs.namenode.rpc-address.saccluster.namenode01=namenode01:8020 \
>     -Ddfs.namenode.rpc-address.saccluster.namenode02=namenode02:8020 \
>     -ls /
>
> These properties are defined in /usr/local/hadoop/etc/hadoop/hdfs-site.xml as per below:
>
> <property>
>     <name>dfs.nameservices</name>
>     <value>saccluster</value>
> </property>
> <property>
>     <name>dfs.ha.namenodes.saccluster</name>
>     <value>namenode01,namenode02</value>
> </property>
> <property>
>     <name>dfs.namenode.rpc-address.saccluster.namenode01</name>
>     <value>namenode01:8020</value>
> </property>
> <property>
>     <name>dfs.namenode.rpc-address.saccluster.namenode02</name>
>     <value>namenode02:8020</value>
> </property>
> <property>
>     <name>dfs.namenode.http-address.saccluster.namenode01</name>
>     <value>namenode01:50070</value>
> </property>
> <property>
>     <name>dfs.namenode.http-address.saccluster.namenode02</name>
>     <value>namenode02:50070</value>
> </property>
> <property>
>     <name>dfs.namenode.shared.edits.dir</name>
>     <value>qjournal://namenode01:8485;namenode02:8485;datanode01:8485/saccluster</value>
> </property>
> <property>
>     <name>dfs.client.failover.proxy.provider.mycluster</name>
>     <value>org.apache.hadoop.hdfs.server.namenode.ha.ConfiguredFailoverProxyProvider</value>
> </property>
>
> In /usr/local/hadoop/etc/hadoop/core-site.xml the default FS is defined as per below:
>
> <property>
>     <name>fs.defaultFS</name>
>     <value>hdfs://saccluster</value>
> </property>
>
> In /usr/local/hadoop/etc/hadoop/hadoop-env.sh the following export is defined:
>
> export HADOOP_CONF_DIR="/usr/local/hadoop/etc/hadoop"
>
> Is "fs" trying to read these properties from somewhere else, such as a separate client configuration file?
> Apologies if I am missing something obvious here.
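
Following up on the comment above, here is a minimal sketch of the Hadoop 2.x setup being described, using the install root from the report (/usr/local/hadoop). Treat it as illustrative rather than definitive; exact variable handling varies across 2.x releases:

# Define the install root outside the shell script code, e.g. in the
# invoking user's profile, so the 2.x scripts can locate everything:
export HADOOP_PREFIX=/usr/local/hadoop

# With HADOOP_PREFIX set, HADOOP_CONF_DIR should be auto-defined as
# $HADOOP_PREFIX/etc/hadoop; exporting it explicitly is then redundant:
export HADOOP_CONF_DIR="$HADOOP_PREFIX/etc/hadoop"

# The client should now pick up the HA settings from hdfs-site.xml:
"$HADOOP_PREFIX/bin/hdfs" dfs -ls /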
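
As for the reporter's closing question about where "fs" reads its properties: one way to check which configuration the client actually resolves is the getconf subcommand, which loads the same configuration directory as "hdfs dfs". A sketch, using the paths from the report:

# Print the values the client sees after configuration resolution:
/usr/local/hadoop/bin/hdfs getconf -confKey fs.defaultFS
/usr/local/hadoop/bin/hdfs getconf -confKey dfs.ha.namenodes.saccluster

# List the NameNodes the client would contact for the nameservice:
/usr/local/hadoop/bin/hdfs getconf -namenodes

If these print the expected values, the client is reading the hdfs-site.xml shown above; if not, HADOOP_CONF_DIR is resolving to a different directory.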