[jira] [Updated] (HDFS-12109) "fs" java.net.UnknownHostException when HA NameNode is used

2017-07-10 Thread Luigi Di Fraia (JIRA)

[ https://issues.apache.org/jira/browse/HDFS-12109?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Luigi Di Fraia updated HDFS-12109:
--
Description: 
After setting up an HA NameNode configuration, the following invocation of "fs" 
fails:

[hadoop@namenode01 ~]$ /usr/local/hadoop/bin/hdfs dfs -ls /
-ls: java.net.UnknownHostException: saccluster

It works if the relevant properties are passed explicitly on the command line:

/usr/local/hadoop/bin/hdfs dfs \
    -Ddfs.nameservices=saccluster \
    -Ddfs.client.failover.proxy.provider.saccluster=org.apache.hadoop.hdfs.server.namenode.ha.ConfiguredFailoverProxyProvider \
    -Ddfs.ha.namenodes.saccluster=namenode01,namenode02 \
    -Ddfs.namenode.rpc-address.saccluster.namenode01=namenode01:8020 \
    -Ddfs.namenode.rpc-address.saccluster.namenode02=namenode02:8020 \
    -ls /
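
For illustration, the same client-side settings can be applied programmatically. The following is a minimal, untested sketch (the HaListExample class name is made up; the property names and values are the ones from the invocation above, and the Hadoop client libraries are assumed to be on the classpath):

import java.net.URI;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileStatus;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

public class HaListExample {
    public static void main(String[] args) throws Exception {
        Configuration conf = new Configuration();
        // Same client-side HA settings that the -D options above supply.
        conf.set("dfs.nameservices", "saccluster");
        conf.set("dfs.ha.namenodes.saccluster", "namenode01,namenode02");
        conf.set("dfs.namenode.rpc-address.saccluster.namenode01", "namenode01:8020");
        conf.set("dfs.namenode.rpc-address.saccluster.namenode02", "namenode02:8020");
        conf.set("dfs.client.failover.proxy.provider.saccluster",
                "org.apache.hadoop.hdfs.server.namenode.ha.ConfiguredFailoverProxyProvider");

        // "saccluster" is a logical nameservice resolved by the failover proxy
        // provider, not a DNS name; without the settings above the client falls
        // back to DNS and fails with UnknownHostException.
        try (FileSystem fs = FileSystem.get(URI.create("hdfs://saccluster"), conf)) {
            for (FileStatus status : fs.listStatus(new Path("/"))) {
                System.out.println(status.getPath());
            }
        }
    }
}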

These properties are defined in /usr/local/hadoop/etc/hadoop/hdfs-site.xml as follows:


<property>
    <name>dfs.nameservices</name>
    <value>saccluster</value>
</property>
<property>
    <name>dfs.ha.namenodes.saccluster</name>
    <value>namenode01,namenode02</value>
</property>
<property>
    <name>dfs.namenode.rpc-address.saccluster.namenode01</name>
    <value>namenode01:8020</value>
</property>
<property>
    <name>dfs.namenode.rpc-address.saccluster.namenode02</name>
    <value>namenode02:8020</value>
</property>
<property>
    <name>dfs.namenode.http-address.saccluster.namenode01</name>
    <value>namenode01:50070</value>
</property>
<property>
    <name>dfs.namenode.http-address.saccluster.namenode02</name>
    <value>namenode02:50070</value>
</property>
<property>
    <name>dfs.namenode.shared.edits.dir</name>
    <value>qjournal://namenode01:8485;namenode02:8485;datanode01:8485/saccluster</value>
</property>
<property>
    <name>dfs.client.failover.proxy.provider.mycluster</name>
    <value>org.apache.hadoop.hdfs.server.namenode.ha.ConfiguredFailoverProxyProvider</value>
</property>


In /usr/local/hadoop/etc/hadoop/core-site.xml the default file system is defined as follows:


<property>
    <name>fs.defaultFS</name>
    <value>hdfs://saccluster</value>
</property>


In /usr/local/hadoop/etc/hadoop/hadoop-env.sh the following export is defined:

export HADOOP_CONF_DIR="/usr/local/hadoop/etc/hadoop"

Is "fs" trying to read these properties from somewhere else, such as a separate 
client configuration file?
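
One way to check is to see what a client-side Configuration actually resolves. The following is a rough diagnostic sketch (the PrintResolvedConf class name is made up; it assumes it runs with the same classpath as the hdfs command, so that the directory in HADOOP_CONF_DIR is included):

import org.apache.hadoop.hdfs.HdfsConfiguration;

public class PrintResolvedConf {
    public static void main(String[] args) {
        // HdfsConfiguration loads core-site.xml and hdfs-site.xml from the
        // classpath, which should include the HADOOP_CONF_DIR directory.
        HdfsConfiguration conf = new HdfsConfiguration();
        System.out.println("fs.defaultFS = " + conf.get("fs.defaultFS"));
        System.out.println("dfs.nameservices = " + conf.get("dfs.nameservices"));
        System.out.println("dfs.ha.namenodes.saccluster = "
                + conf.get("dfs.ha.namenodes.saccluster"));
        System.out.println("dfs.client.failover.proxy.provider.saccluster = "
                + conf.get("dfs.client.failover.proxy.provider.saccluster"));
    }
}

If fs.defaultFS comes back as hdfs://saccluster while the dfs.* HA keys come back null, the client is picking up core-site.xml but not the HA entries in hdfs-site.xml, which would explain the UnknownHostException. The same check should also be possible from the shell with hdfs getconf -confKey dfs.nameservices.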

Apologies if I am missing something obvious here.

> "fs" java.net.UnknownHostException when HA NameNode is used
> ---
>
> Key: HDFS-12109
> URL: https://issues.apache.org/jira/browse/HDFS-12109
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: fs
> Affects Versions: 2.8.0
> Environment: [hadoop@namenode01 ~]$ cat /etc/redhat-release
> CentOS Linux release 7.3.1611 (Core)
> [hadoop@namenode01 ~]$ uname -a
> Linux namenode01 3.10.0-514.10.2.el7.x86_64 #1 SMP Fri Mar 3 00:04:05 UTC 
> 2017 x86_64 x86_64 x86_64 GNU/Linux
> [hadoop@namenode01 ~]$ java -version
> java version "1.8.0_131"
> Java(TM) SE Runtime Environment (build 1.8.0_131-b11)
> Java HotSpot(TM) 64-Bit Server VM (build 25.131-b11, mixed mode)
> Reporter: Luigi Di Fraia
>
> After setting up an HA NameNode configuration, the following invocation of 
> "fs" 
