[jira] [Commented] (HDFS-12109) "fs" java.net.UnknownHostException when HA NameNode is used
[ https://issues.apache.org/jira/browse/HDFS-12109?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17235106#comment-17235106 ]

Daniel Howard commented on HDFS-12109:
--------------------------------------

PS: thank you, [~luigidifraia], for documenting this issue and [~surendrasingh] for the suggested fix. I am setting up HA right now and I made the same mistake of copy-pasting {{dfs.client.failover.proxy.provider.mycluster}} into my configuration!

> "fs" java.net.UnknownHostException when HA NameNode is used
> ------------------------------------------------------------
>
>                 Key: HDFS-12109
>                 URL: https://issues.apache.org/jira/browse/HDFS-12109
>             Project: Hadoop HDFS
>          Issue Type: Bug
>          Components: fs
>    Affects Versions: 2.8.0
>         Environment: [hadoop@namenode01 ~]$ cat /etc/redhat-release
> CentOS Linux release 7.3.1611 (Core)
> [hadoop@namenode01 ~]$ uname -a
> Linux namenode01 3.10.0-514.10.2.el7.x86_64 #1 SMP Fri Mar 3 00:04:05 UTC 2017 x86_64 x86_64 x86_64 GNU/Linux
> [hadoop@namenode01 ~]$ java -version
> java version "1.8.0_131"
> Java(TM) SE Runtime Environment (build 1.8.0_131-b11)
> Java HotSpot(TM) 64-Bit Server VM (build 25.131-b11, mixed mode)
>            Reporter: Luigi Di Fraia
>            Priority: Major
>
> After setting up an HA NameNode configuration, the following invocation of "fs" fails:
> [hadoop@namenode01 ~]$ /usr/local/hadoop/bin/hdfs dfs -ls /
> -ls: java.net.UnknownHostException: saccluster
> It works if the properties are passed on the command line as per below:
> /usr/local/hadoop/bin/hdfs dfs -Ddfs.nameservices=saccluster -Ddfs.client.failover.proxy.provider.saccluster=org.apache.hadoop.hdfs.server.namenode.ha.ConfiguredFailoverProxyProvider -Ddfs.ha.namenodes.saccluster=namenode01,namenode02 -Ddfs.namenode.rpc-address.saccluster.namenode01=namenode01:8020 -Ddfs.namenode.rpc-address.saccluster.namenode02=namenode02:8020 -ls /
> These properties are defined in /usr/local/hadoop/etc/hadoop/hdfs-site.xml as per below:
> <property>
>   <name>dfs.nameservices</name>
>   <value>saccluster</value>
> </property>
> <property>
>   <name>dfs.ha.namenodes.saccluster</name>
>   <value>namenode01,namenode02</value>
> </property>
> <property>
>   <name>dfs.namenode.rpc-address.saccluster.namenode01</name>
>   <value>namenode01:8020</value>
> </property>
> <property>
>   <name>dfs.namenode.rpc-address.saccluster.namenode02</name>
>   <value>namenode02:8020</value>
> </property>
> <property>
>   <name>dfs.namenode.http-address.saccluster.namenode01</name>
>   <value>namenode01:50070</value>
> </property>
> <property>
>   <name>dfs.namenode.http-address.saccluster.namenode02</name>
>   <value>namenode02:50070</value>
> </property>
> <property>
>   <name>dfs.namenode.shared.edits.dir</name>
>   <value>qjournal://namenode01:8485;namenode02:8485;datanode01:8485/saccluster</value>
> </property>
> <property>
>   <name>dfs.client.failover.proxy.provider.mycluster</name>
>   <value>org.apache.hadoop.hdfs.server.namenode.ha.ConfiguredFailoverProxyProvider</value>
> </property>
> In /usr/local/hadoop/etc/hadoop/core-site.xml the default FS is defined as per below:
> <property>
>   <name>fs.defaultFS</name>
>   <value>hdfs://saccluster</value>
> </property>
> In /usr/local/hadoop/etc/hadoop/hadoop-env.sh the following export is defined:
> export HADOOP_CONF_DIR="/usr/local/hadoop/etc/hadoop"
> Is "fs" trying to read these properties from somewhere else, such as a separate client configuration file?
> Apologies if I am missing something obvious here.
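Since the provider property key embeds the nameservice ID ({{dfs.client.failover.proxy.provider.<nameservice ID>}}), one quick way to catch this copy-paste mistake (a sketch, assuming the config path from this report) is to print the configured nameservices next to the provider keys and compare the suffixes:

{code}
# Each dfs.client.failover.proxy.provider.* key must end with a name
# listed under dfs.nameservices; here "mycluster" is the mismatch.
grep -A1 -E 'dfs\.nameservices|dfs\.client\.failover\.proxy\.provider' /usr/local/hadoop/etc/hadoop/hdfs-site.xml
{code}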
[jira] [Commented] (HDFS-12109) "fs" java.net.UnknownHostException when HA NameNode is used
[ https://issues.apache.org/jira/browse/HDFS-12109?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16083520#comment-16083520 ]

Luigi Di Fraia commented on HDFS-12109:
---------------------------------------

Thanks, [~surendrasingh]. Appreciate your help with this. Indeed, it was the property name that was carrying the wrong nameservice ID.
[jira] [Commented] (HDFS-12109) "fs" java.net.UnknownHostException when HA NameNode is used
[ https://issues.apache.org/jira/browse/HDFS-12109?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16083409#comment-16083409 ]

Surendra Singh Lilhore commented on HDFS-12109:
-----------------------------------------------

[~luigidifraia], based on the description, I think you configured one property incorrectly:

{code}
<property>
  <name>dfs.client.failover.proxy.provider.mycluster</name>
  <value>org.apache.hadoop.hdfs.server.namenode.ha.ConfiguredFailoverProxyProvider</value>
</property>
{code}

This property carries the wrong nameservice ID (*mycluster*); it should be *saccluster*. So your configuration should look like this:

{code}
<property>
  <name>dfs.client.failover.proxy.provider.saccluster</name>
  <value>org.apache.hadoop.hdfs.server.namenode.ha.ConfiguredFailoverProxyProvider</value>
</property>
{code}
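After correcting the property name, one way to confirm the client actually resolves the HA settings (a quick sketch, assuming the install path and nameservice ID from this report) is {{hdfs getconf}}:

{code}
# Should print ConfiguredFailoverProxyProvider for the saccluster nameservice
/usr/local/hadoop/bin/hdfs getconf -confKey dfs.client.failover.proxy.provider.saccluster

# Should list namenode01 and namenode02 for the configured nameservice
/usr/local/hadoop/bin/hdfs getconf -namenodes

# The originally failing command should now resolve hdfs://saccluster
/usr/local/hadoop/bin/hdfs dfs -ls /
{code}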
[jira] [Commented] (HDFS-12109) "fs" java.net.UnknownHostException when HA NameNode is used
[ https://issues.apache.org/jira/browse/HDFS-12109?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16081872#comment-16081872 ]

Luigi Di Fraia commented on HDFS-12109:
---------------------------------------

It is probably also worth mentioning that I am trying to use the HA NameNode setup with Accumulo 1.8.1, and I am hitting the same problem there (the nameservice being used as if it were a hostname, as in a non-HA NameNode setup) when I try to init Accumulo or list volumes, as per below:

{code}
[accumulo@namenode01 ~]$ /usr/local/accumulo/bin/accumulo admin volumes --list
2017-07-11 09:24:52,380 [start.Main] ERROR: Problem initializing the class loader
java.lang.reflect.InvocationTargetException
        at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
        at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
        at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
        at java.lang.reflect.Method.invoke(Method.java:498)
        at org.apache.accumulo.start.Main.getClassLoader(Main.java:94)
        at org.apache.accumulo.start.Main.main(Main.java:47)
Caused by: java.lang.IllegalArgumentException: java.net.UnknownHostException: saccluster
        at org.apache.hadoop.security.SecurityUtil.buildTokenService(SecurityUtil.java:417)
        at org.apache.hadoop.hdfs.NameNodeProxiesClient.createProxyWithClientProtocol(NameNodeProxiesClient.java:130)
        at org.apache.hadoop.hdfs.DFSClient.<init>(DFSClient.java:343)
        at org.apache.hadoop.hdfs.DFSClient.<init>(DFSClient.java:287)
        at org.apache.hadoop.hdfs.DistributedFileSystem.initialize(DistributedFileSystem.java:156)
        at org.apache.hadoop.fs.FileSystem.createFileSystem(FileSystem.java:2811)
        at org.apache.hadoop.fs.FileSystem.access$200(FileSystem.java:100)
        at org.apache.hadoop.fs.FileSystem$Cache.getInternal(FileSystem.java:2848)
        at org.apache.hadoop.fs.FileSystem$Cache.get(FileSystem.java:2830)
        at org.apache.hadoop.fs.FileSystem.get(FileSystem.java:389)
        at org.apache.hadoop.fs.FileSystem.get(FileSystem.java:181)
        at org.apache.commons.vfs2.provider.hdfs.HdfsFileSystem.resolveFile(HdfsFileSystem.java:164)
        at org.apache.commons.vfs2.provider.AbstractOriginatingFileProvider.findFile(AbstractOriginatingFileProvider.java:84)
        at org.apache.commons.vfs2.provider.AbstractOriginatingFileProvider.findFile(AbstractOriginatingFileProvider.java:64)
        at org.apache.commons.vfs2.impl.DefaultFileSystemManager.resolveFile(DefaultFileSystemManager.java:804)
        at org.apache.commons.vfs2.impl.DefaultFileSystemManager.resolveFile(DefaultFileSystemManager.java:760)
        at org.apache.commons.vfs2.impl.DefaultFileSystemManager.resolveFile(DefaultFileSystemManager.java:709)
        at org.apache.accumulo.start.classloader.vfs.AccumuloVFSClassLoader.resolve(AccumuloVFSClassLoader.java:141)
        at org.apache.accumulo.start.classloader.vfs.AccumuloVFSClassLoader.resolve(AccumuloVFSClassLoader.java:121)
        at org.apache.accumulo.start.classloader.vfs.AccumuloVFSClassLoader.getClassLoader(AccumuloVFSClassLoader.java:211)
{code}

It was due to the above exception that I went back one step and tried file-system commands against HDFS directly. The NameNode Web UI on the active NameNode (http://namenode01:50070/dfshealth.html#tab-overview) picks up the HA NameNode configuration just fine and shows the Namespace as expected, saccluster. As a side note, without the HA NameNode setup everything has been working fine for me for quite some time, including using Accumulo with HDFS. It seems like something is missing in the way the HA NameNode properties are exposed to clients.
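The UnknownHostException itself is consistent with this: when no failover proxy provider is found for a nameservice, the client falls back to treating the URI authority as a plain hostname and asks DNS for it. A quick check (a sketch, assuming the hostnames from this report):

{code}
# The logical nameservice is not a resolvable host, so this lookup fails...
getent hosts saccluster

# ...while the actual NameNode hosts resolve normally
getent hosts namenode01 namenode02
{code}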
> "fs" java.net.UnknownHostException when HA NameNode is used > --- > > Key: HDFS-12109 > URL: https://issues.apache.org/jira/browse/HDFS-12109 > Project: Hadoop HDFS > Issue Type: Bug > Components: fs >Affects Versions: 2.8.0 > Environment: [hadoop@namenode01 ~]$ cat /etc/redhat-release > CentOS Linux release 7.3.1611 (Core) > [hadoop@namenode01 ~]$ uname -a > Linux namenode01 3.10.0-514.10.2.el7.x86_64 #1 SMP Fri Mar 3 00:04:05 UTC > 2017 x86_64 x86_64 x86_64 GNU/Linux > [hadoop@namenode01 ~]$ java -version > java version "1.8.0_131" > Java(TM) SE Runtime Environment (build 1.8.0_131-b11) > Java HotSpot(TM) 64-Bit Server VM (build 25.131-b11, mixed mode) >Reporter: Luigi Di Fraia > > After setting up an HA NameNode configuration, the following invocation of > "fs" fails: > [hadoop@namenode01 ~]$ /usr/local/hadoop/bin/hdfs dfs -ls / > -ls: java.net.UnknownHostException: saccluster > It works if properties are defined as per below: > /usr/local/hadoop/bin/hdfs dfs -Ddfs.nameservices=saccluster >
[jira] [Commented] (HDFS-12109) "fs" java.net.UnknownHostException when HA NameNode is used
[ https://issues.apache.org/jira/browse/HDFS-12109?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16081740#comment-16081740 ]

Luigi Di Fraia commented on HDFS-12109:
---------------------------------------

Thanks for your reply, [~aw]. I exported the variables as per below for testing purposes:

{code}
[hadoop@namenode01 ~]$ export HADOOP_PREFIX=/usr/local/hadoop
[hadoop@namenode01 ~]$ export HADOOP_CONF_DIR=$HADOOP_PREFIX/etc/hadoop
{code}

However, the issue persists. What I'd like to underline is that part of the configuration does seem to be visible to the file-system tools, based on the exception I get:

{code}
[hadoop@namenode01 ~]$ /usr/local/hadoop/bin/hdfs dfs -ls /
-ls: java.net.UnknownHostException: saccluster
{code}

Indeed, "saccluster" is the nameservice I configured and the default FS.
[jira] [Commented] (HDFS-12109) "fs" java.net.UnknownHostException when HA NameNode is used
[ https://issues.apache.org/jira/browse/HDFS-12109?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16080591#comment-16080591 ]

Allen Wittenauer commented on HDFS-12109:
-----------------------------------------

The HADOOP_CONF_DIR environment variable is how the shell scripts find where hadoop-env.sh is located. Given what I can infer from your description, Hadoop 3.x would work fine because it can auto-determine where everything is located based upon the location of the executable. But Hadoop 2.x has a lot of bugs, so it needs (minimally) HADOOP_PREFIX defined outside of the shell script code. If that is defined, it should know where everything is located, including auto-defining HADOOP_CONF_DIR to be HADOOP_PREFIX/etc/hadoop.
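For a persistent 2.x setup, a minimal sketch (assuming the install path from this report) is to export the variables in the hadoop user's shell profile and then confirm the configuration directory lands on the client classpath:

{code}
# ~/.bashrc (or equivalent) for the hadoop user
export HADOOP_PREFIX=/usr/local/hadoop
export HADOOP_CONF_DIR=$HADOOP_PREFIX/etc/hadoop

# 'hadoop classpath' prints the classpath the CLI tools will use;
# the configuration directory should appear among the first entries
/usr/local/hadoop/bin/hadoop classpath | tr ':' '\n' | head -n 3
{code}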