[jira] [Commented] (HDFS-7684) The host:port settings of dfs.namenode.secondary.http-address should be trimmed before use

2015-02-12 Thread Akira AJISAKA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-7684?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14318782#comment-14318782
 ] 

Akira AJISAKA commented on HDFS-7684:
-

Looking around the code, I think it would make more sense to trim the parameter 
values in {{NetUtils.createSocketAddr}}. What do you think, [~cnauroth] and [~anu]?
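
Something along these lines is what I have in mind (just a hypothetical sketch, not 
the actual {{NetUtils}} change; only the {{NetUtils.createSocketAddr(String, int)}} 
call is an existing API):
{code}
import java.net.InetSocketAddress;

import org.apache.hadoop.net.NetUtils;

// Hypothetical helper, for illustration only: trim the configured value before
// delegating to the existing parser, so a trailing space in a value such as
// "myhostname:50090 " no longer triggers
// "Does not contain a valid host:port authority".
public final class TrimmedSocketAddr {
  private TrimmedSocketAddr() {}

  public static InetSocketAddress create(String target, int defaultPort) {
    String trimmed = (target == null) ? null : target.trim();
    return NetUtils.createSocketAddr(trimmed, defaultPort);
  }
}
{code}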

 The host:port settings of dfs.namenode.secondary.http-address should be 
 trimmed before use
 --

 Key: HDFS-7684
 URL: https://issues.apache.org/jira/browse/HDFS-7684
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: namenode
Affects Versions: 2.4.1, 2.5.1
Reporter: Tianyin Xu
Assignee: Anu Engineer
 Attachments: HDFS-7684.003.patch, HDFS.7684.001.patch, 
 HDFS.7684.002.patch


 With the following setting,
 <property>
   <name>dfs.namenode.secondary.http-address</name>
   <value>myhostname:50090 </value>
 </property>
 the secondary NameNode could not be started:
 $ hadoop-daemon.sh start secondarynamenode
 starting secondarynamenode, logging to 
 /home/hadoop/hadoop-2.4.1/logs/hadoop-hadoop-secondarynamenode-xxx.out
 /home/hadoop/hadoop-2.4.1/bin/hdfs
 Exception in thread "main" java.lang.IllegalArgumentException: Does not contain a valid host:port authority: myhostname:50090
   at org.apache.hadoop.net.NetUtils.createSocketAddr(NetUtils.java:196)
   at org.apache.hadoop.net.NetUtils.createSocketAddr(NetUtils.java:163)
   at org.apache.hadoop.net.NetUtils.createSocketAddr(NetUtils.java:152)
   at org.apache.hadoop.hdfs.server.namenode.SecondaryNameNode.getHttpAddress(SecondaryNameNode.java:203)
   at org.apache.hadoop.hdfs.server.namenode.SecondaryNameNode.initialize(SecondaryNameNode.java:214)
   at org.apache.hadoop.hdfs.server.namenode.SecondaryNameNode.<init>(SecondaryNameNode.java:192)
   at org.apache.hadoop.hdfs.server.namenode.SecondaryNameNode.main(SecondaryNameNode.java:651)
 We were really confused and misled by the log message: we suspected DNS problems 
 (switching to the IP address did not help) and network problems (testing the 
 connections did not help either).
 It turned out that the setting is not trimmed, and the extra space character at 
 the end of the value caused the problem.
 Searching on the Internet, we found we are really not alone. Many users have 
 encountered similar trimming problems; the following lists a few:
 http://solaimurugan.blogspot.com/2013/10/hadoop-multi-node-cluster-configuration.html
 http://stackoverflow.com/questions/11263664/error-while-starting-the-hadoop-using-strat-all-sh
 https://issues.apache.org/jira/browse/HDFS-2799
 https://issues.apache.org/jira/browse/HBASE-6973





[jira] [Commented] (HDFS-7684) The host:port settings of dfs.namenode.secondary.http-address should be trimmed before use

2015-02-12 Thread Chris Nauroth (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-7684?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14318882#comment-14318882
 ] 

Chris Nauroth commented on HDFS-7684:
-

Trimming in {{NetUtils}} might be a valuable additional thing to do in its own 
independent patch, but I also think trimming when getting out of the 
{{Configuration}} is helpful.  This can make the code more robust if any logic 
needs to be applied to the configuration value before it gets passed to 
{{NetUtils}}.  One example is {{NameNode#getServiceAddress}}, which needs to 
check for an empty string.
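
Roughly this kind of pattern (an illustrative sketch only, not the actual 
{{NameNode#getServiceAddress}} source; fallback and error handling are omitted):
{code}
import java.net.InetSocketAddress;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hdfs.DFSConfigKeys;
import org.apache.hadoop.net.NetUtils;

// Illustrative sketch: reading with getTrimmed means the empty-string check
// and the later parsing both see the cleaned-up value.
public final class ServiceAddressExample {
  private ServiceAddressExample() {}

  public static InetSocketAddress getServiceAddress(Configuration conf) {
    String addr = conf.getTrimmed(
        DFSConfigKeys.DFS_NAMENODE_SERVICE_RPC_ADDRESS_KEY, "");
    if (addr.isEmpty()) {
      addr = conf.getTrimmed(DFSConfigKeys.DFS_NAMENODE_RPC_ADDRESS_KEY);
    }
    return NetUtils.createSocketAddr(addr);
  }
}
{code}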

Anu, thank you for the patch.  I have just a few minor formatting nitpicks in 
{{TestMalformedURLs}}.  In {{testTryStartingCluster}}, there is a typo: 
"configration".  Then, in {{tearDown}}, the {{cluster.shutdown()}} line needs 
to be indented by 2 spaces instead of 4.  I'll be +1 after those changes.



[jira] [Commented] (HDFS-7684) The host:port settings of dfs.namenode.secondary.http-address should be trimmed before use

2015-02-12 Thread Akira AJISAKA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-7684?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14318905#comment-14318905
 ] 

Akira AJISAKA commented on HDFS-7684:
-

bq. I also think trimming when getting out of the Configuration is helpful. 
This can make the code more robust if any logic needs to be applied to the 
configuration value before it gets passed to NetUtils. 
I agree. One minor comment: would you simply use {{assertNotEquals}} instead of 
{{assertTrue}} in the test?
{code}
+    assertTrue(config.get(DFSConfigKeys.DFS_NAMENODE_HTTP_ADDRESS_KEY)
+        != config.getTrimmed(DFSConfigKeys.DFS_NAMENODE_HTTP_ADDRESS_KEY));
{code}
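
Something like this (assuming a static import of JUnit's {{assertNotEquals}} in the test):
{code}
+    // Sketch of the suggested assertion: assertNotEquals reports both values
+    // on failure, unlike the boolean-only assertTrue check above.
+    assertNotEquals(config.get(DFSConfigKeys.DFS_NAMENODE_HTTP_ADDRESS_KEY),
+        config.getTrimmed(DFSConfigKeys.DFS_NAMENODE_HTTP_ADDRESS_KEY));
{code}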



[jira] [Commented] (HDFS-7684) The host:port settings of dfs.namenode.secondary.http-address should be trimmed before use

2015-02-12 Thread Akira AJISAKA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-7684?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14318804#comment-14318804
 ] 

Akira AJISAKA commented on HDFS-7684:
-

I think that if the parameters were trimmed in {{NetUtils}}, the problem in 
HADOOP-9869 would also be fixed.



[jira] [Commented] (HDFS-7684) The host:port settings of dfs.namenode.secondary.http-address should be trimmed before use

2015-02-12 Thread Anu Engineer (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-7684?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14319042#comment-14319042
 ] 

Anu Engineer commented on HDFS-7684:


Addressed comments from [~cnauroth] and [~ajisakaa].



[jira] [Commented] (HDFS-7684) The host:port settings of dfs.namenode.secondary.http-address should be trimmed before use

2015-02-12 Thread Akira AJISAKA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-7684?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14318961#comment-14318961
 ] 

Akira AJISAKA commented on HDFS-7684:
-

bq. Trimming in NetUtils might be a valuable additional thing to do in its own 
independent patch
Filed HADOOP-11589.



[jira] [Commented] (HDFS-7684) The host:port settings of dfs.namenode.secondary.http-address should be trimmed before use

2015-02-12 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-7684?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14319377#comment-14319377
 ] 

Hadoop QA commented on HDFS-7684:
-

{color:green}+1 overall{color}.  Here are the results of testing the latest 
attachment 
  http://issues.apache.org/jira/secure/attachment/12698533/HDFS-7684.004.patch
  against trunk revision 6f5290b.

{color:green}+1 @author{color}.  The patch does not contain any @author 
tags.

{color:green}+1 tests included{color}.  The patch appears to include 2 new 
or modified test files.

{color:green}+1 javac{color}.  The applied patch does not increase the 
total number of javac compiler warnings.

{color:green}+1 javadoc{color}.  There were no new javadoc warning messages.

{color:green}+1 eclipse:eclipse{color}.  The patch built with 
eclipse:eclipse.

{color:green}+1 findbugs{color}.  The patch does not introduce any new 
Findbugs (version 2.0.3) warnings.

{color:green}+1 release audit{color}.  The applied patch does not increase 
the total number of release audit warnings.

{color:green}+1 core tests{color}.  The patch passed unit tests in 
hadoop-hdfs-project/hadoop-hdfs.

Test results: 
https://builds.apache.org/job/PreCommit-HDFS-Build/9564//testReport/
Console output: https://builds.apache.org/job/PreCommit-HDFS-Build/9564//console

This message is automatically generated.



[jira] [Commented] (HDFS-7684) The host:port settings of dfs.namenode.secondary.http-address should be trimmed before use

2015-02-12 Thread Akira AJISAKA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-7684?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14319386#comment-14319386
 ] 

Akira AJISAKA commented on HDFS-7684:
-

+1. Thanks [~anu] for the contribution, and thanks [~cnauroth] for the review. 
Committing this.



[jira] [Commented] (HDFS-7684) The host:port settings of dfs.namenode.secondary.http-address should be trimmed before use

2015-02-11 Thread Chris Nauroth (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-7684?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14316665#comment-14316665
 ] 

Chris Nauroth commented on HDFS-7684:
-

The Findbugs warning is likely unrelated; it was fixed last night in HDFS-7754.  I 
submitted another Jenkins run:

https://builds.apache.org/job/PreCommit-HDFS-Build/9542/



[jira] [Commented] (HDFS-7684) The host:port settings of dfs.namenode.secondary.http-address should be trimmed before use

2015-02-11 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-7684?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14316963#comment-14316963
 ] 

Hadoop QA commented on HDFS-7684:
-

{color:red}-1 overall{color}.  Here are the results of testing the latest 
attachment 
  http://issues.apache.org/jira/secure/attachment/12697953/HDFS-7684.003.patch
  against trunk revision 22441ab.

{color:green}+1 @author{color}.  The patch does not contain any @author 
tags.

{color:green}+1 tests included{color}.  The patch appears to include 2 new 
or modified test files.

{color:green}+1 javac{color}.  The applied patch does not increase the 
total number of javac compiler warnings.

{color:green}+1 javadoc{color}.  There were no new javadoc warning messages.

{color:green}+1 eclipse:eclipse{color}.  The patch built with 
eclipse:eclipse.

{color:green}+1 findbugs{color}.  The patch does not introduce any new 
Findbugs (version 2.0.3) warnings.

{color:green}+1 release audit{color}.  The applied patch does not increase 
the total number of release audit warnings.

{color:red}-1 core tests{color}.  The patch failed these unit tests in 
hadoop-hdfs-project/hadoop-hdfs:

  org.apache.hadoop.hdfs.server.namenode.ha.TestRetryCacheWithHA

Test results: 
https://builds.apache.org/job/PreCommit-HDFS-Build/9542//testReport/
Console output: https://builds.apache.org/job/PreCommit-HDFS-Build/9542//console

This message is automatically generated.



[jira] [Commented] (HDFS-7684) The host:port settings of dfs.namenode.secondary.http-address should be trimmed before use

2015-02-10 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-7684?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14315730#comment-14315730
 ] 

Hadoop QA commented on HDFS-7684:
-

{color:red}-1 overall{color}.  Here are the results of testing the latest 
attachment 
  http://issues.apache.org/jira/secure/attachment/12697953/HDFS-7684.003.patch
  against trunk revision 7c6b654.

{color:green}+1 @author{color}.  The patch does not contain any @author 
tags.

{color:green}+1 tests included{color}.  The patch appears to include 2 new 
or modified test files.

{color:green}+1 javac{color}.  The applied patch does not increase the 
total number of javac compiler warnings.

{color:green}+1 javadoc{color}.  There were no new javadoc warning messages.

{color:green}+1 eclipse:eclipse{color}.  The patch built with 
eclipse:eclipse.

{color:red}-1 findbugs{color}.  The patch appears to introduce 1 new 
Findbugs (version 2.0.3) warnings.

{color:green}+1 release audit{color}.  The applied patch does not increase 
the total number of release audit warnings.

{color:green}+1 core tests{color}.  The patch passed unit tests in 
hadoop-hdfs-project/hadoop-hdfs.

Test results: 
https://builds.apache.org/job/PreCommit-HDFS-Build/9529//testReport/
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HDFS-Build/9529//artifact/patchprocess/newPatchFindbugsWarningshadoop-hdfs.html
Console output: https://builds.apache.org/job/PreCommit-HDFS-Build/9529//console

This message is automatically generated.



[jira] [Commented] (HDFS-7684) The host:port settings of dfs.namenode.secondary.http-address should be trimmed before use

2015-02-09 Thread Anu Engineer (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-7684?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14312535#comment-14312535
 ] 

Anu Engineer commented on HDFS-7684:


The failure does not seem to be related to this change set.

Thanks
Anu




[jira] [Commented] (HDFS-7684) The host:port settings of dfs.namenode.secondary.http-address should be trimmed before use

2015-02-09 Thread Akira AJISAKA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-7684?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14313052#comment-14313052
 ] 

Akira AJISAKA commented on HDFS-7684:
-

bq. 2) Fixed the Tabs side
I didn't see the indentation changed to 2 spaces. Would you please update the 
patch?
Also, would you remove some empty lines from the test? One empty line is 
sufficient.
{code}
+import org.apache.hadoop.hdfs.DFSConfigKeys;
+
+
+
+
+
+public class TestMalformedURLs {
{code}
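
That is, a single blank line would be enough there, e.g.:
{code}
+import org.apache.hadoop.hdfs.DFSConfigKeys;
+
+public class TestMalformedURLs {
{code}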



[jira] [Commented] (HDFS-7684) The host:port settings of dfs.namenode.secondary.http-address should be trimmed before use

2015-02-06 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-7684?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14310081#comment-14310081
 ] 

Hadoop QA commented on HDFS-7684:
-

{color:red}-1 overall{color}.  Here are the results of testing the latest 
attachment 
  http://issues.apache.org/jira/secure/attachment/12697096/HDFS.7684.001.patch
  against trunk revision eaab959.

{color:green}+1 @author{color}.  The patch does not contain any @author 
tags.

{color:green}+1 tests included{color}.  The patch appears to include 2 new 
or modified test files.

{color:green}+1 javac{color}.  The applied patch does not increase the 
total number of javac compiler warnings.

{color:green}+1 javadoc{color}.  There were no new javadoc warning messages.

{color:green}+1 eclipse:eclipse{color}.  The patch built with 
eclipse:eclipse.

{color:green}+1 findbugs{color}.  The patch does not introduce any new 
Findbugs (version 2.0.3) warnings.

{color:red}-1 release audit{color}.  The applied patch generated 1 
release audit warnings.

{color:red}-1 core tests{color}.  The patch failed these unit tests in 
hadoop-hdfs-project/hadoop-hdfs:

  org.apache.hadoop.hdfs.server.balancer.TestBalancer
  org.apache.hadoop.hdfs.TestLeaseRecovery2

Test results: 
https://builds.apache.org/job/PreCommit-HDFS-Build/9464//testReport/
Release audit warnings: 
https://builds.apache.org/job/PreCommit-HDFS-Build/9464//artifact/patchprocess/patchReleaseAuditProblems.txt
Console output: https://builds.apache.org/job/PreCommit-HDFS-Build/9464//console

This message is automatically generated.



[jira] [Commented] (HDFS-7684) The host:port settings of dfs.namenode.secondary.http-address should be trimmed before use

2015-02-06 Thread Akira AJISAKA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-7684?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14310181#comment-14310181
 ] 

Akira AJISAKA commented on HDFS-7684:
-

Hi [~anu], thank you for creating the patch. I'm +1 for trimming the values.
bq. -1 release audit. The applied patch generated 1 release audit warnings.
Would you add a license header to TestMalformedURLs.java?
In addition, would you please change the indent to 2 spaces instead of 4 in 
TestMalformedURLs.java?



[jira] [Commented] (HDFS-7684) The host:port settings of dfs.namenode.secondary.http-address should be trimmed before use

2015-02-06 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-7684?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14310557#comment-14310557
 ] 

Hadoop QA commented on HDFS-7684:
-

{color:red}-1 overall{color}.  Here are the results of testing the latest 
attachment 
  http://issues.apache.org/jira/secure/attachment/12697202/HDFS.7684.002.patch
  against trunk revision 8de80ff.

{color:green}+1 @author{color}.  The patch does not contain any @author 
tags.

{color:green}+1 tests included{color}.  The patch appears to include 2 new 
or modified test files.

{color:green}+1 javac{color}.  The applied patch does not increase the 
total number of javac compiler warnings.

{color:green}+1 javadoc{color}.  There were no new javadoc warning messages.

{color:green}+1 eclipse:eclipse{color}.  The patch built with 
eclipse:eclipse.

{color:green}+1 findbugs{color}.  The patch does not introduce any new 
Findbugs (version 2.0.3) warnings.

{color:green}+1 release audit{color}.  The applied patch does not increase 
the total number of release audit warnings.

{color:red}-1 core tests{color}.  The patch failed these unit tests in 
hadoop-hdfs-project/hadoop-hdfs:

  org.apache.hadoop.cli.TestHDFSCLI

Test results: 
https://builds.apache.org/job/PreCommit-HDFS-Build/9480//testReport/
Console output: https://builds.apache.org/job/PreCommit-HDFS-Build/9480//console

This message is automatically generated.



[jira] [Commented] (HDFS-7684) The host:port settings of dfs.namenode.secondary.http-address should be trimmed before use

2015-01-30 Thread Xiaoyu Yao (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-7684?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14299229#comment-14299229
 ] 

Xiaoyu Yao commented on HDFS-7684:
--

Thanks [~tianyin] for reporting this. The one you hit can be fixed by changing 
the conf.get call to conf.getTrimmed.

{code}
final String httpsAddrString = conf.get(
    DFSConfigKeys.DFS_NAMENODE_SECONDARY_HTTPS_ADDRESS_KEY,
    DFSConfigKeys.DFS_NAMENODE_SECONDARY_HTTPS_ADDRESS_DEFAULT);
InetSocketAddress httpsAddr = NetUtils.createSocketAddr(httpsAddrString);
{code}
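
For comparison, the trimmed variant would look roughly like this (a sketch of the 
suggested change, not the committed patch):
{code}
// Sketch: getTrimmed strips leading/trailing whitespace from the configured
// value before it is handed to NetUtils.createSocketAddr.
final String httpsAddrString = conf.getTrimmed(
    DFSConfigKeys.DFS_NAMENODE_SECONDARY_HTTPS_ADDRESS_KEY,
    DFSConfigKeys.DFS_NAMENODE_SECONDARY_HTTPS_ADDRESS_DEFAULT);
InetSocketAddress httpsAddr = NetUtils.createSocketAddr(httpsAddrString);
{code}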

Searching for calls to NetUtils.createSocketAddr() in the HDFS code, I found many 
other places with similar untrimmed host:port issues, for example in 
DatanodeManager#DatanodeManager() below. I think we should fix them as well in 
this JIRA.

{code}
this.defaultXferPort = NetUtils.createSocketAddr(
    conf.get(DFSConfigKeys.DFS_DATANODE_ADDRESS_KEY,
        DFSConfigKeys.DFS_DATANODE_ADDRESS_DEFAULT)).getPort();
this.defaultInfoPort = NetUtils.createSocketAddr(
    conf.get(DFSConfigKeys.DFS_DATANODE_HTTP_ADDRESS_KEY,
        DFSConfigKeys.DFS_DATANODE_HTTP_ADDRESS_DEFAULT)).getPort();
this.defaultInfoSecurePort = NetUtils.createSocketAddr(
    conf.get(DFSConfigKeys.DFS_DATANODE_HTTPS_ADDRESS_KEY,
        DFSConfigKeys.DFS_DATANODE_HTTPS_ADDRESS_DEFAULT)).getPort();
this.defaultIpcPort = NetUtils.createSocketAddr(
    conf.get(DFSConfigKeys.DFS_DATANODE_IPC_ADDRESS_KEY,
        DFSConfigKeys.DFS_DATANODE_IPC_ADDRESS_DEFAULT)).getPort();
{code}



[jira] [Commented] (HDFS-7684) The host:port settings of dfs.namenode.secondary.http-address should be trimmed before use

2015-01-30 Thread Tianyin Xu (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-7684?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14299233#comment-14299233
 ] 

Tianyin Xu commented on HDFS-7684:
--

Yes, exactly. It seems that Hadoop has a bunch of such trimming issues that have 
bothered a number of users...

Thanks, Xiaoyu!

~t




--
This message was sent by Atlassian JIRA
(v6.3.4#6332)