[ 
https://issues.apache.org/jira/browse/HDFS-2893?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Eli Collins updated HDFS-2893:
------------------------------

    Description: 
HDFS-1703 changed the behavior of the start/stop scripts so that the masters 
file is no longer used to indicate which hosts to start the 2NN on. The 2NN is 
now started, when using start-dfs.sh, on hosts only when 
dfs.namenode.secondary.http-address is configured with a non-wildcard IP. This 
means you cannot start a 2NN using an http-address specified with a wildcard 
IP. We should allow a 2NN to be started with the default config, i.e. 
start-dfs.sh should start a NN, 2NN, and DN. The packaging already works this 
way (it doesn't use start-dfs.sh; it uses hadoop-daemon.sh directly without 
first checking getconf), so let's bring start-dfs.sh in line with this behavior.
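
For reference, the packaging-style startup described above could be sketched 
roughly as follows. This is only illustrative: the exact paths and flags of the 
hadoop-daemon.sh invocation here are assumptions, not the proposed patch.

{noformat}
# Illustrative sketch: start the 2NN unconditionally from start-dfs.sh,
# the way the packaging scripts do, instead of first gating on the output
# of "hdfs getconf -secondarynamenodes".
"$HADOOP_PREFIX/sbin/hadoop-daemon.sh" --config "$HADOOP_CONF_DIR" \
    start secondarynamenode
{noformat}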



  was:
Looks like DFSUtil address matching doesn't find a match if the http-address is 
specified using a wildcard IP and a port. It should return 0.0.0.0:50090 in 
this case, which would allow the 2NN to start.

Also, unless http-address is explicitly configured in hdfs-site.xml the 2NN 
will not start, since DFSUtil#getSecondaryNameNodeAddresses does not use the 
default value as a fallback. That may be confusing to people who expect the 
default value to be used.

{noformat}
hadoop-0.23.1-SNAPSHOT $ cat /home/eli/hadoop/conf3/hdfs-site.xml
...
  <property>
    <name>dfs.namenode.secondary.http-address</name>
    <value>0.0.0.0:50090</value>
  </property>
</configuration>

hadoop-0.23.1-SNAPSHOT $ ./bin/hdfs --config ~/hadoop/conf3 getconf 
-secondarynamenodes
0.0.0.0
hadoop-0.23.1-SNAPSHOT $ ./sbin/start-dfs.sh 
Starting namenodes on [localhost]
localhost: starting namenode, logging to 
/home/eli/hadoop/dirs3/logs/eli/hadoop-eli-namenode-eli-thinkpad.out
localhost: starting datanode, logging to 
/home/eli/hadoop/dirs3/logs/eli/hadoop-eli-datanode-eli-thinkpad.out
Secondary namenodes are not configured.  Cannot start secondary namenodes.
{noformat}

This works if, e.g., localhost:50090 is used.

We should also update the hdfs user guide to remove the reference to the 
masters file since it's no longer used to configure which hosts the 2NN runs on.

       Assignee: Eli Collins
        Summary: start-dfs.sh won't start the 2NN if 
dfs.namenode.secondary.http-address is default or specified with a wildcard IP  
(was: The 2NN won't start if dfs.namenode.secondary.http-address is default or 
specified with a wildcard IP and port)
    
> start-dfs.sh won't start the 2NN if dfs.namenode.secondary.http-address is 
> default or specified with a wildcard IP
> ------------------------------------------------------------------------------------------------------------------
>
>                 Key: HDFS-2893
>                 URL: https://issues.apache.org/jira/browse/HDFS-2893
>             Project: Hadoop HDFS
>          Issue Type: Bug
>    Affects Versions: 0.23.1
>            Reporter: Eli Collins
>            Assignee: Eli Collins
>            Priority: Critical
>
> HDFS-1703 changed the behavior of the start/stop scripts so that the masters 
> file is no longer used to indicate which hosts to start the 2NN on. The 2NN 
> is now started, when using start-dfs.sh, on hosts only when 
> dfs.namenode.secondary.http-address is configured with a non-wildcard IP. 
> This means you cannot start a 2NN using an http-address specified with a 
> wildcard IP. We should allow a 2NN to be started with the default config, 
> i.e. start-dfs.sh should start a NN, 2NN, and DN. The packaging already works 
> this way (it doesn't use start-dfs.sh; it uses hadoop-daemon.sh directly 
> without first checking getconf), so let's bring start-dfs.sh in line with 
> this behavior.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira

        
