Yes, I'm running the command on the master node.

Attached are the config files & the hosts file. I have changed only the IP
addresses, per company policy, so that the original addresses are not
shared.

The same config files & hosts file exist on all 3 nodes.
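
For completeness, one way to double-check that the copies really are identical on all 3 nodes is to compare checksums from the master, along these lines (a sketch only; the slave hostnames are placeholders and the config path assumes the default $HADOOP_HOME/etc/hadoop layout):

for h in master slave1 slave2; do          # placeholder hostnames
    ssh "$h" md5sum /etc/hosts "$HADOOP_HOME"/etc/hadoop/*-site.xml
done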

Thanks
Bhushan Pathak

On Thu, Apr 27, 2017 at 3:02 PM, Brahma Reddy Battula <brahmareddy.batt...@huawei.com> wrote:

> Are you sure that you are starting it on the same machine (master)?
>
>
>
> Please share “/etc/hosts” and the configuration files.
>
>
>
>
>
> Regards
>
> Brahma Reddy Battula
>
>
>
> *From:* Bhushan Pathak [mailto:bhushan.patha...@gmail.com]
> *Sent:* 27 April 2017 17:18
> *To:* user@hadoop.apache.org
> *Subject:* Fwd: Hadoop 2.7.3 cluster namenode not starting
>
>
>
> Hello
>
>
>
> I have a 3-node cluster where I have installed Hadoop 2.7.3. I have updated
> the core-site.xml, mapred-site.xml, slaves, hdfs-site.xml, yarn-site.xml,
> and hadoop-env.sh files with basic settings on all 3 nodes.
>
>
>
> When I execute start-dfs.sh on the master node, the namenode does not
> start. The logs contain the following error -
>
> 2017-04-27 14:17:57,166 ERROR org.apache.hadoop.hdfs.server.namenode.NameNode: Failed to start namenode.
> java.net.BindException: Problem binding to [master:51150] java.net.BindException: Cannot assign requested address; For more details see: http://wiki.apache.org/hadoop/BindException
>         at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
>         at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62)
>         at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
>         at java.lang.reflect.Constructor.newInstance(Constructor.java:423)
>         at org.apache.hadoop.net.NetUtils.wrapWithMessage(NetUtils.java:792)
>         at org.apache.hadoop.net.NetUtils.wrapException(NetUtils.java:721)
>         at org.apache.hadoop.ipc.Server.bind(Server.java:425)
>         at org.apache.hadoop.ipc.Server$Listener.<init>(Server.java:574)
>         at org.apache.hadoop.ipc.Server.<init>(Server.java:2215)
>         at org.apache.hadoop.ipc.RPC$Server.<init>(RPC.java:951)
>         at org.apache.hadoop.ipc.ProtobufRpcEngine$Server.<init>(ProtobufRpcEngine.java:534)
>         at org.apache.hadoop.ipc.ProtobufRpcEngine.getServer(ProtobufRpcEngine.java:509)
>         at org.apache.hadoop.ipc.RPC$Builder.build(RPC.java:796)
>         at org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.<init>(NameNodeRpcServer.java:345)
>         at org.apache.hadoop.hdfs.server.namenode.NameNode.createRpcServer(NameNode.java:674)
>         at org.apache.hadoop.hdfs.server.namenode.NameNode.initialize(NameNode.java:647)
>         at org.apache.hadoop.hdfs.server.namenode.NameNode.<init>(NameNode.java:812)
>         at org.apache.hadoop.hdfs.server.namenode.NameNode.<init>(NameNode.java:796)
>         at org.apache.hadoop.hdfs.server.namenode.NameNode.createNameNode(NameNode.java:1493)
>         at org.apache.hadoop.hdfs.server.namenode.NameNode.main(NameNode.java:1559)
> Caused by: java.net.BindException: Cannot assign requested address
>         at sun.nio.ch.Net.bind0(Native Method)
>         at sun.nio.ch.Net.bind(Net.java:433)
>         at sun.nio.ch.Net.bind(Net.java:425)
>         at sun.nio.ch.ServerSocketChannelImpl.bind(ServerSocketChannelImpl.java:223)
>         at sun.nio.ch.ServerSocketAdaptor.bind(ServerSocketAdaptor.java:74)
>         at org.apache.hadoop.ipc.Server.bind(Server.java:408)
>         ... 13 more
> 2017-04-27 14:17:57,171 INFO org.apache.hadoop.util.ExitUtil: Exiting with status 1
> 2017-04-27 14:17:57,176 INFO org.apache.hadoop.hdfs.server.namenode.NameNode: SHUTDOWN_MSG:
> /************************************************************
> SHUTDOWN_MSG: Shutting down NameNode at master/1.1.1.1
> ************************************************************/
>
>
>
>
>
>
>
> I have changed the port number multiple times, and every time I get the
> same error. How do I get past this?
>
>
>
>
>
>
>
> Thanks
>
> Bhushan Pathak
>
>
>
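
Since the BindException is "Cannot assign requested address" (not "Address already in use"), changing the port will not help if the address that "master" / fs.defaultFS resolves to is not assigned to any interface on this machine. Below is a sketch of checks that can be run on the master to narrow this down; the hostname and port are the placeholder values from above:

# What does "master" resolve to on this node?
getent hosts master
hostname -f

# Which addresses does this node actually own? The address used in
# fs.defaultFS must appear here, or the NameNode RPC server cannot bind.
ip -4 addr show | grep inet

# Is anything already listening on the configured port?
ss -tlnp | grep 51150 || echo "nothing listening on 51150"
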
Attachment: core-site.xml

<?xml version="1.0" encoding="UTF-8"?>
<?xml-stylesheet type="text/xsl" href="configuration.xsl"?>
<!--
  Licensed under the Apache License, Version 2.0 (the "License");
  you may not use this file except in compliance with the License.
  You may obtain a copy of the License at

    http://www.apache.org/licenses/LICENSE-2.0

  Unless required by applicable law or agreed to in writing, software
  distributed under the License is distributed on an "AS IS" BASIS,
  WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
  See the License for the specific language governing permissions and
  limitations under the License. See accompanying LICENSE file.
-->

<!-- Put site-specific property overrides in this file. -->

<configuration>
    <property>
        <name>fs.defaultFS</name>
        <value>hdfs://1.1.1.1:51150</value>
    </property>
</configuration>
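
Since fs.defaultFS is given here as a literal IP, the NameNode will try to bind its RPC listener to exactly that address on port 51150. A quick way to see the value the daemons actually pick up (sketch; assumes HADOOP_HOME points at the 2.7.3 install):

# Print the fs.defaultFS value the daemons actually see; the host part
# should match an address from `ip -4 addr show` on the master.
"$HADOOP_HOME"/bin/hdfs getconf -confKey fs.defaultFS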

Attachment: hadoop-env.sh
Description: Bourne shell script

Attachment: hdfs-site.xml

<?xml version="1.0" encoding="UTF-8"?>
<?xml-stylesheet type="text/xsl" href="configuration.xsl"?>
<!--
  Licensed under the Apache License, Version 2.0 (the "License");
  you may not use this file except in compliance with the License.
  You may obtain a copy of the License at

    http://www.apache.org/licenses/LICENSE-2.0

  Unless required by applicable law or agreed to in writing, software
  distributed under the License is distributed on an "AS IS" BASIS,
  WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
  See the License for the specific language governing permissions and
  limitations under the License. See accompanying LICENSE file.
-->

<!-- Put site-specific property overrides in this file. -->

<configuration>
    <property>
        <name>dfs.namenode.name.dir</name>
        <value>file:/mnt/hadoop_store/datanode</value>
    </property>
    <property>
        <name>dfs.datanode.name.dir</name>
        <value>file:/mnt/hadoop_store/namenode</value>
    </property>

</configuration>

Attachment: hosts
Description: Binary data

Attachment: mapred-site.xml

<?xml version="1.0"?>
<?xml-stylesheet type="text/xsl" href="configuration.xsl"?>
<!--
  Licensed under the Apache License, Version 2.0 (the "License");
  you may not use this file except in compliance with the License.
  You may obtain a copy of the License at

    http://www.apache.org/licenses/LICENSE-2.0

  Unless required by applicable law or agreed to in writing, software
  distributed under the License is distributed on an "AS IS" BASIS,
  WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
  See the License for the specific language governing permissions and
  limitations under the License. See accompanying LICENSE file.
-->

<!-- Put site-specific property overrides in this file. -->

<configuration>
    <property>
        <name>mapreduce.framework.name</name>
        <value>yarn</value>
    </property>
</configuration>

Attachment: slaves
Description: Binary data

Attachment: yarn-site.xml

<?xml version="1.0"?>
<!--
  Licensed under the Apache License, Version 2.0 (the "License");
  you may not use this file except in compliance with the License.
  You may obtain a copy of the License at

    http://www.apache.org/licenses/LICENSE-2.0

  Unless required by applicable law or agreed to in writing, software
  distributed under the License is distributed on an "AS IS" BASIS,
  WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
  See the License for the specific language governing permissions and
  limitations under the License. See accompanying LICENSE file.
-->
<configuration>

    <property>
        <name>yarn.resourcemanager.resource-tracker.address</name>
        <value>1.1.1.1:8025</value>
    </property>
    <property>
        <name>yarn.resourcemanager.scheduler.address</name>
        <value>1.1.1.1:8030</value>
    </property>
    <property>
        <name>yarn.resourcemanager.address</name>
        <value>1.1.1.1:8050</value>
    </property>
    <property>
        <name>yarn.nodemanager.aux-services</name>
        <value>mapreduce_shuffle</value>
    </property>
    <property>
        <name>yarn.nodemanager.aux-services.mapreduce.shuffle.class</name>
        <value>org.apache.hadoop.mapred.ShuffleHandler</value>
    </property>
    <property>
        <name>yarn.nodemanager.env-whitelist</name>
        <value>JAVA_HOME,HADOOP_COMMON_HOME,HADOOP_HDFS_HOME,HADOOP_CONF_DIR,CLASSPATH_PREPEND_DISTCACHE,HADOOP_YARN_HOME,HADOOP_MAPRED_HOME</value>
    </property>
    <property>
        <name>yarn.nodemanager.disk-health-checker.max-disk-utilization-per-disk-percentage</name>
        <value>99.5</value>
    </property>
    <property>
        <name>yarn.nodemanager.disk-health-checker.min_healthy_disks</name>
        <value>0</value>
    </property>


</configuration>