Re: Hadoop 2.7.3 cluster namenode not starting

2017-04-27 Thread Bhushan Pathak
Hello All,

1. The slave & master can ping each other as well as use passwordless SSH
2. The actual IP starts with 10.x.x.x; I have put a placeholder IP in the
shared config files because I cannot share the actual IP
3. The namenode is formatted. I executed 'hdfs namenode -format' again just to
rule out that possibility
4. I did not configure anything in the master file. I don't think Hadoop
2.7.3 has a master file to be configured
5. The netstat command [sudo netstat -tulpn | grep '51150' ] does not give
any output.

Even if I change the port number to a different one, say 52220, I still get
the same error.
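
A quick way to cross-check where the bind is failing (using the placeholder
hostname 'master' from the attached files; substitute your real names) is to
compare what the name resolves to with what is actually assigned to the NIC:

    getent hosts master         # what 'master' resolves to
    hostname -i                 # what the local host reports as its address
    ip addr show | grep 'inet'  # addresses actually assigned to the interfaces

If the address that 'master' resolves to does not appear in the 'ip addr'
output, the namenode cannot bind to it and fails with 'Cannot assign requested
address' no matter which port is used.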

Thanks
Bhushan Pathak

Thanks
Bhushan Pathak

On Fri, Apr 28, 2017 at 7:52 AM, Lei Cao  wrote:

> Hi Mr. Bhushan,
>
> Have you tried to format namenode?
> Here's the command:
> hdfs namenode -format
>
> I've encountered such problem as namenode cannot be started. This command
> line easily fixed my problem.
>
> Hope this can help you.
>
> Sincerely,
> Lei Cao
>
>
> On Apr 27, 2017, at 12:09, Brahma Reddy Battula <
> brahmareddy.batt...@huawei.com> wrote:
>
> Please check “hostname -i”.
>
> 1)  What’s configured in the “master” file? (You shared only the slaves file.)
>
> 2)  Are you able to “ping master”?
>
> 3)  Can you configure it like this and check once?
>
> 1.1.1.1 master
>
>
>
>
>
> Regards
>
> Brahma Reddy Battula
>
>
>
> *From:* Bhushan Pathak [mailto:bhushan.patha...@gmail.com
> ]
> *Sent:* 27 April 2017 18:16
> *To:* Brahma Reddy Battula
> *Cc:* user@hadoop.apache.org
> *Subject:* Re: Hadoop 2.7.3 cluster namenode not starting
>
>
>
> Some additional info -
>
> OS: CentOS 7
>
> RAM: 8GB
>
>
>
> Thanks
>
> Bhushan Pathak
>
>
> Thanks
>
> Bhushan Pathak
>
>
>
> On Thu, Apr 27, 2017 at 3:34 PM, Bhushan Pathak <
> bhushan.patha...@gmail.com> wrote:
>
> Yes, I'm running the command on the master node.
>
>
>
> Attached are the config files & the hosts file. I have updated the IP
> address only as per company policy, so that original IP addresses are not
> shared.
>
>
>
> The same config files & hosts file exist on all 3 nodes.
>
>
>
> Thanks
>
> Bhushan Pathak
>
>
> Thanks
>
> Bhushan Pathak
>
>
>
> On Thu, Apr 27, 2017 at 3:02 PM, Brahma Reddy Battula <
> brahmareddy.batt...@huawei.com> wrote:
>
> Are you sure that you are starting in same machine (master)..?
>
>
>
> Please share “/etc/hosts” and configuration files..
>
>
>
>
>
> Regards
>
> Brahma Reddy Battula
>
>
>
> *From:* Bhushan Pathak [mailto:bhushan.patha...@gmail.com]
> *Sent:* 27 April 2017 17:18
> *To:* user@hadoop.apache.org
> *Subject:* Fwd: Hadoop 2.7.3 cluster namenode not starting
>
>
>
> Hello
>
>
>
> I have a 3-node cluster where I have installed hadoop 2.7.3. I have
> updated core-site.xml, mapred-site.xml, slaves, hdfs-site.xml,
> yarn-site.xml, hadoop-env.sh files with basic settings on all 3 nodes.
>
>
>
> When I execute start-dfs.sh on the master node, the namenode does not
> start. The logs contain the following error -
>
> 2017-04-27 14:17:57,166 ERROR org.apache.hadoop.hdfs.server.namenode.NameNode:
> Failed to start namenode.
>
> java.net.BindException: Problem binding to [master:51150]
> java.net.BindException: Cannot assign requested address; For more details
> see:  http://wiki.apache.org/hadoop/BindException
>
> at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native
> Method)
>
> at sun.reflect.NativeConstructorAccessorImpl.newInstance(
> NativeConstructorAccessorImpl.java:62)
>
> at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(
> DelegatingConstructorAccessorImpl.java:45)
>
> at java.lang.reflect.Constructor.newInstance(Constructor.java:423)
>
> at org.apache.hadoop.net.NetUtils.wrapWithMessage(
> NetUtils.java:792)
>
> at org.apache.hadoop.net.NetUtils.wrapException(NetUtils.java:721)
>
> at org.apache.hadoop.ipc.Server.bind(Server.java:425)
>
> at org.apache.hadoop.ipc.Server$Listener.(Server.java:574)
>
> at org.apache.hadoop.ipc.Server.(Server.java:2215)
>
> at org.apache.hadoop.ipc.RPC$Server.(RPC.java:951)
>
> at org.apache.hadoop.ipc.ProtobufRpcEngine$Server.<
> init>(ProtobufRpcEngine.java:534)
>
> at org.apache.hadoop.ipc.ProtobufRpcEngine.getServer(
> ProtobufRpcEngine.java:509)
>
> at org.apache.hadoop.ipc.RPC$Builder.build(RPC.java:796)
>
> at org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.<
> init>(NameNodeRpcServer.java:345)
>
> at org.apache.hadoop.hdfs.server.namenode.NameNode.
> createRpcServer(NameNode.java:674)
>
> at org.apache.hadoop.hdfs.server.namenode.NameNode.initialize(
> NameNode.java:647)
>
> at org.apache.hadoop.hdfs.server.namenode.NameNode.(
> NameNode.java:812)
>
> at org.apache.hadoop.hdfs.server.namenode.NameNode.(
> NameNode.java:796)
>
> at org.apache.hadoop.hdfs.server.namenode.NameNode.
> 

Re: HDFS HA (Based on QJM) Failover Frequently with Large FSimage and Busy Requests

2017-04-27 Thread Chackravarthy Esakkimuthu
Client failures due to failover are handled seamlessly by client-side retries,
so you need not worry about that.

By increasing ha.health-monitor.rpc-timeout.ms to a slightly larger value, you
are only avoiding unnecessary failovers when the namenode is busy processing
other client/service requests. The larger timeout matters only when the
namenode is too busy to answer the ZKFC RPC calls; at other times, when the
active namenode shuts down for some reason, failover is still immediate and
does not wait for the configured timeout.
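
For reference, a minimal sketch of the two settings discussed in this thread.
The nameservice "mycluster", namenode id "nn1", host "nn1.example.com", port
8021 and the 90-second timeout are all placeholders; adjust them to your
cluster.

In hdfs-site.xml (with HA the property is suffixed per nameservice and
namenode id):

    <property>
      <name>dfs.namenode.servicerpc-address.mycluster.nn1</name>
      <value>nn1.example.com:8021</value>
    </property>

In core-site.xml on the ZKFC hosts:

    <property>
      <name>ha.health-monitor.rpc-timeout.ms</name>
      <value>90000</value>
    </property>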

On Thu, Apr 27, 2017 at 5:46 PM,  wrote:

> 1. Is service-rpc configured in namenode?
>
> Not yet. I have considered configuring servicerpc, but I was thinking about
> the possible disadvantages as well.
>
> When failover happens because of too many waiting RPCs, and zkfc gets normal
> service from another port, is it possible that the clients get a lot of
> failures?
>
>
> 2. ha.health-monitor.rpc-timeout.ms - Also consider increasing zkfc rpc
> call timeout to namenode.
>
> The same worry: is it possible that the clients get a lot of failures?
>
>
> Thanks very much,
>
> Doris
>
>
>
> 
> ---
>
>
> 1. Is service-rpc configured in namenode?
> (dfs.namenode.servicerpc-address - this will create another RPC server
> listening on another port (say 8021) to handle all service (non-client)
> requests and hence default rpc address (say 8020) will handle only client
> requests.)
>
> By doing this way, you would be able to decouple client and service
> requests. Here service requests corresponds to rpc calls from DN, ZKFC etc.
> Hence when cluster is too busy because of too many client operations, ZKFC
> requests will get processed by different rpc and hence need not wait in
> same queue as client requests.)
>
> 2. ha.health-monitor.rpc-timeout.ms - Also consider increasing zkfc rpc
> call timeout to namenode.
>
> By default this is 45 secs. You can consider increasing it to 1 or 2 mins
> depending upon your cluster usage.
>
> Thanks,
> Chackra
>
> On Wed, Apr 26, 2017 at 11:50 AM,  <gu.yiz...@zte.com.cn> wrote:
>
>>
>> *Hi All,*
>>
>> HDFS HA (Based on QJM) , 5 journalnodes, Apache 2.5.0 on Redhat 6.5
>> with JDK1.7.
>>
>> We put 1P+ of data into HDFS, with an FSimage of about 10G; as we keep making
>> more requests to this HDFS, the namenodes fail over frequently. I want to know
>> the following:
>>
>>
>> 1. ANN (active namenode) downloading fsimage.ckpt_* from SNN (standby
>> namenode) leads to very high disk IO; at the same time, zkfc fails to
>> monitor the health of the ANN due to timeout. Is there any relationship
>> between the high disk IO and the zkfc monitor request timeout? Every failover
>> happened during a ckpt download, but not every ckpt download led to failover.
>>
>>
>>
>> 2017-03-15 09:27:05,750 WARN org.apache.hadoop.ha.HealthMonitor:
>> Transport-level exception trying to monitor health of NameNode at
>> nn1/ip:8020: Call From nn1/ip to nn1:8020 failed on socket timeout
>> exception: java.net.SocketTimeoutException: 45000 millis timeout while
>> waiting for channel to be ready for read. ch :
>> java.nio.channels.SocketChannel[connected local=/ip:48536
>> remote=nn1/ip:8020]; For more details see:  http://wiki.apache.org/hadoop
>> /SocketTimeout
>>
>> 2017-03-15 09:27:05,750 INFO org.apache.hadoop.ha.HealthMonitor:
>> Entering state SERVICE_NOT_RESPONDING
>>
>>
>> 2. Due to SERVICE_NOT_RESPONDING, the other zkfc fences the old ANN
>> (sshfence is configured). Before it is restarted by my additional monitor,
>> the old ANN log sometimes looks like this; what is "Rescan of
>> postponedMisreplicatedBlocks"? Does it have any relationship with
>> failover?
>>
>> 2017-03-15 04:36:00,866 INFO org.apache.hadoop.hdfs.server.
>> blockmanagement.CacheReplicationMonitor: Rescanning after 3
>> milliseconds
>>
>> 2017-03-15 04:36:00,931 INFO org.apache.hadoop.hdfs.server.
>> blockmanagement.CacheReplicationMonitor: Scanned 0 directive(s) and 0
>> block(s) in 65 millisecond(s).
>>
>> 2017-03-15 04:36:01,127 INFO 
>> org.apache.hadoop.hdfs.server.blockmanagement.BlockManager:
>> Rescan of postponedMisreplicatedBlocks completed in 23 msecs. 247361 blocks
>> are left. 0 blocks are removed.
>>
>> 2017-03-15 04:36:04,145 INFO 
>> org.apache.hadoop.hdfs.server.blockmanagement.BlockManager:
>> Rescan of postponedMisreplicatedBlocks completed in 17 msecs. 247361 blocks
>> are left. 0 blocks are removed.
>>
>> 2017-03-15 04:36:07,159 INFO 
>> org.apache.hadoop.hdfs.server.blockmanagement.BlockManager:
>> Rescan of postponedMisreplicatedBlocks completed in 14 msecs. 247361 blocks
>> are left. 0 blocks are removed.
>>
>> 2017-03-15 04:36:10,173 INFO 
>> org.apache.hadoop.hdfs.server.blockmanagement.BlockManager:
>> Rescan of postponedMisreplicatedBlocks completed in 14 msecs. 247361 blocks
>> are left. 0 blocks are removed.
>>
>> 2017-03-15 04:36:13,188 INFO 
>> 

Re: Hadoop 2.7.3 cluster namenode not starting

2017-04-27 Thread Lei Cao
Hi Mr. Bhushan,

Have you tried to format namenode?
Here's the command:
hdfs namenode -format

I've encountered such problem as namenode cannot be started. This command line 
easily fixed my problem.

Hope this can help you.

Sincerely,
Lei Cao


On Apr 27, 2017, at 12:09, Brahma Reddy Battula 
> wrote:

Please check “hostname -i”.


1)  What’s configured in the “master” file? (You shared only the slaves file.)


2)  Are you able to “ping master”?


3)  Can you configure it like this and check once?
1.1.1.1 master


Regards
Brahma Reddy Battula

From: Bhushan Pathak [mailto:bhushan.patha...@gmail.com]
Sent: 27 April 2017 18:16
To: Brahma Reddy Battula
Cc: user@hadoop.apache.org
Subject: Re: Hadoop 2.7.3 cluster namenode not starting

Some additional info -
OS: CentOS 7
RAM: 8GB

Thanks
Bhushan Pathak

Thanks
Bhushan Pathak

On Thu, Apr 27, 2017 at 3:34 PM, Bhushan Pathak 
> wrote:
Yes, I'm running the command on the master node.

Attached are the config files & the hosts file. I have updated the IP address 
only as per company policy, so that original IP addresses are not shared.

The same config files & hosts file exist on all 3 nodes.

Thanks
Bhushan Pathak

Thanks
Bhushan Pathak

On Thu, Apr 27, 2017 at 3:02 PM, Brahma Reddy Battula 
> wrote:
Are you sure that you are starting in same machine (master)..?

Please share “/etc/hosts” and configuration files..


Regards
Brahma Reddy Battula

From: Bhushan Pathak 
[mailto:bhushan.patha...@gmail.com]
Sent: 27 April 2017 17:18
To: user@hadoop.apache.org
Subject: Fwd: Hadoop 2.7.3 cluster namenode not starting

Hello

I have a 3-node cluster where I have installed hadoop 2.7.3. I have updated 
core-site.xml, mapred-site.xml, slaves, hdfs-site.xml, yarn-site.xml, 
hadoop-env.sh files with basic settings on all 3 nodes.

When I execute start-dfs.sh on the master node, the namenode does not start. 
The logs contain the following error -
2017-04-27 14:17:57,166 ERROR org.apache.hadoop.hdfs.server.namenode.NameNode: 
Failed to start namenode.
java.net.BindException: Problem binding to [master:51150] 
java.net.BindException: Cannot assign requested address; For more details see:  
http://wiki.apache.org/hadoop/BindException
        at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
        at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62)
        at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
        at java.lang.reflect.Constructor.newInstance(Constructor.java:423)
        at org.apache.hadoop.net.NetUtils.wrapWithMessage(NetUtils.java:792)
        at org.apache.hadoop.net.NetUtils.wrapException(NetUtils.java:721)
        at org.apache.hadoop.ipc.Server.bind(Server.java:425)
        at org.apache.hadoop.ipc.Server$Listener.<init>(Server.java:574)
        at org.apache.hadoop.ipc.Server.<init>(Server.java:2215)
        at org.apache.hadoop.ipc.RPC$Server.<init>(RPC.java:951)
        at org.apache.hadoop.ipc.ProtobufRpcEngine$Server.<init>(ProtobufRpcEngine.java:534)
        at org.apache.hadoop.ipc.ProtobufRpcEngine.getServer(ProtobufRpcEngine.java:509)
        at org.apache.hadoop.ipc.RPC$Builder.build(RPC.java:796)
        at org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.<init>(NameNodeRpcServer.java:345)
        at org.apache.hadoop.hdfs.server.namenode.NameNode.createRpcServer(NameNode.java:674)
        at org.apache.hadoop.hdfs.server.namenode.NameNode.initialize(NameNode.java:647)
        at org.apache.hadoop.hdfs.server.namenode.NameNode.<init>(NameNode.java:812)
        at org.apache.hadoop.hdfs.server.namenode.NameNode.<init>(NameNode.java:796)
        at org.apache.hadoop.hdfs.server.namenode.NameNode.createNameNode(NameNode.java:1493)
        at org.apache.hadoop.hdfs.server.namenode.NameNode.main(NameNode.java:1559)
Caused by: java.net.BindException: Cannot assign requested address
        at sun.nio.ch.Net.bind0(Native Method)
        at sun.nio.ch.Net.bind(Net.java:433)
        at sun.nio.ch.Net.bind(Net.java:425)
        at sun.nio.ch.ServerSocketChannelImpl.bind(ServerSocketChannelImpl.java:223)
        at sun.nio.ch.ServerSocketAdaptor.bind(ServerSocketAdaptor.java:74)
        at org.apache.hadoop.ipc.Server.bind(Server.java:408)
        ... 13 more
2017-04-27 14:17:57,171 INFO org.apache.hadoop.util.ExitUtil: Exiting with 
status 1
2017-04-27 14:17:57,176 INFO org.apache.hadoop.hdfs.server.namenode.NameNode: 
SHUTDOWN_MSG:
/
SHUTDOWN_MSG: Shutting down NameNode at 

RE: unsubscribe

2017-04-27 Thread Brahma Reddy Battula

Kindly send an email to 
user-unsubscr...@hadoop.apache.org


-Brahma

From: shanker valipireddy [mailto:shanker.valipire...@gmail.com]
Sent: 28 April 2017 03:40
To: user-subscr...@hadoop.apache.org; gene...@hadoop.apache.org; user
Subject: unsubscribe



--
Thanks & Regards,
Shanker


unsubscribe

2017-04-27 Thread shanker valipireddy
-- 
Thanks & Regards,
Shanker


RE: Hadoop clustsr

2017-04-27 Thread Jon Morisi
I saw some HDP v1 sandboxes here fwiw: 
https://hortonworks.com/downloads/#sandbox (under archive).

From: Ahmed Altaj [mailto:ahmed.al...@yahoo.com.INVALID]
Sent: Thursday, April 27, 2017 9:04 AM
To: User 
Subject: Hadoop clustsr

Hi,

Does anyone have an image of a 3-node cluster of Hadoop version 1?

Best regards


Re: Hadoop 2.7.3 cluster namenode not starting

2017-04-27 Thread Hilmi Egemen Ciritoğlu
Can you check whether port 51150 is in use by another process:

sudo netstat -tulpn | grep '51150'
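
(If netstat is not available on the box, 'ss -tlnp | grep 51150' or
'lsof -i :51150' should show the same information.)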

Regards,
Egemen

2017-04-27 11:04 GMT+01:00 Bhushan Pathak :

> Yes, I'm running the command on the master node.
>
> Attached are the config files & the hosts file. I have updated the IP
> address only as per company policy, so that original IP addresses are not
> shared.
>
> The same config files & hosts file exist on all 3 nodes.
>
> Thanks
> Bhushan Pathak
>
> Thanks
> Bhushan Pathak
>
> On Thu, Apr 27, 2017 at 3:02 PM, Brahma Reddy Battula <
> brahmareddy.batt...@huawei.com> wrote:
>
>> Are you sure that you are starting in same machine (master)..?
>>
>>
>>
>> Please share “/etc/hosts” and configuration files..
>>
>>
>>
>>
>>
>> Regards
>>
>> Brahma Reddy Battula
>>
>>
>>
>> *From:* Bhushan Pathak [mailto:bhushan.patha...@gmail.com]
>> *Sent:* 27 April 2017 17:18
>> *To:* user@hadoop.apache.org
>> *Subject:* Fwd: Hadoop 2.7.3 cluster namenode not starting
>>
>>
>>
>> Hello
>>
>>
>>
>> I have a 3-node cluster where I have installed hadoop 2.7.3. I have
>> updated core-site.xml, mapred-site.xml, slaves, hdfs-site.xml,
>> yarn-site.xml, hadoop-env.sh files with basic settings on all 3 nodes.
>>
>>
>>
>> When I execute start-dfs.sh on the master node, the namenode does not
>> start. The logs contain the following error -
>>
>> 2017-04-27 14:17:57,166 ERROR 
>> org.apache.hadoop.hdfs.server.namenode.NameNode:
>> Failed to start namenode.
>>
>> java.net.BindException: Problem binding to [master:51150]
>> java.net.BindException: Cannot assign requested address; For more details
>> see:  http://wiki.apache.org/hadoop/BindException
>>
>> at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native
>> Method)
>>
>> at sun.reflect.NativeConstructorAccessorImpl.newInstance(Native
>> ConstructorAccessorImpl.java:62)
>>
>> at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(De
>> legatingConstructorAccessorImpl.java:45)
>>
>> at java.lang.reflect.Constructor.newInstance(Constructor.java:4
>> 23)
>>
>> at org.apache.hadoop.net.NetUtils.wrapWithMessage(NetUtils.
>> java:792)
>>
>> at org.apache.hadoop.net.NetUtils.wrapException(NetUtils.java:
>> 721)
>>
>> at org.apache.hadoop.ipc.Server.bind(Server.java:425)
>>
>> at org.apache.hadoop.ipc.Server$Listener.(Server.java:574)
>>
>> at org.apache.hadoop.ipc.Server.(Server.java:2215)
>>
>> at org.apache.hadoop.ipc.RPC$Server.(RPC.java:951)
>>
>> at org.apache.hadoop.ipc.ProtobufRpcEngine$Server.(
>> ProtobufRpcEngine.java:534)
>>
>> at org.apache.hadoop.ipc.ProtobufRpcEngine.getServer(ProtobufRp
>> cEngine.java:509)
>>
>> at org.apache.hadoop.ipc.RPC$Builder.build(RPC.java:796)
>>
>> at org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.<init>(NameNodeRpcServer.java:345)
>>
>> at org.apache.hadoop.hdfs.server.namenode.NameNode.createRpcSer
>> ver(NameNode.java:674)
>>
>> at org.apache.hadoop.hdfs.server.namenode.NameNode.initialize(N
>> ameNode.java:647)
>>
>> at org.apache.hadoop.hdfs.server.namenode.NameNode.(NameN
>> ode.java:812)
>>
>> at org.apache.hadoop.hdfs.server.namenode.NameNode.(NameN
>> ode.java:796)
>>
>> at org.apache.hadoop.hdfs.server.namenode.NameNode.createNameNo
>> de(NameNode.java:1493)
>>
>> at org.apache.hadoop.hdfs.server.namenode.NameNode.main(NameNod
>> e.java:1559)
>>
>> Caused by: java.net.BindException: Cannot assign requested address
>>
>> at sun.nio.ch.Net.bind0(Native Method)
>>
>> at sun.nio.ch.Net.bind(Net.java:433)
>>
>> at sun.nio.ch.Net.bind(Net.java:425)
>>
>> at sun.nio.ch.ServerSocketChannelImpl.bind(ServerSocketChannelI
>> mpl.java:223)
>>
>> at sun.nio.ch.ServerSocketAdaptor.bind(ServerSocketAdaptor.
>> java:74)
>>
>> at org.apache.hadoop.ipc.Server.bind(Server.java:408)
>>
>> ... 13 more
>>
>> 2017-04-27 14:17:57,171 INFO org.apache.hadoop.util.ExitUtil: Exiting
>> with status 1
>>
>> 2017-04-27 14:17:57,176 INFO org.apache.hadoop.hdfs.server.namenode.NameNode:
>> SHUTDOWN_MSG:
>>
>> /
>>
>> SHUTDOWN_MSG: Shutting down NameNode at master/1.1.1.1
>>
>> /
>>
>>
>>
>>
>>
>>
>>
>> I have changed the port number multiple times, every time I get the same
>> error. How do I get past this?
>>
>>
>>
>>
>>
>>
>>
>> Thanks
>>
>> Bhushan Pathak
>>
>>
>>
>
>
>
> -
> To unsubscribe, e-mail: user-unsubscr...@hadoop.apache.org
> For additional commands, e-mail: user-h...@hadoop.apache.org
>


Re: Hadoop 2.7.3 cluster namenode not starting

2017-04-27 Thread Vinayakumar B
I think you might need to change the IP itself.

Try something similar to 192.168.1.20
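
A minimal sketch of what that might look like, assuming the master's real
address were 192.168.1.20 (all addresses and host names below are
placeholders):

/etc/hosts on every node:

    192.168.1.20   master
    192.168.1.21   slave1
    192.168.1.22   slave2

core-site.xml:

    <property>
      <name>fs.defaultFS</name>
      <value>hdfs://192.168.1.20:51150</value>
    </property>

The key point is that "hostname -i" (or "ip addr") on the master must report
the same address the namenode is asked to bind to, i.e. an address actually
assigned to one of its interfaces; otherwise the bind fails with the exception
above.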

-Vinay

On 27 Apr 2017 8:20 pm, "Bhushan Pathak"  wrote:

> Hello
>
> I have a 3-node cluster where I have installed hadoop 2.7.3. I have
> updated core-site.xml, mapred-site.xml, slaves, hdfs-site.xml,
> yarn-site.xml, hadoop-env.sh files with basic settings on all 3 nodes.
>
> When I execute start-dfs.sh on the master node, the namenode does not
> start. The logs contain the following error -
> 2017-04-27 14:17:57,166 ERROR org.apache.hadoop.hdfs.server.namenode.NameNode:
> Failed to start namenode.
> java.net.BindException: Problem binding to [master:51150]
> java.net.BindException: Cannot assign requested address; For more details
> see:  http://wiki.apache.org/hadoop/BindException
> at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native
> Method)
> at sun.reflect.NativeConstructorAccessorImpl.newInstance(
> NativeConstructorAccessorImpl.java:62)
> at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(
> DelegatingConstructorAccessorImpl.java:45)
> at java.lang.reflect.Constructor.newInstance(Constructor.java:423)
> at org.apache.hadoop.net.NetUtils.wrapWithMessage(
> NetUtils.java:792)
> at org.apache.hadoop.net.NetUtils.wrapException(NetUtils.java:721)
> at org.apache.hadoop.ipc.Server.bind(Server.java:425)
> at org.apache.hadoop.ipc.Server$Listener.(Server.java:574)
> at org.apache.hadoop.ipc.Server.(Server.java:2215)
> at org.apache.hadoop.ipc.RPC$Server.(RPC.java:951)
> at org.apache.hadoop.ipc.ProtobufRpcEngine$Server.<
> init>(ProtobufRpcEngine.java:534)
> at org.apache.hadoop.ipc.ProtobufRpcEngine.getServer(
> ProtobufRpcEngine.java:509)
> at org.apache.hadoop.ipc.RPC$Builder.build(RPC.java:796)
> at org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.<
> init>(NameNodeRpcServer.java:345)
> at org.apache.hadoop.hdfs.server.namenode.NameNode.
> createRpcServer(NameNode.java:674)
> at org.apache.hadoop.hdfs.server.namenode.NameNode.initialize(
> NameNode.java:647)
> at org.apache.hadoop.hdfs.server.namenode.NameNode.(
> NameNode.java:812)
> at org.apache.hadoop.hdfs.server.namenode.NameNode.(
> NameNode.java:796)
> at org.apache.hadoop.hdfs.server.namenode.NameNode.
> createNameNode(NameNode.java:1493)
> at org.apache.hadoop.hdfs.server.namenode.NameNode.main(
> NameNode.java:1559)
> Caused by: java.net.BindException: Cannot assign requested address
> at sun.nio.ch.Net.bind0(Native Method)
> at sun.nio.ch.Net.bind(Net.java:433)
> at sun.nio.ch.Net.bind(Net.java:425)
> at sun.nio.ch.ServerSocketChannelImpl.bind(
> ServerSocketChannelImpl.java:223)
> at sun.nio.ch.ServerSocketAdaptor.bind(
> ServerSocketAdaptor.java:74)
> at org.apache.hadoop.ipc.Server.bind(Server.java:408)
> ... 13 more
> 2017-04-27 14:17:57,171 INFO org.apache.hadoop.util.ExitUtil: Exiting
> with status 1
> 2017-04-27 14:17:57,176 INFO org.apache.hadoop.hdfs.server.namenode.NameNode:
> SHUTDOWN_MSG:
> /
> SHUTDOWN_MSG: Shutting down NameNode at master/1.1.1.1
> /
>
>
>
> I have changed the port number multiple times, every time I get the same
> error. How do I get past this?
>
>
>
> Thanks
> Bhushan Pathak
>


Hadoop clustsr

2017-04-27 Thread Ahmed Altaj
Hi,
Does anyone have an image of a 3-node cluster of Hadoop version 1?
Best regards

Hadoop 2.7.3 cluster namenode not starting

2017-04-27 Thread Bhushan Pathak
Hello

I have a 3-node cluster where I have installed hadoop 2.7.3. I have updated
core-site.xml, mapred-site.xml, slaves, hdfs-site.xml, yarn-site.xml,
hadoop-env.sh files with basic settings on all 3 nodes.

When I execute start-dfs.sh on the master node, the namenode does not
start. The logs contain the following error -
2017-04-27 14:17:57,166 ERROR
org.apache.hadoop.hdfs.server.namenode.NameNode: Failed to start namenode.
java.net.BindException: Problem binding to [master:51150]
java.net.BindException: Cannot assign requested address; For more details
see:  http://wiki.apache.org/hadoop/BindException
        at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
        at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62)
        at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
        at java.lang.reflect.Constructor.newInstance(Constructor.java:423)
        at org.apache.hadoop.net.NetUtils.wrapWithMessage(NetUtils.java:792)
        at org.apache.hadoop.net.NetUtils.wrapException(NetUtils.java:721)
        at org.apache.hadoop.ipc.Server.bind(Server.java:425)
        at org.apache.hadoop.ipc.Server$Listener.<init>(Server.java:574)
        at org.apache.hadoop.ipc.Server.<init>(Server.java:2215)
        at org.apache.hadoop.ipc.RPC$Server.<init>(RPC.java:951)
        at org.apache.hadoop.ipc.ProtobufRpcEngine$Server.<init>(ProtobufRpcEngine.java:534)
        at org.apache.hadoop.ipc.ProtobufRpcEngine.getServer(ProtobufRpcEngine.java:509)
        at org.apache.hadoop.ipc.RPC$Builder.build(RPC.java:796)
        at org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.<init>(NameNodeRpcServer.java:345)
        at org.apache.hadoop.hdfs.server.namenode.NameNode.createRpcServer(NameNode.java:674)
        at org.apache.hadoop.hdfs.server.namenode.NameNode.initialize(NameNode.java:647)
        at org.apache.hadoop.hdfs.server.namenode.NameNode.<init>(NameNode.java:812)
        at org.apache.hadoop.hdfs.server.namenode.NameNode.<init>(NameNode.java:796)
        at org.apache.hadoop.hdfs.server.namenode.NameNode.createNameNode(NameNode.java:1493)
        at org.apache.hadoop.hdfs.server.namenode.NameNode.main(NameNode.java:1559)
Caused by: java.net.BindException: Cannot assign requested address
        at sun.nio.ch.Net.bind0(Native Method)
        at sun.nio.ch.Net.bind(Net.java:433)
        at sun.nio.ch.Net.bind(Net.java:425)
        at sun.nio.ch.ServerSocketChannelImpl.bind(ServerSocketChannelImpl.java:223)
        at sun.nio.ch.ServerSocketAdaptor.bind(ServerSocketAdaptor.java:74)
        at org.apache.hadoop.ipc.Server.bind(Server.java:408)
        ... 13 more
2017-04-27 14:17:57,171 INFO org.apache.hadoop.util.ExitUtil: Exiting with
status 1
2017-04-27 14:17:57,176 INFO
org.apache.hadoop.hdfs.server.namenode.NameNode: SHUTDOWN_MSG:
/
SHUTDOWN_MSG: Shutting down NameNode at master/1.1.1.1
/



I have changed the port number multiple times, every time I get the same
error. How do I get past this?



Thanks
Bhushan Pathak


unsubscribe

2017-04-27 Thread Bourre, Marc
unsubscribe





Re: HDFS HA (Based on QJM) Failover Frequently with Large FSimage and Busy Requests

2017-04-27 Thread gu.yizhou
1. Is service-rpc configured in namenode?


Not yet. I have considered configuring servicerpc, but I was thinking about the
possible disadvantages as well.


When failover happens because of too many waiting RPCs, and zkfc gets normal
service from another port, is it possible that the clients get a lot of
failures?






2. ha.health-monitor.rpc-timeout.ms - Also consider increasing zkfc rpc call 
timeout to namenode. 


The same worry: is it possible that the clients get a lot of failures?






Thanks very much,


Doris










---










1. Is service-rpc configured in namenode?
(dfs.namenode.servicerpc-address - this will create another RPC server 
listening on another port (say 8021) to handle all service (non-client) 
requests and hence default rpc address (say 8020) will handle only client 
requests.) 

By doing this way, you would be able to decouple client and service requests. 
Here service requests corresponds to rpc calls from DN, ZKFC etc. Hence when 
cluster is too busy because of too many client operations, ZKFC requests will 
get processed by different rpc and hence need not wait in same queue as client 
requests.)  

2. ha.health-monitor.rpc-timeout.ms - Also consider increasing zkfc rpc call 
timeout to namenode. 

By default this is 45 secs. You can consider increasing it to 1 or 2 mins 
depending upon your cluster usage.

Thanks,
Chackra 




On Wed, Apr 26, 2017 at 11:50 AM,  <gu.yiz...@zte.com.cn> wrote:


Hi All,



HDFS HA (Based on QJM) , 5 journalnodes, Apache 2.5.0 on Redhat 6.5 with 
JDK1.7.


We put 1P+ of data into HDFS, with an FSimage of about 10G; as we keep making more
requests to this HDFS, the namenodes fail over frequently. I want to know the
following:






1. ANN (active namenode) downloading fsimage.ckpt_* from SNN (standby
namenode) leads to very high disk IO; at the same time, zkfc fails to monitor
the health of the ANN due to timeout. Is there any relationship between the high disk
IO and the zkfc monitor request timeout? Every failover happened during a ckpt
download, but not every ckpt download led to failover.











2017-03-15 09:27:05,750 WARN org.apache.hadoop.ha.HealthMonitor: 
Transport-level exception trying to monitor health of NameNode at nn1/ip:8020: 
Call From nn1/ip to nn1:8020 failed on socket timeout exception: 
java.net.SocketTimeoutException: 45000 millis timeout while waiting for channel 
to be ready for read. ch : java.nio.channels.SocketChannel[connected 
local=/ip:48536 remote=nn1/ip:8020] For more details see:  
http://wiki.apache.org/hadoop/SocketTimeout


2017-03-15 09:27:05,750 INFO org.apache.hadoop.ha.HealthMonitor: Entering state 
SERVICE_NOT_RESPONDING




2. Due to SERVICE_NOT_RESPONDING, the other zkfc fences the old ANN (sshfence
is configured). Before it is restarted by my additional monitor, the old ANN log
sometimes looks like this; what is "Rescan of postponedMisreplicatedBlocks"? Does
this have any relationship with failover?

2017-03-15 04:36:00,866 INFO 
org.apache.hadoop.hdfs.server.blockmanagement.CacheReplicationMonitor: 
Rescanning after 3 milliseconds

2017-03-15 04:36:00,931 INFO 
org.apache.hadoop.hdfs.server.blockmanagement.CacheReplicationMonitor: Scanned 
0 directive(s) and 0 block(s) in 65 millisecond(s).

2017-03-15 04:36:01,127 INFO 
org.apache.hadoop.hdfs.server.blockmanagement.BlockManager: Rescan of 
postponedMisreplicatedBlocks completed in 23 msecs. 247361 blocks are left. 0 
blocks are removed.

2017-03-15 04:36:04,145 INFO 
org.apache.hadoop.hdfs.server.blockmanagement.BlockManager: Rescan of 
postponedMisreplicatedBlocks completed in 17 msecs. 247361 blocks are left. 0 
blocks are removed.

2017-03-15 04:36:07,159 INFO 
org.apache.hadoop.hdfs.server.blockmanagement.BlockManager: Rescan of 
postponedMisreplicatedBlocks completed in 14 msecs. 247361 blocks are left. 0 
blocks are removed.

2017-03-15 04:36:10,173 INFO 
org.apache.hadoop.hdfs.server.blockmanagement.BlockManager: Rescan of 
postponedMisreplicatedBlocks completed in 14 msecs. 247361 blocks are left. 0 
blocks are removed.

2017-03-15 04:36:13,188 INFO 
org.apache.hadoop.hdfs.server.blockmanagement.BlockManager: Rescan of 
postponedMisreplicatedBlocks completed in 14 msecs. 247361 blocks are left. 0 
blocks are removed.

2017-03-15 04:36:16,211 INFO 
org.apache.hadoop.hdfs.server.blockmanagement.BlockManager: Rescan of 
postponedMisreplicatedBlocks completed in 23 msecs. 247361 blocks are left. 0 
blocks are removed.

2017-03-15 04:36:19,234 INFO 
org.apache.hadoop.hdfs.server.blockmanagement.BlockManager: Rescan of 
postponedMisreplicatedBlocks completed in 22 msecs. 247361 blocks are left. 0 
blocks are removed.

2017-03-15 04:36:28,994 INFO org.apache.hadoop.hdfs.server.namenode.NameNode: 
STARTUP_MSG:






3. I configured two dfs.namenode.name.dir directories and one
dfs.journalnode.edits.dir (which shares a disk with the NN); is that suitable? Or
does this have any

Re: Hadoop 2.7.3 cluster namenode not starting

2017-04-27 Thread Bhushan Pathak
Some additional info -
OS: CentOS 7
RAM: 8GB

Thanks
Bhushan Pathak

Thanks
Bhushan Pathak

On Thu, Apr 27, 2017 at 3:34 PM, Bhushan Pathak 
wrote:

> Yes, I'm running the command on the master node.
>
> Attached are the config files & the hosts file. I have updated the IP
> address only as per company policy, so that original IP addresses are not
> shared.
>
> The same config files & hosts file exist on all 3 nodes.
>
> Thanks
> Bhushan Pathak
>
> Thanks
> Bhushan Pathak
>
> On Thu, Apr 27, 2017 at 3:02 PM, Brahma Reddy Battula <
> brahmareddy.batt...@huawei.com> wrote:
>
>> Are you sure that you are starting in same machine (master)..?
>>
>>
>>
>> Please share “/etc/hosts” and configuration files..
>>
>>
>>
>>
>>
>> Regards
>>
>> Brahma Reddy Battula
>>
>>
>>
>> *From:* Bhushan Pathak [mailto:bhushan.patha...@gmail.com]
>> *Sent:* 27 April 2017 17:18
>> *To:* user@hadoop.apache.org
>> *Subject:* Fwd: Hadoop 2.7.3 cluster namenode not starting
>>
>>
>>
>> Hello
>>
>>
>>
>> I have a 3-node cluster where I have installed hadoop 2.7.3. I have
>> updated core-site.xml, mapred-site.xml, slaves, hdfs-site.xml,
>> yarn-site.xml, hadoop-env.sh files with basic settings on all 3 nodes.
>>
>>
>>
>> When I execute start-dfs.sh on the master node, the namenode does not
>> start. The logs contain the following error -
>>
>> 2017-04-27 14:17:57,166 ERROR 
>> org.apache.hadoop.hdfs.server.namenode.NameNode:
>> Failed to start namenode.
>>
>> java.net.BindException: Problem binding to [master:51150]
>> java.net.BindException: Cannot assign requested address; For more details
>> see:  http://wiki.apache.org/hadoop/BindException
>>
>> at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native
>> Method)
>>
>> at sun.reflect.NativeConstructorAccessorImpl.newInstance(Native
>> ConstructorAccessorImpl.java:62)
>>
>> at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(De
>> legatingConstructorAccessorImpl.java:45)
>>
>> at java.lang.reflect.Constructor.newInstance(Constructor.java:4
>> 23)
>>
>> at org.apache.hadoop.net.NetUtils.wrapWithMessage(NetUtils.
>> java:792)
>>
>> at org.apache.hadoop.net.NetUtils.wrapException(NetUtils.java:
>> 721)
>>
>> at org.apache.hadoop.ipc.Server.bind(Server.java:425)
>>
>> at org.apache.hadoop.ipc.Server$Listener.(Server.java:574)
>>
>> at org.apache.hadoop.ipc.Server.(Server.java:2215)
>>
>> at org.apache.hadoop.ipc.RPC$Server.(RPC.java:951)
>>
>> at org.apache.hadoop.ipc.ProtobufRpcEngine$Server.(
>> ProtobufRpcEngine.java:534)
>>
>> at org.apache.hadoop.ipc.ProtobufRpcEngine.getServer(ProtobufRp
>> cEngine.java:509)
>>
>> at org.apache.hadoop.ipc.RPC$Builder.build(RPC.java:796)
>>
>> at org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.<init>(NameNodeRpcServer.java:345)
>>
>> at org.apache.hadoop.hdfs.server.namenode.NameNode.createRpcSer
>> ver(NameNode.java:674)
>>
>> at org.apache.hadoop.hdfs.server.namenode.NameNode.initialize(N
>> ameNode.java:647)
>>
>> at org.apache.hadoop.hdfs.server.namenode.NameNode.(NameN
>> ode.java:812)
>>
>> at org.apache.hadoop.hdfs.server.namenode.NameNode.(NameN
>> ode.java:796)
>>
>> at org.apache.hadoop.hdfs.server.namenode.NameNode.createNameNo
>> de(NameNode.java:1493)
>>
>> at org.apache.hadoop.hdfs.server.namenode.NameNode.main(NameNod
>> e.java:1559)
>>
>> Caused by: java.net.BindException: Cannot assign requested address
>>
>> at sun.nio.ch.Net.bind0(Native Method)
>>
>> at sun.nio.ch.Net.bind(Net.java:433)
>>
>> at sun.nio.ch.Net.bind(Net.java:425)
>>
>> at sun.nio.ch.ServerSocketChannelImpl.bind(ServerSocketChannelI
>> mpl.java:223)
>>
>> at sun.nio.ch.ServerSocketAdaptor.bind(ServerSocketAdaptor.
>> java:74)
>>
>> at org.apache.hadoop.ipc.Server.bind(Server.java:408)
>>
>> ... 13 more
>>
>> 2017-04-27 14:17:57,171 INFO org.apache.hadoop.util.ExitUtil: Exiting
>> with status 1
>>
>> 2017-04-27 14:17:57,176 INFO org.apache.hadoop.hdfs.server.namenode.NameNode:
>> SHUTDOWN_MSG:
>>
>> /
>>
>> SHUTDOWN_MSG: Shutting down NameNode at master/1.1.1.1
>>
>> /
>>
>>
>>
>>
>>
>>
>>
>> I have changed the port number multiple times, every time I get the same
>> error. How do I get past this?
>>
>>
>>
>>
>>
>>
>>
>> Thanks
>>
>> Bhushan Pathak
>>
>>
>>
>
>


Re: Hadoop 2.7.3 cluster namenode not starting

2017-04-27 Thread Bhushan Pathak
Yes, I'm running the command on the master node.

Attached are the config files & the hosts file. I have updated the IP
address only as per company policy, so that original IP addresses are not
shared.

The same config files & hosts file exist on all 3 nodes.

Thanks
Bhushan Pathak

Thanks
Bhushan Pathak

On Thu, Apr 27, 2017 at 3:02 PM, Brahma Reddy Battula <
brahmareddy.batt...@huawei.com> wrote:

> Are you sure that you are starting in same machine (master)..?
>
>
>
> Please share “/etc/hosts” and configuration files..
>
>
>
>
>
> Regards
>
> Brahma Reddy Battula
>
>
>
> *From:* Bhushan Pathak [mailto:bhushan.patha...@gmail.com]
> *Sent:* 27 April 2017 17:18
> *To:* user@hadoop.apache.org
> *Subject:* Fwd: Hadoop 2.7.3 cluster namenode not starting
>
>
>
> Hello
>
>
>
> I have a 3-node cluster where I have installed hadoop 2.7.3. I have
> updated core-site.xml, mapred-site.xml, slaves, hdfs-site.xml,
> yarn-site.xml, hadoop-env.sh files with basic settings on all 3 nodes.
>
>
>
> When I execute start-dfs.sh on the master node, the namenode does not
> start. The logs contain the following error -
>
> 2017-04-27 14:17:57,166 ERROR org.apache.hadoop.hdfs.server.namenode.NameNode:
> Failed to start namenode.
>
> java.net.BindException: Problem binding to [master:51150]
> java.net.BindException: Cannot assign requested address; For more details
> see:  http://wiki.apache.org/hadoop/BindException
>
> at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native
> Method)
>
> at sun.reflect.NativeConstructorAccessorImpl.newInstance(
> NativeConstructorAccessorImpl.java:62)
>
> at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(
> DelegatingConstructorAccessorImpl.java:45)
>
> at java.lang.reflect.Constructor.newInstance(Constructor.java:423)
>
> at org.apache.hadoop.net.NetUtils.wrapWithMessage(
> NetUtils.java:792)
>
> at org.apache.hadoop.net.NetUtils.wrapException(NetUtils.java:721)
>
> at org.apache.hadoop.ipc.Server.bind(Server.java:425)
>
> at org.apache.hadoop.ipc.Server$Listener.(Server.java:574)
>
> at org.apache.hadoop.ipc.Server.(Server.java:2215)
>
> at org.apache.hadoop.ipc.RPC$Server.(RPC.java:951)
>
> at org.apache.hadoop.ipc.ProtobufRpcEngine$Server.<
> init>(ProtobufRpcEngine.java:534)
>
> at org.apache.hadoop.ipc.ProtobufRpcEngine.getServer(
> ProtobufRpcEngine.java:509)
>
> at org.apache.hadoop.ipc.RPC$Builder.build(RPC.java:796)
>
> at org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.<
> init>(NameNodeRpcServer.java:345)
>
> at org.apache.hadoop.hdfs.server.namenode.NameNode.
> createRpcServer(NameNode.java:674)
>
> at org.apache.hadoop.hdfs.server.namenode.NameNode.initialize(
> NameNode.java:647)
>
> at org.apache.hadoop.hdfs.server.namenode.NameNode.(
> NameNode.java:812)
>
> at org.apache.hadoop.hdfs.server.namenode.NameNode.(
> NameNode.java:796)
>
> at org.apache.hadoop.hdfs.server.namenode.NameNode.
> createNameNode(NameNode.java:1493)
>
> at org.apache.hadoop.hdfs.server.namenode.NameNode.main(
> NameNode.java:1559)
>
> Caused by: java.net.BindException: Cannot assign requested address
>
> at sun.nio.ch.Net.bind0(Native Method)
>
> at sun.nio.ch.Net.bind(Net.java:433)
>
> at sun.nio.ch.Net.bind(Net.java:425)
>
> at sun.nio.ch.ServerSocketChannelImpl.bind(
> ServerSocketChannelImpl.java:223)
>
> at sun.nio.ch.ServerSocketAdaptor.bind(
> ServerSocketAdaptor.java:74)
>
> at org.apache.hadoop.ipc.Server.bind(Server.java:408)
>
> ... 13 more
>
> 2017-04-27 14:17:57,171 INFO org.apache.hadoop.util.ExitUtil: Exiting
> with status 1
>
> 2017-04-27 14:17:57,176 INFO org.apache.hadoop.hdfs.server.namenode.NameNode:
> SHUTDOWN_MSG:
>
> /
>
> SHUTDOWN_MSG: Shutting down NameNode at master/1.1.1.1
>
> /
>
>
>
>
>
>
>
> I have changed the port number multiple times, every time I get the same
> error. How do I get past this?
>
>
>
>
>
>
>
> Thanks
>
> Bhushan Pathak
>
>
>








[Attachment: core-site.xml]

  <property>
    <name>fs.defaultFS</name>
    <value>hdfs://1.1.1.1:51150</value>
  </property>

[Attachment: hadoop-env.sh -- Bourne shell script, not shown in the archive]

[Attachment: hdfs-site.xml]

  <property>
    <name>dfs.namenode.name.dir</name>
    <value>file:/mnt/hadoop_store/datanode</value>
  </property>
  <property>
    <name>dfs.datanode.name.dir</name>
    <value>file:/mnt/hadoop_store/namenode</value>
  </property>

[Attachment: hosts -- binary data, not shown in the archive]

[Attachment: mapred-site.xml]

  <property>
    <name>mapreduce.framework.name</name>
    <value>yarn</value>
  </property>

[Attachment: slaves -- binary data, not shown in the archive]

[Attachment: yarn-site.xml]

  <property>
    <name>yarn.resourcemanager.resource-tracker.address</name>
    <value>1.1.1.1:8025</value>
  </property>
  <property>
    <name>yarn.resourcemanager.scheduler.address</name>
    <value>1.1.1.1:8030</value>
  </property>
  <property>
    <name>yarn.resourcemanager.address</name>
    <value>1.1.1.1:8050</value>
  </property>
  <property>
    <name>yarn.nodemanager.aux-services</name>
    <value>mapreduce_shuffle</value>
  </property>
  <property>
    <name>yarn.nodemanager.aux-services.mapreduce.shuffle.class</name>
    (value truncated in the archive)

RE: Hadoop 2.7.3 cluster namenode not starting

2017-04-27 Thread Brahma Reddy Battula
Are you sure that you are starting in same machine (master)..?

Please share “/etc/hosts” and configuration files..


Regards
Brahma Reddy Battula

From: Bhushan Pathak [mailto:bhushan.patha...@gmail.com]
Sent: 27 April 2017 17:18
To: user@hadoop.apache.org
Subject: Fwd: Hadoop 2.7.3 cluster namenode not starting

Hello

I have a 3-node cluster where I have installed hadoop 2.7.3. I have updated 
core-site.xml, mapred-site.xml, slaves, hdfs-site.xml, yarn-site.xml, 
hadoop-env.sh files with basic settings on all 3 nodes.

When I execute start-dfs.sh on the master node, the namenode does not start. 
The logs contain the following error -
2017-04-27 14:17:57,166 ERROR org.apache.hadoop.hdfs.server.namenode.NameNode: 
Failed to start namenode.
java.net.BindException: Problem binding to [master:51150] 
java.net.BindException: Cannot assign requested address; For more details see:  
http://wiki.apache.org/hadoop/BindException
        at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
        at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62)
        at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
        at java.lang.reflect.Constructor.newInstance(Constructor.java:423)
        at org.apache.hadoop.net.NetUtils.wrapWithMessage(NetUtils.java:792)
        at org.apache.hadoop.net.NetUtils.wrapException(NetUtils.java:721)
        at org.apache.hadoop.ipc.Server.bind(Server.java:425)
        at org.apache.hadoop.ipc.Server$Listener.<init>(Server.java:574)
        at org.apache.hadoop.ipc.Server.<init>(Server.java:2215)
        at org.apache.hadoop.ipc.RPC$Server.<init>(RPC.java:951)
        at org.apache.hadoop.ipc.ProtobufRpcEngine$Server.<init>(ProtobufRpcEngine.java:534)
        at org.apache.hadoop.ipc.ProtobufRpcEngine.getServer(ProtobufRpcEngine.java:509)
        at org.apache.hadoop.ipc.RPC$Builder.build(RPC.java:796)
        at org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.<init>(NameNodeRpcServer.java:345)
        at org.apache.hadoop.hdfs.server.namenode.NameNode.createRpcServer(NameNode.java:674)
        at org.apache.hadoop.hdfs.server.namenode.NameNode.initialize(NameNode.java:647)
        at org.apache.hadoop.hdfs.server.namenode.NameNode.<init>(NameNode.java:812)
        at org.apache.hadoop.hdfs.server.namenode.NameNode.<init>(NameNode.java:796)
        at org.apache.hadoop.hdfs.server.namenode.NameNode.createNameNode(NameNode.java:1493)
        at org.apache.hadoop.hdfs.server.namenode.NameNode.main(NameNode.java:1559)
Caused by: java.net.BindException: Cannot assign requested address
        at sun.nio.ch.Net.bind0(Native Method)
        at sun.nio.ch.Net.bind(Net.java:433)
        at sun.nio.ch.Net.bind(Net.java:425)
        at sun.nio.ch.ServerSocketChannelImpl.bind(ServerSocketChannelImpl.java:223)
        at sun.nio.ch.ServerSocketAdaptor.bind(ServerSocketAdaptor.java:74)
        at org.apache.hadoop.ipc.Server.bind(Server.java:408)
        ... 13 more
2017-04-27 14:17:57,171 INFO org.apache.hadoop.util.ExitUtil: Exiting with 
status 1
2017-04-27 14:17:57,176 INFO org.apache.hadoop.hdfs.server.namenode.NameNode: 
SHUTDOWN_MSG:
/
SHUTDOWN_MSG: Shutting down NameNode at master/1.1.1.1
/



I have changed the port number multiple times, every time I get the same error. 
How do I get past this?



Thanks
Bhushan Pathak



Fwd: Hadoop 2.7.3 cluster namenode not starting

2017-04-27 Thread Bhushan Pathak
Hello

I have a 3-node cluster where I have installed hadoop 2.7.3. I have updated
core-site.xml, mapred-site.xml, slaves, hdfs-site.xml, yarn-site.xml,
hadoop-env.sh files with basic settings on all 3 nodes.

When I execute start-dfs.sh on the master node, the namenode does not
start. The logs contain the following error -
2017-04-27 14:17:57,166 ERROR org.apache.hadoop.hdfs.server.namenode.NameNode:
Failed to start namenode.
java.net.BindException: Problem binding to [master:51150]
java.net.BindException: Cannot assign requested address; For more details
see:  http://wiki.apache.org/hadoop/BindException
        at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
        at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62)
        at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
        at java.lang.reflect.Constructor.newInstance(Constructor.java:423)
        at org.apache.hadoop.net.NetUtils.wrapWithMessage(NetUtils.java:792)
        at org.apache.hadoop.net.NetUtils.wrapException(NetUtils.java:721)
        at org.apache.hadoop.ipc.Server.bind(Server.java:425)
        at org.apache.hadoop.ipc.Server$Listener.<init>(Server.java:574)
        at org.apache.hadoop.ipc.Server.<init>(Server.java:2215)
        at org.apache.hadoop.ipc.RPC$Server.<init>(RPC.java:951)
        at org.apache.hadoop.ipc.ProtobufRpcEngine$Server.<init>(ProtobufRpcEngine.java:534)
        at org.apache.hadoop.ipc.ProtobufRpcEngine.getServer(ProtobufRpcEngine.java:509)
        at org.apache.hadoop.ipc.RPC$Builder.build(RPC.java:796)
        at org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.<init>(NameNodeRpcServer.java:345)
        at org.apache.hadoop.hdfs.server.namenode.NameNode.createRpcServer(NameNode.java:674)
        at org.apache.hadoop.hdfs.server.namenode.NameNode.initialize(NameNode.java:647)
        at org.apache.hadoop.hdfs.server.namenode.NameNode.<init>(NameNode.java:812)
        at org.apache.hadoop.hdfs.server.namenode.NameNode.<init>(NameNode.java:796)
        at org.apache.hadoop.hdfs.server.namenode.NameNode.createNameNode(NameNode.java:1493)
        at org.apache.hadoop.hdfs.server.namenode.NameNode.main(NameNode.java:1559)
Caused by: java.net.BindException: Cannot assign requested address
        at sun.nio.ch.Net.bind0(Native Method)
        at sun.nio.ch.Net.bind(Net.java:433)
        at sun.nio.ch.Net.bind(Net.java:425)
        at sun.nio.ch.ServerSocketChannelImpl.bind(ServerSocketChannelImpl.java:223)
        at sun.nio.ch.ServerSocketAdaptor.bind(ServerSocketAdaptor.java:74)
        at org.apache.hadoop.ipc.Server.bind(Server.java:408)
        ... 13 more
2017-04-27 14:17:57,171 INFO org.apache.hadoop.util.ExitUtil: Exiting with
status 1
2017-04-27 14:17:57,176 INFO org.apache.hadoop.hdfs.server.namenode.NameNode:
SHUTDOWN_MSG:
/
SHUTDOWN_MSG: Shutting down NameNode at master/1.1.1.1
/



I have changed the port number multiple times, every time I get the same
error. How do I get past this?



Thanks
Bhushan Pathak