Namenode not starting

2018-10-16 Thread Atul Rajan
Hello community,

My cluster was up until today, when my namenode suddenly went down. When I
stop and start the cluster again, the datanodes do not stop gracefully.
Can you please guide me on how to bring up the namenode from the CLI?

Sent from my iPhone



Re: namenode not starting

2012-08-24 Thread Nitin Pawar
Did you run the command bin/hadoop namenode -format before starting
the namenode?

On Fri, Aug 24, 2012 at 12:58 PM, Abhay Ratnaparkhi wrote:
> Hello,
>
> I had a running hadoop cluster.
> I restarted it, and after that the namenode is unable to start. I am getting
> an error saying that it's not formatted. :(
> Is it possible to recover the data on HDFS?
>
> 2012-08-24 03:17:55,378 ERROR org.apache.hadoop.hdfs.server.namenode.FSNamesystem: FSNamesystem initialization failed.
> java.io.IOException: NameNode is not formatted.
>         at org.apache.hadoop.hdfs.server.namenode.FSImage.recoverTransitionRead(FSImage.java:434)
>         at org.apache.hadoop.hdfs.server.namenode.FSDirectory.loadFSImage(FSDirectory.java:110)
>         at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.initialize(FSNamesystem.java:291)
>         at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.<init>(FSNamesystem.java:270)
>         at org.apache.hadoop.hdfs.server.namenode.NameNode.loadNamesystem(NameNode.java:271)
>         at org.apache.hadoop.hdfs.server.namenode.NameNode.initialize(NameNode.java:303)
>         at org.apache.hadoop.hdfs.server.namenode.NameNode.<init>(NameNode.java:433)
>         at org.apache.hadoop.hdfs.server.namenode.NameNode.<init>(NameNode.java:421)
>         at org.apache.hadoop.hdfs.server.namenode.NameNode.createNameNode(NameNode.java:1359)
>         at org.apache.hadoop.hdfs.server.namenode.NameNode.main(NameNode.java:1368)
> 2012-08-24 03:17:55,380 ERROR org.apache.hadoop.hdfs.server.namenode.NameNode: java.io.IOException: NameNode is not formatted.
>         at org.apache.hadoop.hdfs.server.namenode.FSImage.recoverTransitionRead(FSImage.java:434)
>         at org.apache.hadoop.hdfs.server.namenode.FSDirectory.loadFSImage(FSDirectory.java:110)
>         at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.initialize(FSNamesystem.java:291)
>         at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.<init>(FSNamesystem.java:270)
>         at org.apache.hadoop.hdfs.server.namenode.NameNode.loadNamesystem(NameNode.java:271)
>         at org.apache.hadoop.hdfs.server.namenode.NameNode.initialize(NameNode.java:303)
>         at org.apache.hadoop.hdfs.server.namenode.NameNode.<init>(NameNode.java:433)
>         at org.apache.hadoop.hdfs.server.namenode.NameNode.<init>(NameNode.java:421)
>         at org.apache.hadoop.hdfs.server.namenode.NameNode.createNameNode(NameNode.java:1359)
>         at org.apache.hadoop.hdfs.server.namenode.NameNode.main(NameNode.java:1368)
>
> Regards,
> Abhay
>
>



-- 
Nitin Pawar


Re: namenode not starting

2012-08-24 Thread vivek
Hi,
Have you run the command namenode -format?
Thanks & regards,
Vivek



-- 
Thanks and Regards,
VIVEK KOUL


Re: namenode not starting

2012-08-24 Thread Bejoy KS
Hi Abhay

What is the value of hadoop.tmp.dir or dfs.name.dir? If it was set to /tmp,
the contents would be deleted on an OS restart. You need to change this
location before you start your NN.
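
For illustration, a minimal hdfs-site.xml override of that location might look
like this (the path is hypothetical; any directory that survives reboots will do):

  <property>
    <name>dfs.name.dir</name>
    <value>/var/lib/hadoop/dfs/name</value>
  </property>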
Regards
Bejoy KS

Sent from handheld, please excuse typos.

-Original Message-
From: Abhay Ratnaparkhi 
Date: Fri, 24 Aug 2012 12:58:41 
To: 
Reply-To: user@hadoop.apache.org
Subject: namenode not starting




Re: namenode not starting

2012-08-24 Thread Abhay Ratnaparkhi
Hello,

I was using the cluster for a long time and had not formatted the namenode.
I only ran the bin/stop-all.sh and bin/start-all.sh scripts.

I am using NFS for dfs.name.dir.
hadoop.tmp.dir is a /tmp directory. I've not restarted the OS. Is there any
way to recover the data?

Thanks,
Abhay



Re: namenode not starting

2012-08-24 Thread Håvard Wahl Kongsgård
You should start with a reboot of the system.

A lesson to everyone: this is exactly why you should have a secondary
name node
(http://wiki.apache.org/hadoop/FAQ#What_is_the_purpose_of_the_secondary_name-node.3F)
and run the namenode on a mirrored RAID-5/10 disk.
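
As a rough sketch for Hadoop 1.x (hostname and path are hypothetical):
conf/masters is where the start scripts look for secondary namenode hosts,
and fs.checkpoint.dir controls where its checkpoints land:

  # run the secondary namenode on its own host
  echo "snn.example.com" > conf/masters
  # keep its checkpoints on persistent storage (core-site.xml):
  #   <property>
  #     <name>fs.checkpoint.dir</name>
  #     <value>/var/lib/hadoop/dfs/namesecondary</value>
  #   </property>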


-Håvard






-- 
Håvard Wahl Kongsgård
Faculty of Medicine &
Department of Mathematical Sciences
NTNU

http://havard.security-review.net/


RE: namenode not starting

2012-08-24 Thread Leo Leung
Abhay,
  Sounds like your namenode cannot find the metadata it needs to
start (the /current directory, fsimage, checkpoints, etc.).

  Basically, if you cannot locate that data locally or on your NFS server,
your cluster is busted.

  But let's be optimistic about this.

  There is a chance that your NFS server is down or the mounted path is lost.

  If it is NFS mounted (as you suggested), check that your host still has that
path mounted (from the proper NFS server).
  ( [shell] mount ) can tell.
  * Obviously, if you originally mounted from foo:/mydata and are now on
bar:/mydata, you'll need to do some digging to find which NFS server it was
writing to before.
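
A quick check along those lines might be (mount point hypothetical):

  mount | grep nfs                        # is the export still there, and from which server?
  df -h /path/to/nfs/mount                # does the mount still answer?
  ls -l /path/to/nfs/mount/name/current   # fsimage and edits should live here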

  If you fail to locate your namenode metadata (locally or on any of your NFS
servers), either because the NFS server decided to become a black hole or
someone removed it, and you don't have a backup of your namenode (tape or
Secondary NameNode), then I think you are in a world of hurt.

  In theory you can read the blocks on the DN and try to recover some of your
data (assuming it is not compressed via a codec).
Hmm.. anyone know about recovery services? (^^)



-Original Message-
From: Håvard Wahl Kongsgård [mailto:haavard.kongsga...@gmail.com] 
Sent: Friday, August 24, 2012 5:38 AM
To: user@hadoop.apache.org
Subject: Re: namenode not starting


RE: namenode not starting

2012-08-24 Thread Siddharth Tiwari

Hi Abhay,

I totally agree with Bejoy. Can you paste your mapred-site.xml and
hdfs-site.xml content here?

Cheers !!!

Siddharth Tiwari

Have a refreshing day !!!
"Every duty is holy, and devotion to duty is the highest form of worship of 
God.” 

"Maybe other people will try to limit me but I don't limit myself"



Re: namenode not starting

2012-08-25 Thread Harsh J
Abhay,

I suspect that if you haven't set your dfs.name.dir explicitly, then
you haven't set fs.checkpoint.dir either, and since both use
hadoop.tmp.dir paths, you may have lost your data completely and there
is no recovery possible now.
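
For reference, the relevant 1.x defaults resolve roughly as follows (taken
from the stock *-default.xml files, so treat them as indicative):

  hadoop.tmp.dir    = /tmp/hadoop-${user.name}
  dfs.name.dir      = ${hadoop.tmp.dir}/dfs/name
  fs.checkpoint.dir = ${hadoop.tmp.dir}/dfs/namesecondary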




-- 
Harsh J


Re: namenode not starting

2012-08-26 Thread Abhay Ratnaparkhi
Thank you Harsh,

I have set "dfs.name.dir" explicitly. I still don't know why the data loss
happened.


<property>
  <name>dfs.name.dir</name>
  <value>/wsadfs/${host.name}/name</value>
  <description>Determines where on the local filesystem the DFS name node
  should store the name table. If this is a comma-delimited list
  of directories then the name table is replicated in all of the
  directories, for redundancy.</description>
</property>


The secondary namenode was on the same machine as the namenode. Does this
affect anything, since the paths in "dfs.name.dir" were the same?
I have now configured another machine as the secondary namenode.
I have also formatted the filesystem, since I saw no way of recovering.

I have some questions.

1. Apart from setting up a secondary namenode, what other techniques are used
for namenode directory backups?
2. Are there any ways or tools to recover some of the data even if the
namenode crashes?

Regards,
Abhay






Re: namenode not starting

2012-08-26 Thread Mohammad Tariq
Hello Abhay,

Along with dfs.name.dir, also include dfs.data.dir in hdfs-site.xml.
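
For illustration, the corresponding hdfs-site.xml entry might be (path
hypothetical):

  <property>
    <name>dfs.data.dir</name>
    <value>/var/lib/hadoop/dfs/data</value>
  </property>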


-- 
Regards,
Mohammad Tariq


Re: namenode not starting

2012-08-27 Thread Harsh J
Abhay,

On Mon, Aug 27, 2012 at 11:19 AM, Abhay Ratnaparkhi wrote:
> Thank you Harsh,
>
> I have set "dfs.name.dir" explicitly. Still don't know why the data loss has
> happened.
>
> <property>
>   <name>dfs.name.dir</name>
>   <value>/wsadfs/${host.name}/name</value>
>   <description>Determines where on the local filesystem the DFS name node
>   should store the name table. If this is a comma-delimited list
>   of directories then the name table is replicated in all of the
>   directories, for redundancy.</description>
> </property>

Sorry, I missed that you had said NFS above. Is the data not present at all
in that directory?

> The secondary namenode was same as namenode. Does this affect  anyway since
> path of "dfs.name.dir" were same?
> I have now configured another machine as secondary namenode.
> I have now  formatted the filesystem since not seen any way of recovering.
>
> I have some questions.
>
> 1. Apart from setting secondary namenode what are the other techniques used
> for namenode directory backups?

Duplicate dfs.name.dir directories are what we use in production. That
is, at least two paths, one on the local FS and another NFS-mounted:

dfs.name.dir = /path/to/local/dfs/name,/path/to/nfs/dfs/name

This will give you two copies of good metadata, and loss of one can
still be handled.
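
In hdfs-site.xml form, that sketch would be (paths hypothetical):

  <property>
    <name>dfs.name.dir</name>
    <value>/path/to/local/dfs/name,/path/to/nfs/dfs/name</value>
  </property>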

> 2. Is there any way or tools to recover some of the data even if namenode
> crashes.

If there's any form of fsimage/edits left, a manual/automated recovery
can be made via tools such as oiv/oev and the NN's "-recover" flag, if
your version has it, or even with a hexdump and some time.
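
For example, the inspection and recovery steps mentioned above look roughly
like this (paths hypothetical; whether the launcher is bin/hadoop or bin/hdfs,
and whether -recover exists, depends on your release):

  # dump the on-disk metadata to text for inspection
  hadoop oiv -i /path/to/dfs/name/current/fsimage -o fsimage.txt
  hadoop oev -i /path/to/dfs/name/current/edits -o edits.xml
  # best-effort automated recovery from a corrupt or partial edit log
  hadoop namenode -recover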

If there's no trace of the fsimage files, no backups of them from any date,
and no SNN checkpoints from the past, then the metadata is all gone and
there's no recovery.

-- 
Harsh J


RE: namenode not starting

2012-08-27 Thread Leo Leung
I suggested a while back that Abhay should check and track down the data on
his NFS server / dir.

  Now, the ${host.name} looks odd.

  I hope this is something you (Abhay) edited out and not something that "IS"
in the xml file.

  If it is, please fix that. It needs to be a static name, not a variable.
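
For instance (hostname hypothetical), a static value would look like:

  <property>
    <name>dfs.name.dir</name>
    <value>/wsadfs/node01/name</value>
  </property>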



-Original Message-
From: Harsh J [mailto:ha...@cloudera.com] 
Sent: Monday, August 27, 2012 12:30 AM
To: user@hadoop.apache.org
Subject: Re: namenode not starting



Re: Namenode not starting

2018-10-16 Thread razo
Basically, if the datanodes crashed or did not stop gracefully, it is not a
big deal: the data is still inside them, and the location of all the block
files is kept on the namenode (metadata).
So I wouldn't worry about that; you can always kill them with the kill
command, based on the process name (use jps).
When the namenode crashes it is much more serious, but the metadata should
still be in the output directory (which you should have set during cluster
setup via dfs.namenode.name.dir in hdfs-site.xml) with all the checkpoint
files.
start-dfs.sh doesn't work to initialize the namenode, correct?
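
A rough sketch of that cleanup and restart (process names as printed by jps;
the daemon script shown is the Hadoop 2.x one, while 3.x uses
"hdfs --daemon start namenode" instead):

  jps                                     # list Hadoop JVMs, e.g. DataNode, NameNode
  kill <datanode-pid>                     # stop a stuck DataNode process
  sbin/hadoop-daemon.sh start namenode    # bring just the NameNode back up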





Hadoop 2.7.3 cluster namenode not starting

2017-04-27 Thread Bhushan Pathak
Hello

I have a 3-node cluster where I have installed hadoop 2.7.3. I have updated
core-site.xml, mapred-site.xml, slaves, hdfs-site.xml, yarn-site.xml,
hadoop-env.sh files with basic settings on all 3 nodes.

When I execute start-dfs.sh on the master node, the namenode does not
start. The logs contain the following error -
2017-04-27 14:17:57,166 ERROR org.apache.hadoop.hdfs.server.namenode.NameNode: Failed to start namenode.
java.net.BindException: Problem binding to [master:51150] java.net.BindException: Cannot assign requested address; For more details see: http://wiki.apache.org/hadoop/BindException
        at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
        at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62)
        at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
        at java.lang.reflect.Constructor.newInstance(Constructor.java:423)
        at org.apache.hadoop.net.NetUtils.wrapWithMessage(NetUtils.java:792)
        at org.apache.hadoop.net.NetUtils.wrapException(NetUtils.java:721)
        at org.apache.hadoop.ipc.Server.bind(Server.java:425)
        at org.apache.hadoop.ipc.Server$Listener.<init>(Server.java:574)
        at org.apache.hadoop.ipc.Server.<init>(Server.java:2215)
        at org.apache.hadoop.ipc.RPC$Server.<init>(RPC.java:951)
        at org.apache.hadoop.ipc.ProtobufRpcEngine$Server.<init>(ProtobufRpcEngine.java:534)
        at org.apache.hadoop.ipc.ProtobufRpcEngine.getServer(ProtobufRpcEngine.java:509)
        at org.apache.hadoop.ipc.RPC$Builder.build(RPC.java:796)
        at org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.<init>(NameNodeRpcServer.java:345)
        at org.apache.hadoop.hdfs.server.namenode.NameNode.createRpcServer(NameNode.java:674)
        at org.apache.hadoop.hdfs.server.namenode.NameNode.initialize(NameNode.java:647)
        at org.apache.hadoop.hdfs.server.namenode.NameNode.<init>(NameNode.java:812)
        at org.apache.hadoop.hdfs.server.namenode.NameNode.<init>(NameNode.java:796)
        at org.apache.hadoop.hdfs.server.namenode.NameNode.createNameNode(NameNode.java:1493)
        at org.apache.hadoop.hdfs.server.namenode.NameNode.main(NameNode.java:1559)
Caused by: java.net.BindException: Cannot assign requested address
        at sun.nio.ch.Net.bind0(Native Method)
        at sun.nio.ch.Net.bind(Net.java:433)
        at sun.nio.ch.Net.bind(Net.java:425)
        at sun.nio.ch.ServerSocketChannelImpl.bind(ServerSocketChannelImpl.java:223)
        at sun.nio.ch.ServerSocketAdaptor.bind(ServerSocketAdaptor.java:74)
        at org.apache.hadoop.ipc.Server.bind(Server.java:408)
        ... 13 more
2017-04-27 14:17:57,171 INFO org.apache.hadoop.util.ExitUtil: Exiting with status 1
2017-04-27 14:17:57,176 INFO org.apache.hadoop.hdfs.server.namenode.NameNode: SHUTDOWN_MSG:
/************************************************************
SHUTDOWN_MSG: Shutting down NameNode at master/1.1.1.1
************************************************************/



I have changed the port number multiple times; every time I get the same
error. How do I get past this?



Thanks
Bhushan Pathak




RE: Hadoop 2.7.3 cluster namenode not starting

2017-04-27 Thread Brahma Reddy Battula
Are you sure that you are starting it on the same machine (master)?

Please share /etc/hosts and the configuration files.


Regards
Brahma Reddy Battula

From: Bhushan Pathak [mailto:bhushan.patha...@gmail.com]
Sent: 27 April 2017 17:18
To: user@hadoop.apache.org
Subject: Fwd: Hadoop 2.7.3 cluster namenode not starting




Re: Hadoop 2.7.3 cluster namenode not starting

2017-04-27 Thread Bhushan Pathak
Yes, I'm running the command on the master node.

Attached are the config files & the hosts file. I have changed the IP
addresses as per company policy, so that the original IP addresses are not
shared.

The same config files & hosts file exist on all 3 nodes.

Thanks
Bhushan Pathak



[Attachment: core-site.xml]

<property>
  <name>fs.defaultFS</name>
  <value>hdfs://1.1.1.1:51150</value>
</property>

[Attachment: hadoop-env.sh (Bourne shell script)]

[Attachment: hdfs-site.xml]

<property>
  <name>dfs.namenode.name.dir</name>
  <value>file:/mnt/hadoop_store/datanode</value>
</property>
<property>
  <name>dfs.datanode.name.dir</name>
  <value>file:/mnt/hadoop_store/namenode</value>
</property>

[Attachment: hosts]

Re: Hadoop 2.7.3 cluster namenode not starting

2017-04-27 Thread Bhushan Pathak
Some additional info -
OS: CentOS 7
RAM: 8GB

Thanks
Bhushan Pathak




Re: Hadoop 2.7.3 cluster namenode not starting

2017-04-27 Thread Vinayakumar B
I think you might need to change the IP itself.

Try something similar to 192.168.1.20

-Vinay
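
A note on why changing the port never helps here: "Cannot assign requested
address" is raised when the kernel is asked to bind a listening socket to an
IP address that no local interface owns, so the port number is irrelevant. A
quick way to verify this outside Hadoop (a sketch; ncat is from the nmap-ncat
package on CentOS 7, and 1.1.1.1 stands in for the address from the shared
config):

# list the addresses the master actually owns
ip addr show | grep 'inet '

# try to bind the configured address:port with a plain TCP listener;
# if this fails the same way, the address (not Hadoop) is the problem
ncat -l 1.1.1.1 51150

If the listener fails identically, fs.defaultFS has to point at an address the
master really owns, or at a hostname that /etc/hosts maps to such an address.
Alternatively, dfs.namenode.rpc-bind-host can be set to 0.0.0.0 in
hdfs-site.xml so the NameNode RPC server binds all interfaces.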

On 27 Apr 2017 8:20 pm, "Bhushan Pathak" wrote:

> [quoted original post snipped - identical to the message quoted in full above]


Re: Hadoop 2.7.3 cluster namenode not starting

2017-04-27 Thread Hilmi Egemen Ciritoğlu
Can you check whether port 51150 is in use by another process:

sudo netstat -tulpn | grep '51150'

Regards,
Egemen
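
If netstat is not available (CentOS 7 minimal installs do not ship net-tools),
ss or lsof answer the same question; a small sketch:

# ss is part of iproute and is present by default on CentOS 7
sudo ss -tulpn | grep 51150

# lsof (may need installing) names the owning process directly
sudo lsof -i :51150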

2017-04-27 11:04 GMT+01:00 Bhushan Pathak:

> [quoted history snipped - identical to earlier messages in this thread]


RE: Hadoop 2.7.3 cluster namenode not starting

2017-04-27 Thread Brahma Reddy Battula
Please check “hostname -i”.

1) What’s configured in the “master” file? (You shared only the slave file.)

2) Are you able to “ping master”?

3) Can you configure it like this and check once?
1.1.1.1 master


Regards
Brahma Reddy Battula
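
For point 3 above, the /etc/hosts layout being suggested would look roughly
like this on every node; the slave names and 10.x addresses here are
placeholders, not the poster's real values:

127.0.0.1   localhost
10.0.0.1    master
10.0.0.2    slave1
10.0.0.3    slave2

With that in place, "hostname -i" on the master should print the same address
that the master entry (and fs.defaultFS) resolves to.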

From: Bhushan Pathak [mailto:bhushan.patha...@gmail.com]
Sent: 27 April 2017 18:16
To: Brahma Reddy Battula
Cc: user@hadoop.apache.org
Subject: Re: Hadoop 2.7.3 cluster namenode not starting

[quoted history snipped - identical to earlier messages in this thread]

Re: Hadoop 2.7.3 cluster namenode not starting

2017-04-27 Thread Lei Cao
Hi Mr. Bhushan,

Have you tried to format the namenode?
Here's the command:
hdfs namenode -format

I've encountered this problem where the namenode cannot be started; this
command easily fixed it for me.

Hope this can help you.

Sincerely,
Lei Cao
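
One caveat worth attaching to this suggestion: hdfs namenode -format wipes the
NameNode metadata, so on a cluster that already holds data it destroys the
filesystem namespace. It is only safe on a fresh install. As a sketch, the
first-start sequence on a brand-new cluster:

# on the master of a NEW cluster only; formatting discards existing HDFS metadata
hdfs namenode -format

# bring up HDFS and confirm the NameNode stays alive
start-dfs.sh
jps                      # NameNode should appear in the process list on the master
hdfs dfsadmin -report    # should list the live DataNodes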


On Apr 27, 2017, at 12:09, Brahma Reddy Battula <brahmareddy.batt...@huawei.com> wrote:

[quoted history snipped - identical to earlier messages in this thread]

Re: Hadoop 2.7.3 cluster namenode not starting

2017-04-27 Thread Bhushan Pathak
Hello All,

1. The slave & master can ping each other as well as use passwordless SSH
2. The actual IP starts with 10.x.x.x; the 1.1.1.1 in the shared config is a
placeholder, as I cannot share the actual IP
3. The namenode is formatted. I executed the 'hdfs namenode -format' again
just to rule out the possibility
4. I did not configure anything in the master file. I don't think Hadoop
2.7.3 has a master file to be configured
5. The netstat command [sudo netstat -tulpn | grep '51150' ] does not give
any output.

Even if I change the port number to a different one, say 52220, 5, I
still get the same error.

Thanks
Bhushan Pathak
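
Two details in this status narrow things down: grep finding nothing on the
port means no other process holds it, and a genuine port conflict would raise
"Address already in use" rather than "Cannot assign requested address". That
points at the address itself: the one "master" resolves to is likely not
assigned to any local interface. A quick cross-check on the master, assuming
stock CentOS 7 tooling:

hostname -i             # the address the machine's own hostname resolves to
ip addr show            # the addresses actually assigned to the interfaces
grep master /etc/hosts  # what the name "master" maps to

If the first and third commands print an address that the second does not
show, that mismatch is the bind failure.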

On Fri, Apr 28, 2017 at 7:52 AM, Lei Cao wrote:

> [quoted history snipped - identical to Lei Cao's message above]

Re: Hadoop 2.7.3 cluster namenode not starting

2017-05-02 Thread Sidharth Kumar
Can you check if the ports are open by running the telnet command?
Run the command below from the source machine to the destination machine and
check if this helps:

$telnet <destination-ip> <port>
Ex: $telnet 192.168.1.60 9000


Let's Hadooping!

Bests
Sidharth
Mob: +91 819799
LinkedIn: www.linkedin.com/in/sidharthkumar2792
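
For reference, roughly how the telnet outcomes map to causes (illustrative
output only; the exact wording varies between telnet builds):

$ telnet 192.168.1.60 9000
Trying 192.168.1.60...
Connected to 192.168.1.60.           # port open: something is listening
telnet: connect to address 192.168.1.60: Connection refused
                                     # host reachable, nothing listening on the port
telnet: Name or service not known    # the host argument did not resolve at all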

On 28-Apr-2017 10:32 AM, "Bhushan Pathak" wrote:

> [quoted history snipped - identical to Bhushan Pathak's message of 28 April above]

Re: Hadoop 2.7.3 cluster namenode not starting

2017-05-16 Thread Bhushan Pathak
Apologies for the delayed reply, was away due to some personal issues.

I tried the telnet command as well, but no luck. I get the response that
'Name or service not known'

Thanks
Bhushan Pathak

On Wed, May 3, 2017 at 7:48 AM, Sidharth Kumar wrote:

> [quoted history snipped - identical to Sidharth Kumar's message of 2 May above]

Re: Hadoop 2.7.3 cluster namenode not starting

2017-05-17 Thread Sidharth Kumar
Hi,

The error you mentioned below, 'Name or service not known', means the servers
are not able to resolve each other's names. Check the network configuration.

Sidharth
Mob: +91 819799
LinkedIn: www.linkedin.com/in/sidharthkumar2792
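
To pin that down: telnet prints "Name or service not known" before it even
attempts a connection, i.e. the host argument failed to resolve. A minimal
resolution check, reusing the node name from earlier in the thread:

getent hosts master   # shows how (and whether) the name resolves
cat /etc/hosts        # should carry one line per cluster node, e.g. 10.x.x.x master
ping -c 1 master      # reachability once the name resolves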

On 17-May-2017 12:13 PM, "Bhushan Pathak" wrote:

Apologies for the delayed reply, was away due to some personal issues.

I tried the telnet command as well, but no luck. I get the response that
'Name or service not known'

Thanks
Bhushan Pathak

On Wed, May 3, 2017 at 7:48 AM, Sidharth Kumar wrote:

> [quoted history snipped - identical to earlier messages in this thread]

Re: Hadoop 2.7.3 cluster namenode not starting

2017-05-18 Thread Bhushan Pathak
What configuration do you want me to check? Each of the three nodes can
access the others via password-less SSH and can ping the others' IPs.

Thanks
Bhushan Pathak

On Wed, May 17, 2017 at 10:11 PM, Sidharth Kumar <sidharthkumar2...@gmail.com> wrote:

> [quoted history snipped - identical to earlier messages in this thread]

Re: Hadoop 2.7.3 cluster namenode not starting

2017-05-18 Thread Donald Nelson

Hello Everyone,

I am planning to upgrade our Hadoop from v1.0.4 to 2.7.3, together with
HBase 0.94 to 1.3. Does anyone know of some steps that can help me?


Thanks in advance,

Donald Nelson
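
Not a complete answer, but the HDFS side of 1.0.4 to 2.7.3 is a stop-the-world
metadata upgrade, roughly like the sketch below (paths are placeholders; try
it on a test copy of the cluster first). The HBase jump is the harder part
(0.94 to 0.96+ was the "singularity" upgrade with its own migration tooling),
so the HBase upgrade guide needs to be followed separately.

stop-all.sh                                # stop the old 1.0.4 cluster
# back up the NameNode metadata (dfs.name.dir) before touching anything
tar czf namenode-meta-backup.tar.gz /path/to/dfs/name
# install the 2.7.3 binaries and migrated configs, then start HDFS in upgrade mode
start-dfs.sh -upgrade
# once the cluster is validated (the old state is kept until this step)
hdfs dfsadmin -finalizeUpgrade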


On 05/18/2017 12:39 PM, Bhushan Pathak wrote:
[quoted history snipped - identical to earlier messages in this thread]