Re: Namenode not starting

2018-10-16 Thread razo
Basically, if the datanodes crashed or did not stop gracefully, it is not a 
big deal: the data is still inside them, and the namenode holds the metadata 
that records where all the block files are.
So I would not worry about the datanodes; you can always kill them with the 
kill command, based on the process name (use jps).
When the namenode crashes it is much more serious, but the metadata should 
still be in the output directory (which you should have set during cluster 
setup via dfs.namenode.name.dir in hdfs-site.xml) along with all the 
checkpoint files.
start-dfs.sh doesn't work to initialize the namenode, correct?
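For the kill-by-process-name step, a small sketch (this assumes a stock JDK `jps`, whose output lines look like `PID ClassName`; the PIDs shown are made up):

```shell
# Pull the PIDs of DataNode JVMs out of jps-style output ("PID ClassName").
datanode_pids() {
  awk '/DataNode$/ {print $1}'
}

# On a live node you would pipe the real jps into it:
#   jps | datanode_pids | xargs -r kill        # SIGTERM first
#   jps | datanode_pids | xargs -r kill -9     # only if they ignore SIGTERM

# Demo with canned jps output:
printf '12345 DataNode\n6789 NameNode\n' | datanode_pids   # prints 12345
```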

On 2018/10/16 17:48:34, Atul Rajan  wrote: 
> Hello community,
> 
> My cluster was up until recently, but today my namenode suddenly went 
> down, and when I stop and start again the datanodes do not stop 
> gracefully. 
> Can you please guide me on how to bring up the namenode from the CLI? 
> 
> Sent from my iPhone
> -
> To unsubscribe, e-mail: user-unsubscr...@hadoop.apache.org
> For additional commands, e-mail: user-h...@hadoop.apache.org
> 
> 




Re: namenode not starting

2012-08-26 Thread Abhay Ratnaparkhi
Thank you Harsh,

I have set dfs.name.dir explicitly, so I still don't know why the data loss
happened.

<property>
  <name>dfs.name.dir</name>
  <value>/wsadfs/${host.name}/name</value>
  <description>Determines where on the local filesystem the DFS name node
  should store the name table.  If this is a comma-delimited list
  of directories then the name table is replicated in all of the
  directories, for redundancy.</description>
</property>
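The description above mentions the comma-delimited list; a sketch of how that redundancy might look, with a second copy on a separate mount (the /remote/namedir path is hypothetical):

```xml
<property>
  <name>dfs.name.dir</name>
  <value>/wsadfs/${host.name}/name,/remote/namedir/${host.name}/name</value>
  <description>Two directories: one local, one on a separate (e.g. NFS)
  mount, so the name table survives the loss of either copy.</description>
</property>
```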

The secondary namenode was on the same machine as the namenode. Does this
matter in any way, since the paths of dfs.name.dir were the same?
I have now configured another machine as the secondary namenode.
I have also formatted the filesystem, since I saw no way of recovering.

I have some questions.

1. Apart from setting up a secondary namenode, what other techniques are used
for namenode directory backups?
2. Is there any way, or are there tools, to recover some of the data even if
the namenode crashes?

Regards,
Abhay




On Sat, Aug 25, 2012 at 7:45 PM, Harsh J ha...@cloudera.com wrote:

 Abhay,

 I suspect that if you haven't set your dfs.name.dir explicitly, then
 you haven't set fs.checkpoint.dir either, and since both use
 hadoop.tmp.dir paths, you may have lost your data completely and there
 is no recovery possible now.
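 Harsh's point about the defaults can be checked from a shell. A sketch, assuming the 0.20-era default of hadoop.tmp.dir = /tmp/hadoop-${user.name}; the exact layout is an assumption:

```shell
# Where an unconfigured 0.20-era namenode keeps its image, given the
# default hadoop.tmp.dir of /tmp/hadoop-${user.name}:
default_name_dir() {
  printf '/tmp/hadoop-%s/dfs/name' "$1"
}

dir=$(default_name_dir "$(whoami)")
echo "would look for fsimage/edits under: $dir/current"
# ls "$dir/current"   # anything still there means recovery may be possible
```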

 On Fri, Aug 24, 2012 at 1:10 PM, Abhay Ratnaparkhi
 abhay.ratnapar...@gmail.com wrote:
  Hello,
 
  I was using cluster for long time and not formatted the namenode.
  I ran bin/stop-all.sh and bin/start-all.sh scripts only.
 
  I am using NFS for dfs.name.dir.
  hadoop.tmp.dir is a /tmp directory. I've not restarted the OS.  Any way
 to
  recover the data?
 
  Thanks,
  Abhay
 
 
  On Fri, Aug 24, 2012 at 1:01 PM, Bejoy KS bejoy.had...@gmail.com
 wrote:
 
  Hi Abhay
 
  What is the value for hadoop.tmp.dir or dfs.name.dir . If it was set to
  /tmp the contents would be deleted on a OS restart. You need to change
 this
  location before you start your NN.
  Regards
  Bejoy KS
 
  Sent from handheld, please excuse typos.
  
  From: Abhay Ratnaparkhi abhay.ratnapar...@gmail.com
  Date: Fri, 24 Aug 2012 12:58:41 +0530
  To: user@hadoop.apache.org
  ReplyTo: user@hadoop.apache.org
  Subject: namenode not starting
 
  Hello,
 
  I had a running hadoop cluster.
  I restarted it and after that namenode is unable to start. I am getting
  error saying that it's not formatted. :(
  Is it possible to recover the data on HDFS?
 
  2012-08-24 03:17:55,378 ERROR org.apache.hadoop.hdfs.server.namenode.FSNamesystem: FSNamesystem initialization failed.
  java.io.IOException: NameNode is not formatted.
      at org.apache.hadoop.hdfs.server.namenode.FSImage.recoverTransitionRead(FSImage.java:434)
      at org.apache.hadoop.hdfs.server.namenode.FSDirectory.loadFSImage(FSDirectory.java:110)
      at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.initialize(FSNamesystem.java:291)
      at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.<init>(FSNamesystem.java:270)
      at org.apache.hadoop.hdfs.server.namenode.NameNode.loadNamesystem(NameNode.java:271)
      at org.apache.hadoop.hdfs.server.namenode.NameNode.initialize(NameNode.java:303)
      at org.apache.hadoop.hdfs.server.namenode.NameNode.<init>(NameNode.java:433)
      at org.apache.hadoop.hdfs.server.namenode.NameNode.<init>(NameNode.java:421)
      at org.apache.hadoop.hdfs.server.namenode.NameNode.createNameNode(NameNode.java:1359)
      at org.apache.hadoop.hdfs.server.namenode.NameNode.main(NameNode.java:1368)
  2012-08-24 03:17:55,380 ERROR org.apache.hadoop.hdfs.server.namenode.NameNode: java.io.IOException: NameNode is not formatted.
      at org.apache.hadoop.hdfs.server.namenode.FSImage.recoverTransitionRead(FSImage.java:434)
      at org.apache.hadoop.hdfs.server.namenode.FSDirectory.loadFSImage(FSDirectory.java:110)
      at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.initialize(FSNamesystem.java:291)
      at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.<init>(FSNamesystem.java:270)
      at org.apache.hadoop.hdfs.server.namenode.NameNode.loadNamesystem(NameNode.java:271)
      at org.apache.hadoop.hdfs.server.namenode.NameNode.initialize(NameNode.java:303)
      at org.apache.hadoop.hdfs.server.namenode.NameNode.<init>(NameNode.java:433)
      at org.apache.hadoop.hdfs.server.namenode.NameNode.<init>(NameNode.java:421)
      at org.apache.hadoop.hdfs.server.namenode.NameNode.createNameNode(NameNode.java:1359)
      at org.apache.hadoop.hdfs.server.namenode.NameNode.main(NameNode.java:1368)
 
  Regards,
  Abhay
 
 
 



 --
 Harsh J



Re: namenode not starting

2012-08-24 Thread Nitin Pawar
Did you run the command bin/hadoop namenode -format before starting
the namenode?

On Fri, Aug 24, 2012 at 12:58 PM, Abhay Ratnaparkhi
abhay.ratnapar...@gmail.com wrote:
 Hello,

 I had a running hadoop cluster.
 I restarted it and after that namenode is unable to start. I am getting
 error saying that it's not formatted. :(
 Is it possible to recover the data on HDFS?


 Regards,
 Abhay





-- 
Nitin Pawar


Re: namenode not starting

2012-08-24 Thread vivek
Hi,
Have you run the command namenode -format?
Thanks & regards,
Vivek

On Fri, Aug 24, 2012 at 12:58 PM, Abhay Ratnaparkhi 
abhay.ratnapar...@gmail.com wrote:

 Hello,

 I had a running hadoop cluster.
 I restarted it and after that namenode is unable to start. I am getting
 error saying that it's not formatted. :(
 Is it possible to recover the data on HDFS?


 Regards,
 Abhay





-- 
Thanks and Regards,
VIVEK KOUL


Re: namenode not starting

2012-08-24 Thread Bejoy KS
Hi Abhay

What is the value for hadoop.tmp.dir or dfs.name.dir . If it was set to /tmp 
the contents would be deleted on a OS restart. You need to change this location 
before you start your NN.
Regards
Bejoy KS

Sent from handheld, please excuse typos.

-Original Message-
From: Abhay Ratnaparkhi abhay.ratnapar...@gmail.com
Date: Fri, 24 Aug 2012 12:58:41 
To: user@hadoop.apache.org
Reply-To: user@hadoop.apache.org
Subject: namenode not starting

Hello,

I had a running hadoop cluster.
I restarted it and after that namenode is unable to start. I am getting
error saying that it's not formatted. :(
Is it possible to recover the data on HDFS?


Regards,
Abhay



Re: namenode not starting

2012-08-24 Thread Abhay Ratnaparkhi
Hello,

I was using the cluster for a long time and had not formatted the namenode.
I only ran the bin/stop-all.sh and bin/start-all.sh scripts.

I am using NFS for dfs.name.dir.
hadoop.tmp.dir is a /tmp directory. I have not restarted the OS. Is there any
way to recover the data?

Thanks,
Abhay

On Fri, Aug 24, 2012 at 1:01 PM, Bejoy KS bejoy.had...@gmail.com wrote:

 **
 Hi Abhay

 What is the value for hadoop.tmp.dir or dfs.name.dir . If it was set to
 /tmp the contents would be deleted on a OS restart. You need to change this
 location before you start your NN.
 Regards
 Bejoy KS

 Sent from handheld, please excuse typos.
 --
 *From: * Abhay Ratnaparkhi abhay.ratnapar...@gmail.com
 *Date: *Fri, 24 Aug 2012 12:58:41 +0530
 *To: *user@hadoop.apache.org
 *ReplyTo: * user@hadoop.apache.org
 *Subject: *namenode not starting

 Hello,

 I had a running hadoop cluster.
 I restarted it and after that namenode is unable to start. I am getting
 error saying that it's not formatted. :(
 Is it possible to recover the data on HDFS?


 Regards,
 Abhay





Re: namenode not starting

2012-08-24 Thread Håvard Wahl Kongsgård
You should start with a reboot of the system.

A lesson to everyone, this is exactly why you should have a secondary
name node 
(http://wiki.apache.org/hadoop/FAQ#What_is_the_purpose_of_the_secondary_name-node.3F)
and run the namenode on a mirrored RAID-5/10 disk.
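A sketch of the checkpoint-side safeguard in hdfs-site.xml; the property name is from the 0.20-era configuration, and the paths are hypothetical:

```xml
<property>
  <name>fs.checkpoint.dir</name>
  <value>/data/namesecondary,/remote/namesecondary</value>
  <description>Where the secondary namenode stores its checkpoint images;
  again a comma-delimited list for redundancy.</description>
</property>
```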


-Håvard



On Fri, Aug 24, 2012 at 9:40 AM, Abhay Ratnaparkhi
abhay.ratnapar...@gmail.com wrote:
 Hello,

 I was using cluster for long time and not formatted the namenode.
 I ran bin/stop-all.sh and bin/start-all.sh scripts only.

 I am using NFS for dfs.name.dir.
 hadoop.tmp.dir is a /tmp directory. I've not restarted the OS.  Any way to
 recover the data?

 Thanks,
 Abhay


 On Fri, Aug 24, 2012 at 1:01 PM, Bejoy KS bejoy.had...@gmail.com wrote:

 Hi Abhay

 What is the value for hadoop.tmp.dir or dfs.name.dir . If it was set to
 /tmp the contents would be deleted on a OS restart. You need to change this
 location before you start your NN.
 Regards
 Bejoy KS

 Sent from handheld, please excuse typos.
 
 From: Abhay Ratnaparkhi abhay.ratnapar...@gmail.com
 Date: Fri, 24 Aug 2012 12:58:41 +0530
 To: user@hadoop.apache.org
 ReplyTo: user@hadoop.apache.org
 Subject: namenode not starting

 Hello,

 I had a running hadoop cluster.
 I restarted it and after that namenode is unable to start. I am getting
 error saying that it's not formatted. :(
 Is it possible to recover the data on HDFS?


 Regards,
 Abhay






-- 
Håvard Wahl Kongsgård
Faculty of Medicine 
Department of Mathematical Sciences
NTNU

http://havard.security-review.net/


RE: namenode not starting

2012-08-24 Thread Siddharth Tiwari

Hi Abhay,

I totally agree with Bejoy. Can you paste your mapred-site.xml and 
hdfs-site.xml content here?

**

Cheers !!!

Siddharth Tiwari

Have a refreshing day !!!
Every duty is holy, and devotion to duty is the highest form of worship of 
God.” 

Maybe other people will try to limit me but I don't limit myself


 From: lle...@ddn.com
 To: user@hadoop.apache.org
 Subject: RE: namenode not starting
 Date: Fri, 24 Aug 2012 16:38:01 +
 
 Abhay,
   Sounds like your namenode cannot find the metadata information it needs to 
 start (the path/current | image | *checkpoints etc.)
 
   Basically, if you cannot locate that data locally or on your NFS server,  
 your cluster is busted.
 
   But, let's be optimistic about this. 
 
  There is a chance that your NFS server is down or the mounted path is lost.
 
   If it is NFS mounted (as you suggested), check that your host still has 
 that path mounted (from the proper NFS server).
   ( [shell] mount ) can tell. 
   * Obviously, if you originally mounted from foo:/mydata and now do 
 bar:/mydata, you'll need to do some digging to find which NFS server it 
 was writing to before.
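 The mount check can be scripted; a sketch (the foo:/mydata names continue the hypothetical example above, and the mount output format is assumed to be `server:/export on /mountpoint type nfs (...)`):

```shell
# Print which server an NFS mount point is served from, given mount(8)-style
# lines like "foo:/mydata on /mydata type nfs (rw,...)".
nfs_server_of() {
  awk -v mp="$1" '$3 == mp && $5 ~ /^nfs/ {split($1, a, ":"); print a[1]}'
}

# Live check:   mount | nfs_server_of /mydata
# Demo with a canned mount line:
printf 'foo:/mydata on /mydata type nfs (rw)\n' | nfs_server_of /mydata   # prints foo
```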
 
  If you fail to locate your namenode metadata (locally or on any of your NFS 
 servers), either because the NFS server decided to become a black hole or 
 because someone or something removed it,
 
   and you don't have a backup of your namenode (tape or Secondary Namenode),  
   then I think you are in a world of hurt there.
 
   In theory you can read the blocks on the DNs and try to recover some of your 
 data (assuming it is not compressed with a codec).
 Hmm.. anyone know about recovery services? (^^)
 
 
 
 -Original Message-
 From: Håvard Wahl Kongsgård [mailto:haavard.kongsga...@gmail.com] 
 Sent: Friday, August 24, 2012 5:38 AM
 To: user@hadoop.apache.org
 Subject: Re: namenode not starting
 
 You should start with a reboot of the system.
 
 A lesson to everyone, this is exactly why you should have a secondary name 
 node 
 (http://wiki.apache.org/hadoop/FAQ#What_is_the_purpose_of_the_secondary_name-node.3F)
 and run the namenode a mirrored RAID-5/10 disk.
 
 
 -Håvard
 
 
 
 On Fri, Aug 24, 2012 at 9:40 AM, Abhay Ratnaparkhi 
 abhay.ratnapar...@gmail.com wrote:
  Hello,
 
  I was using cluster for long time and not formatted the namenode.
  I ran bin/stop-all.sh and bin/start-all.sh scripts only.
 
  I am using NFS for dfs.name.dir.
  hadoop.tmp.dir is a /tmp directory. I've not restarted the OS.  Any 
  way to recover the data?
 
  Thanks,
  Abhay
 
 
  On Fri, Aug 24, 2012 at 1:01 PM, Bejoy KS bejoy.had...@gmail.com wrote:
 
  Hi Abhay
 
  What is the value for hadoop.tmp.dir or dfs.name.dir . If it was set 
  to /tmp the contents would be deleted on a OS restart. You need to 
  change this location before you start your NN.
  Regards
  Bejoy KS
 
  Sent from handheld, please excuse typos.
  
  From: Abhay Ratnaparkhi abhay.ratnapar...@gmail.com
  Date: Fri, 24 Aug 2012 12:58:41 +0530
  To: user@hadoop.apache.org
  ReplyTo: user@hadoop.apache.org
  Subject: namenode not starting
 
  Hello,
 
  I had a running hadoop cluster.
  I restarted it and after that namenode is unable to start. I am 
  getting error saying that it's not formatted. :( Is it possible to 
  recover the data on HDFS?
 

Re: namenode not starting

2012-04-10 Thread tousif
Shaharyar Khan,

Can you include your namenode log details?

On Tue, Apr 10, 2012 at 6:30 PM, shaharyar khan shaharyar.khan...@gmail.com
 wrote:


 When I try to start Hadoop, all of its services (TaskTracker,
 JobTracker, DataNode, SecondaryNameNode) are running except the NameNode, so
 HBase is unable to find Hadoop, as the namenode is not up. Please guide me on
 why this is happening. I have checked all the configuration files as well as
 the iptables / firewall configuration for that port. Everything is OK, but I
 am unable to figure out why this is happening.
 --
 View this message in context:
 http://old.nabble.com/namenode-not-starting-tp33661239p33661239.html
 Sent from the Hadoop core-user mailing list archive at Nabble.com.




-- 

Tousif
+918050227279


Re: Namenode not starting

2011-09-01 Thread abhishek sharma
Hi Hailong,

I have installed JDK and set JAVA_HOME correctly (as far as I know).

Output of java -version is:
java version "1.6.0_04"
Java(TM) SE Runtime Environment (build 1.6.0_04-b12)
Java HotSpot(TM) Server VM (build 10.0-b19, mixed mode)

I also have another version installed 1.6.0_27 but get same error with it.

Abhishek

On Thu, Sep 1, 2011 at 4:00 PM, hailong.yang1115
hailong.yang1...@gmail.com wrote:
 Hi abhishek,

 Have you successfully installed java virtual machine like sun JDK before 
 running Hadoop? Or maybe you forget to configure the environment variable 
 JAVA_HOME? What is the output of the command 'java -version'?

 Regards

 Hailong




 ***
 * Hailong Yang, PhD. Candidate
 * Sino-German Joint Software Institute,
 * School of Computer Science & Engineering, Beihang University
 * Phone: (86-010)82315908
 * Email: hailong.yang1...@gmail.com
 * Address: G413, New Main Building in Beihang University,
 *              No.37 XueYuan Road,HaiDian District,
 *              Beijing,P.R.China,100191
 ***

 From: abhishek sharma
 Date: 2011-09-02 03:51
 To: common-user; common-dev
 Subject: Namenode not starting
 Hi all,

 I am trying to install Hadoop (release 0.20.203) on a machine with CentOS.

 When I try to start HDFS, I get the following error.

 machine-name: Unrecognized option: -jvm
 machine-name: Could not create the Java virtual machine.

 Any idea what might be the problem?

 Thanks,
 Abhishek


Re: Namenode not starting

2011-09-01 Thread abhishek sharma
Actually, I found the reason. I am running HDFS as root and there is
a bug that has recently been fixed.

https://issues.apache.org/jira/browse/HDFS-1943

Thanks,
Abhishek

On Thu, Sep 1, 2011 at 6:25 PM, Ravi Prakash ravihad...@gmail.com wrote:
 Hi Abhishek,

 Try reading through the shell scripts before posting. They are short and
 simple enough that you should be able to debug them quite easily. I've seen
 the same error many times.

 Do you see JAVA_HOME set when you $ssh localhost?

 Also which command are you using to start the daemons?
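 Ravi's JAVA_HOME question matters because the start scripts run their commands through a non-interactive ssh shell, which may not read the same rc files as your login shell. A minimal sketch (the /opt/jdk path is made up):

```shell
# Your interactive shell may see JAVA_HOME while a fresh non-interactive
# shell (what ssh gives the start scripts) does not:
sh -c 'echo "non-interactive JAVA_HOME=${JAVA_HOME:-<unset>}"'

# The real test, against the host the scripts will ssh into:
#   ssh localhost 'echo $JAVA_HOME'
# If that prints nothing, set it in conf/hadoop-env.sh instead:
#   export JAVA_HOME=/opt/jdk
```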

 Fight on,
 Ravi

 On Thu, Sep 1, 2011 at 4:35 PM, abhishek sharma absha...@usc.edu wrote:

 Hi Hailong,

 I have installed JDK and set JAVA_HOME correctly (as far as I know).

 Output of java -version is:
 java version 1.6.0_04
 Java(TM) SE Runtime Environment (build 1.6.0_04-b12)
 Java HotSpot(TM) Server VM (build 10.0-b19, mixed mode)

 I also have another version installed 1.6.0_27 but get same error with
 it.

 Abhishek

 On Thu, Sep 1, 2011 at 4:00 PM, hailong.yang1115
 hailong.yang1...@gmail.com wrote:
  Hi abhishek,
 
  Have you successfully installed java virtual machine like sun JDK before
 running Hadoop? Or maybe you forget to configure the environment variable
 JAVA_HOME? What is the output of the command 'java -version'?
 
  Regards
 
  Hailong
 
 
 
 
  ***
  * Hailong Yang, PhD. Candidate
  * Sino-German Joint Software Institute,
  * School of Computer ScienceEngineering, Beihang University
  * Phone: (86-010)82315908
  * Email: hailong.yang1...@gmail.com
  * Address: G413, New Main Building in Beihang University,
  *              No.37 XueYuan Road,HaiDian District,
  *              Beijing,P.R.China,100191
  ***
 
  From: abhishek sharma
  Date: 2011-09-02 03:51
  To: common-user; common-dev
  Subject: Namenode not starting
  Hi all,
 
  I am trying to install Hadoop (release 0.20.203) on a machine with
 CentOS.
 
  When I try to start HDFS, I get the following error.
 
  machine-name: Unrecognized option: -jvm
  machine-name: Could not create the Java virtual machine.
 
  Any idea what might be the problem?
 
  Thanks,
  Abhishek




Re: NameNode is starting with exceptions whenever its trying to start datanodes

2011-06-07 Thread Steve Loughran

On 06/07/2011 10:50 AM, praveenesh kumar wrote:

The logs say


The ratio of reported blocks 0.9091 has not reached the threshold 0.9990.
Safe mode will be turned off automatically.



not enough datanodes reported in, or they are missing data
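The logged ratio is just reported blocks divided by expected blocks, so it hints at how much is missing; 10 of 11 blocks would give exactly 0.9091 (the counts are illustrative). The dfsadmin commands below are the 0.20-era CLI:

```shell
# 10 of 11 expected blocks reported reproduces the logged ratio:
awk 'BEGIN { printf "%.4f\n", 10 / 11 }'   # 0.9091

# Inspect safe mode, and leave it manually only once you accept that the
# unreported blocks are gone:
#   hadoop dfsadmin -safemode get
#   hadoop dfsadmin -safemode leave
```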


Re: NameNode is starting with exceptions whenever its trying to start datanodes

2011-06-07 Thread praveenesh kumar
But I don't have any data on my HDFS. I had some data before, but I have
since deleted all the files from HDFS.
I don't know why the datanodes are taking so long to start. I guess this
exception is what makes the startup slower.

On Tue, Jun 7, 2011 at 3:34 PM, Steve Loughran ste...@apache.org wrote:

 On 06/07/2011 10:50 AM, praveenesh kumar wrote:

 The logs say


  The ratio of reported blocks 0.9091 has not reached the threshold 0.9990.
 Safe mode will be turned off automatically.



 not enough datanodes reported in, or they are missing data



Re: NameNode is starting with exceptions whenever its trying to start datanodes

2011-06-07 Thread jagaran das
Check two things:

1. Some of your data node is getting connected, that means password less SSH is 
not working within nodes.
2. Then Clear the Dir where you data is persisted in data nodes and format the 
namenode.

It should definitely work then

Cheers,
Jagaran 




From: praveenesh kumar praveen...@gmail.com
To: common-user@hadoop.apache.org
Sent: Tue, 7 June, 2011 3:14:01 AM
Subject: Re: NameNode is starting with exceptions whenever its trying to start 
datanodes

But I dnt have any data on my HDFS.. I was having some date before.. but now
I deleted all the files from HDFS..
I dnt know why datanodes are taking time to start.. I guess because of this
exception its taking more time to start.

On Tue, Jun 7, 2011 at 3:34 PM, Steve Loughran ste...@apache.org wrote:

 On 06/07/2011 10:50 AM, praveenesh kumar wrote:

 The logs say


  The ratio of reported blocks 0.9091 has not reached the threshold 0.9990.
 Safe mode will be turned off automatically.



 not enough datanodes reported in, or they are missing data



Re: NameNode is starting with exceptions whenever its trying to start datanodes

2011-06-07 Thread praveenesh kumar
1. Some of your data node is getting connected, that means password less
SSH is
not working within nodes.

So you mean that passwordless SSH should also be set up among the datanodes?
In Hadoop we normally set up passwordless SSH from the namenode to the data nodes.
Do we have to set up passwordless SSH among the datanodes as well?
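For completeness, the usual one-time namenode-to-worker setup; the host and user names are hypothetical, and whether worker-to-worker SSH is also needed depends on the scripts in use, which is exactly the question above:

```shell
# Generate a passphrase-less key on the namenode if one doesn't exist yet:
mkdir -p "$HOME/.ssh"
[ -f "$HOME/.ssh/id_rsa" ] || ssh-keygen -t rsa -N '' -f "$HOME/.ssh/id_rsa" -q
echo "key ready: $HOME/.ssh/id_rsa"

# Copy it to each datanode (hypothetical hosts), then verify that no
# password prompt appears:
#   ssh-copy-id hadoop@datanode1
#   ssh datanode1 true && echo "passwordless ok"
```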

On Tue, Jun 7, 2011 at 11:15 PM, jagaran das jagaran_...@yahoo.co.inwrote:

 Check two things:

 1. Some of your data node is getting connected, that means password less
 SSH is
 not working within nodes.
 2. Then Clear the Dir where you data is persisted in data nodes and format
 the
 namenode.

 It should definitely work then

 Cheers,
 Jagaran



 
 From: praveenesh kumar praveen...@gmail.com
 To: common-user@hadoop.apache.org
 Sent: Tue, 7 June, 2011 3:14:01 AM
 Subject: Re: NameNode is starting with exceptions whenever its trying to
 start
 datanodes

 But I dnt have any data on my HDFS.. I was having some date before.. but
 now
 I deleted all the files from HDFS..
 I dnt know why datanodes are taking time to start.. I guess because of this
 exception its taking more time to start.

 On Tue, Jun 7, 2011 at 3:34 PM, Steve Loughran ste...@apache.org wrote:

  On 06/07/2011 10:50 AM, praveenesh kumar wrote:
 
  The logs say
 
 
   The ratio of reported blocks 0.9091 has not reached the threshold
 0.9990.
  Safe mode will be turned off automatically.
 
 
 
  not enough datanodes reported in, or they are missing data
 



Re: NameNode is starting with exceptions whenever its trying to start datanodes

2011-06-07 Thread jagaran das
Sorry, I meant: some of your data nodes are not getting connected.




From: jagaran das jagaran_...@yahoo.co.in
To: common-user@hadoop.apache.org
Sent: Tue, 7 June, 2011 10:45:59 AM
Subject: Re: NameNode is starting with exceptions whenever its trying to start 
datanodes

Check two things:

1. Some of your data node is getting connected, that means password less SSH is 
not working within nodes.
2. Then Clear the Dir where you data is persisted in data nodes and format the 
namenode.

It should definitely work then

Cheers,
Jagaran 




From: praveenesh kumar praveen...@gmail.com
To: common-user@hadoop.apache.org
Sent: Tue, 7 June, 2011 3:14:01 AM
Subject: Re: NameNode is starting with exceptions whenever its trying to start 
datanodes

But I dnt have any data on my HDFS.. I was having some date before.. but now
I deleted all the files from HDFS..
I dnt know why datanodes are taking time to start.. I guess because of this
exception its taking more time to start.

On Tue, Jun 7, 2011 at 3:34 PM, Steve Loughran ste...@apache.org wrote:

 On 06/07/2011 10:50 AM, praveenesh kumar wrote:

 The logs say


  The ratio of reported blocks 0.9091 has not reached the threshold 0.9990.
 Safe mode will be turned off automatically.



 not enough datanodes reported in, or they are missing data



Re: NameNode is starting with exceptions whenever its trying to start datanodes

2011-06-07 Thread praveenesh kumar
Sorry I mean Some of your data nodes are not  getting connected..

So are you sticking with your suggestion that I should set up passwordless
SSH for all the datanodes?
Because on my Hadoop cluster, all the datanodes are running fine.



On Tue, Jun 7, 2011 at 11:32 PM, jagaran das jagaran_...@yahoo.co.inwrote:

 Sorry I mean Some of your data nodes are not  getting connected



 
 From: jagaran das jagaran_...@yahoo.co.in
 To: common-user@hadoop.apache.org
 Sent: Tue, 7 June, 2011 10:45:59 AM
  Subject: Re: NameNode is starting with exceptions whenever its trying to
 start
 datanodes

 Check two things:

 1. Some of your data node is getting connected, that means password less
 SSH is
 not working within nodes.
 2. Then Clear the Dir where you data is persisted in data nodes and format
 the
 namenode.

 It should definitely work then

 Cheers,
 Jagaran



 
 From: praveenesh kumar praveen...@gmail.com
 To: common-user@hadoop.apache.org
 Sent: Tue, 7 June, 2011 3:14:01 AM
 Subject: Re: NameNode is starting with exceptions whenever its trying to
 start
 datanodes

 But I dnt have any data on my HDFS.. I was having some date before.. but
 now
 I deleted all the files from HDFS..
 I dnt know why datanodes are taking time to start.. I guess because of this
 exception its taking more time to start.

 On Tue, Jun 7, 2011 at 3:34 PM, Steve Loughran ste...@apache.org wrote:

  On 06/07/2011 10:50 AM, praveenesh kumar wrote:
 
  The logs say
 
 
   The ratio of reported blocks 0.9091 has not reached the threshold
 0.9990.
  Safe mode will be turned off automatically.
 
 
 
  not enough datanodes reported in, or they are missing data
 



Re: NameNode is starting with exceptions whenever its trying to start datanodes

2011-06-07 Thread jagaran das
Yes, correct.
Passwordless SSH between your namenode and some of your datanodes is not
working.





From: praveenesh kumar praveen...@gmail.com
To: common-user@hadoop.apache.org
Sent: Tue, 7 June, 2011 10:56:08 AM
Subject: Re: NameNode is starting with exceptions whenever its trying to start 
datanodes

1. Some of your data node is getting connected, that means password less
SSH is
not working within nodes.

So you mean that passwordless SSH should be there among datanodes also.
In hadoop we used to do password less SSH from namenode to data nodes
Do we have to do passwordless ssh among datanodes also ???

On Tue, Jun 7, 2011 at 11:15 PM, jagaran das jagaran_...@yahoo.co.inwrote:

 Check two things:

 1. Some of your data node is getting connected, that means password less
 SSH is
 not working within nodes.
 2. Then Clear the Dir where you data is persisted in data nodes and format
 the
 namenode.

 It should definitely work then

 Cheers,
 Jagaran



 
 From: praveenesh kumar praveen...@gmail.com
 To: common-user@hadoop.apache.org
 Sent: Tue, 7 June, 2011 3:14:01 AM
 Subject: Re: NameNode is starting with exceptions whenever its trying to
 start
 datanodes

 But I dnt have any data on my HDFS.. I was having some date before.. but
 now
 I deleted all the files from HDFS..
 I dnt know why datanodes are taking time to start.. I guess because of this
 exception its taking more time to start.

 On Tue, Jun 7, 2011 at 3:34 PM, Steve Loughran ste...@apache.org wrote:

  On 06/07/2011 10:50 AM, praveenesh kumar wrote:
 
  The logs say
 
 
   The ratio of reported blocks 0.9091 has not reached the threshold
 0.9990.
  Safe mode will be turned off automatically.
 
 
 
  not enough datanodes reported in, or they are missing data
 



Re: NameNode is starting with exceptions whenever its trying to start datanodes

2011-06-07 Thread jagaran das
Cleaning the data directory of each datanode and formatting the namenode may
help you.





From: praveenesh kumar praveen...@gmail.com
To: common-user@hadoop.apache.org
Sent: Tue, 7 June, 2011 11:05:03 AM
Subject: Re: NameNode is starting with exceptions whenever its trying to start 
datanodes

Sorry I mean Some of your data nodes are not  getting connected..

So are you sticking with your solution that you are saying to me.. to go for
passwordless ssh for all datanodes..
because for my hadoop.. all datanodes are running fine



On Tue, Jun 7, 2011 at 11:32 PM, jagaran das jagaran_...@yahoo.co.inwrote:

 Sorry I mean Some of your data nodes are not  getting connected



 
 From: jagaran das jagaran_...@yahoo.co.in
 To: common-user@hadoop.apache.org
 Sent: Tue, 7 June, 2011 10:45:59 AM
  Subject: Re: NameNode is starting with exceptions whenever its trying to
 start
 datanodes

 Check two things:

 1. Some of your data node is getting connected, that means password less
 SSH is
 not working within nodes.
 2. Then Clear the Dir where you data is persisted in data nodes and format
 the
 namenode.

 It should definitely work then

 Cheers,
 Jagaran



 
 From: praveenesh kumar praveen...@gmail.com
 To: common-user@hadoop.apache.org
 Sent: Tue, 7 June, 2011 3:14:01 AM
 Subject: Re: NameNode is starting with exceptions whenever its trying to
 start
 datanodes

 But I dnt have any data on my HDFS.. I was having some date before.. but
 now
 I deleted all the files from HDFS..
 I dnt know why datanodes are taking time to start.. I guess because of this
 exception its taking more time to start.

 On Tue, Jun 7, 2011 at 3:34 PM, Steve Loughran ste...@apache.org wrote:

  On 06/07/2011 10:50 AM, praveenesh kumar wrote:
 
  The logs say
 
 
   The ratio of reported blocks 0.9091 has not reached the threshold
 0.9990.
  Safe mode will be turned off automatically.
 
 
 
  not enough datanodes reported in, or they are missing data
 



Re: NameNode is starting with exceptions whenever its trying to start datanodes

2011-06-07 Thread praveenesh kumar
Dude, passwordless SSH between my namenode and my datanodes is working fine!

My question is:

*Are you talking about passwordless SSH between the datanodes,*

 or

*are you talking about passwordless SSH between the datanodes and the namenode?*

Because if you mean the second case, that is already working fine. As I
already mentioned, all my datanodes in Hadoop are up; I can see all of them
using hadoop fsck / as well as in the HDFS web UI.
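
hadoop fsck is indeed the quickest way to confirm datanode and block health
from the command line. A dry-run sketch of the typical checks (set RUN=
empty to execute on a real cluster):

```shell
#!/bin/sh
# DRY-RUN by default: commands are echoed, not executed (set RUN= to run).
RUN=${RUN:-echo}

# Overall filesystem health, with per-file block locations.
$RUN hadoop fsck / -files -blocks -locations

# Capacity report plus the list of live and dead datanodes.
$RUN hadoop dfsadmin -report
```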



On Tue, Jun 7, 2011 at 11:35 PM, jagaran das jagaran_...@yahoo.co.inwrote:

 Yes Correct
 Password less SSH between your name node and some of your datanode is not
 working




 
 From: praveenesh kumar praveen...@gmail.com
 To: common-user@hadoop.apache.org
 Sent: Tue, 7 June, 2011 10:56:08 AM
  Subject: Re: NameNode is starting with exceptions whenever its trying to
 start
 datanodes

 1. Some of your data node is getting connected, that means password less
 SSH is
 not working within nodes.

 So you mean that passwordless SSH should be there among datanodes also.
 In hadoop we used to do password less SSH from namenode to data nodes
 Do we have to do passwordless ssh among datanodes also ???

 On Tue, Jun 7, 2011 at 11:15 PM, jagaran das jagaran_...@yahoo.co.in
 wrote:

  Check two things:
 
  1. Some of your data node is getting connected, that means password less
  SSH is
  not working within nodes.
  2. Then Clear the Dir where you data is persisted in data nodes and
 format
  the
  namenode.
 
  It should definitely work then
 
  Cheers,
  Jagaran
 
 
 
  
  From: praveenesh kumar praveen...@gmail.com
  To: common-user@hadoop.apache.org
  Sent: Tue, 7 June, 2011 3:14:01 AM
  Subject: Re: NameNode is starting with exceptions whenever its trying to
  start
  datanodes
 
  But I dnt have any data on my HDFS.. I was having some date before.. but
  now
  I deleted all the files from HDFS..
  I dnt know why datanodes are taking time to start.. I guess because of
 this
  exception its taking more time to start.
 
  On Tue, Jun 7, 2011 at 3:34 PM, Steve Loughran ste...@apache.org
 wrote:
 
   On 06/07/2011 10:50 AM, praveenesh kumar wrote:
  
   The logs say
  
  
The ratio of reported blocks 0.9091 has not reached the threshold
  0.9990.
   Safe mode will be turned off automatically.
  
  
  
   not enough datanodes reported in, or they are missing data
  
 



Re: NameNode is starting with exceptions whenever its trying to start datanodes

2011-06-07 Thread praveenesh kumar
How shall I clean my data dir?
By cleaning the data dir, do you mean deleting all files from HDFS?

Is there any special command to clean all the datanodes in one step?
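
There is no single built-in HDFS command that wipes every datanode's local
storage; the usual trick is a small ssh loop driven by the conf/slaves file.
A dry-run sketch (set RUN= empty to execute; the data directory is a
placeholder for your dfs.data.dir, and the real command destroys all block
data):

```shell
#!/bin/sh
# DRY-RUN by default: commands are echoed, not executed. Set RUN= to run
# them for real. WARNING: the real commands destroy all HDFS block data.
RUN=${RUN:-echo}
SLAVES_FILE=${SLAVES_FILE:-conf/slaves}   # list of datanode hostnames
DATA_DIR=/var/hadoop/dfs/data             # placeholder: your dfs.data.dir

if [ -f "$SLAVES_FILE" ]; then
  while read -r host; do
    # Quote the remote command so the glob expands on the datanode.
    [ -n "$host" ] && $RUN ssh "$host" "rm -rf $DATA_DIR/*"
  done < "$SLAVES_FILE"
fi
```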

On Tue, Jun 7, 2011 at 11:46 PM, jagaran das jagaran_...@yahoo.co.inwrote:

 Cleaning data from data dir of datanode and formatting the name node may
 help
 you




 
 From: praveenesh kumar praveen...@gmail.com
 To: common-user@hadoop.apache.org
 Sent: Tue, 7 June, 2011 11:05:03 AM
  Subject: Re: NameNode is starting with exceptions whenever its trying to
 start
 datanodes

 Sorry I mean Some of your data nodes are not  getting connected..

 So are you sticking with your solution that you are saying to me.. to go
 for
 passwordless ssh for all datanodes..
 because for my hadoop.. all datanodes are running fine



 On Tue, Jun 7, 2011 at 11:32 PM, jagaran das jagaran_...@yahoo.co.in
 wrote:

  Sorry I mean Some of your data nodes are not  getting connected
 
 
 
  
  From: jagaran das jagaran_...@yahoo.co.in
  To: common-user@hadoop.apache.org
  Sent: Tue, 7 June, 2011 10:45:59 AM
   Subject: Re: NameNode is starting with exceptions whenever its trying to
  start
  datanodes
 
  Check two things:
 
  1. Some of your data node is getting connected, that means password less
  SSH is
  not working within nodes.
  2. Then Clear the Dir where you data is persisted in data nodes and
 format
  the
  namenode.
 
  It should definitely work then
 
  Cheers,
  Jagaran
 
 
 
  
  From: praveenesh kumar praveen...@gmail.com
  To: common-user@hadoop.apache.org
  Sent: Tue, 7 June, 2011 3:14:01 AM
  Subject: Re: NameNode is starting with exceptions whenever its trying to
  start
  datanodes
 
  But I dnt have any data on my HDFS.. I was having some date before.. but
  now
  I deleted all the files from HDFS..
  I dnt know why datanodes are taking time to start.. I guess because of
 this
  exception its taking more time to start.
 
  On Tue, Jun 7, 2011 at 3:34 PM, Steve Loughran ste...@apache.org
 wrote:
 
   On 06/07/2011 10:50 AM, praveenesh kumar wrote:
  
   The logs say
  
  
The ratio of reported blocks 0.9091 has not reached the threshold
  0.9990.
   Safe mode will be turned off automatically.
  
  
  
   not enough datanodes reported in, or they are missing data
  
 



Re: NameNode is starting with exceptions whenever its trying to start datanodes

2011-06-07 Thread jagaran das
I mean running rm -rf * in the datanode data directory.

These are the debugging steps that I followed.





From: praveenesh kumar praveen...@gmail.com
To: common-user@hadoop.apache.org
Sent: Tue, 7 June, 2011 11:19:50 AM
Subject: Re: NameNode is starting with exceptions whenever its trying to start 
datanodes

how shall I clean my data dir ???
Cleaning data dir .. u mean to say is deleting all files from hdfs ???..

is there any special command to clean all the datanodes in one step ???

On Tue, Jun 7, 2011 at 11:46 PM, jagaran das jagaran_...@yahoo.co.inwrote:

 Cleaning data from data dir of datanode and formatting the name node may
 help
 you




 
 From: praveenesh kumar praveen...@gmail.com
 To: common-user@hadoop.apache.org
 Sent: Tue, 7 June, 2011 11:05:03 AM
  Subject: Re: NameNode is starting with exceptions whenever its trying to
 start
 datanodes

 Sorry I mean Some of your data nodes are not  getting connected..

 So are you sticking with your solution that you are saying to me.. to go
 for
 passwordless ssh for all datanodes..
 because for my hadoop.. all datanodes are running fine



 On Tue, Jun 7, 2011 at 11:32 PM, jagaran das jagaran_...@yahoo.co.in
 wrote:

  Sorry I mean Some of your data nodes are not  getting connected
 
 
 
  
  From: jagaran das jagaran_...@yahoo.co.in
  To: common-user@hadoop.apache.org
  Sent: Tue, 7 June, 2011 10:45:59 AM
   Subject: Re: NameNode is starting with exceptions whenever its trying to
  start
  datanodes
 
  Check two things:
 
  1. Some of your data node is getting connected, that means password less
  SSH is
  not working within nodes.
  2. Then Clear the Dir where you data is persisted in data nodes and
 format
  the
  namenode.
 
  It should definitely work then
 
  Cheers,
  Jagaran
 
 
 
  
  From: praveenesh kumar praveen...@gmail.com
  To: common-user@hadoop.apache.org
  Sent: Tue, 7 June, 2011 3:14:01 AM
  Subject: Re: NameNode is starting with exceptions whenever its trying to
  start
  datanodes
 
  But I dnt have any data on my HDFS.. I was having some date before.. but
  now
  I deleted all the files from HDFS..
  I dnt know why datanodes are taking time to start.. I guess because of
 this
  exception its taking more time to start.
 
  On Tue, Jun 7, 2011 at 3:34 PM, Steve Loughran ste...@apache.org
 wrote:
 
   On 06/07/2011 10:50 AM, praveenesh kumar wrote:
  
   The logs say
  
  
The ratio of reported blocks 0.9091 has not reached the threshold
  0.9990.
   Safe mode will be turned off automatically.
  
  
  
   not enough datanodes reported in, or they are missing data
  
 



Re: Namenode not starting up

2010-07-22 Thread Eason.Lee
Did you format it?

2010/7/22 Denim Live denim.l...@yahoo.com

 Hi all,

 I just restarted my cluster and now the namenode is not starting up. I get
 the
 following error:

 10/07/22 09:14:40 INFO namenode.NameNode: STARTUP_MSG:
 /
 STARTUP_MSG: Starting NameNode
 STARTUP_MSG:   host = DenimLive/188.74.76.201
 STARTUP_MSG:   args = []
 STARTUP_MSG:   version = 0.19.2
 STARTUP_MSG:   build =
 https://svn.apache.org/repos/asf/hadoop/common/branches/b
 ranch-0.19 -r 789657; compiled by 'root' on Tue Jun 30 12:40:50 EDT 2009
 /
 10/07/22 09:14:40 INFO metrics.RpcMetrics: Initializing RPC Metrics with
 hostNam
 e=NameNode, port=9100
 10/07/22 09:14:40 INFO namenode.NameNode: Namenode up at:
 127.0.0.1/127.0.0.1:91
 00
 10/07/22 09:14:40 INFO jvm.JvmMetrics: Initializing JVM Metrics with
 processName
 =NameNode, sessionId=null
 10/07/22 09:14:40 INFO metrics.NameNodeMetrics: Initializing
 NameNodeMeterics us
 ing context object:org.apache.hadoop.metrics.spi.NullContext
 10/07/22 09:14:41 INFO namenode.FSNamesystem:
 fsOwner=denimlive\denim,None,Use
 rs
 10/07/22 09:14:41 INFO namenode.FSNamesystem: supergroup=supergroup
 10/07/22 09:14:41 INFO namenode.FSNamesystem: isPermissionEnabled=true
 10/07/22 09:14:41 INFO metrics.FSNamesystemMetrics: Initializing
 FSNamesystemMet
 rics using context object:org.apache.hadoop.metrics.spi.NullContext
 10/07/22 09:14:41 INFO namenode.FSNamesystem: Registered
 FSNamesystemStatusMBean
 10/07/22 09:14:41 ERROR namenode.FSNamesystem: FSNamesystem initialization
 faile
 d.
 java.io.IOException: Image file is not found in
 [C:\tmp\hadoop-Denim\dfs\name]
 at
 org.apache.hadoop.hdfs.server.namenode.FSImage.loadFSImage(FSImage.ja
 va:769)
 at
 org.apache.hadoop.hdfs.server.namenode.FSImage.recoverTransitionRead(
 FSImage.java:352)
 at
 org.apache.hadoop.hdfs.server.namenode.FSDirectory.loadFSImage(FSDire
 ctory.java:87)
 at
 org.apache.hadoop.hdfs.server.namenode.FSNamesystem.initialize(FSName
 system.java:309)
 at
 org.apache.hadoop.hdfs.server.namenode.FSNamesystem.init(FSNamesyst
 em.java:288)
 at
 org.apache.hadoop.hdfs.server.namenode.NameNode.initialize(NameNode.j
 ava:163)
 at
 org.apache.hadoop.hdfs.server.namenode.NameNode.init(NameNode.java:
 208)
 at
 org.apache.hadoop.hdfs.server.namenode.NameNode.init(NameNode.java:
 194)
 at
 org.apache.hadoop.hdfs.server.namenode.NameNode.createNameNode(NameNo
 de.java:859)
 at
 org.apache.hadoop.hdfs.server.namenode.NameNode.main(NameNode.java:86
 8)
 10/07/22 09:14:41 INFO ipc.Server: Stopping server on 9100
 10/07/22 09:14:41 ERROR namenode.NameNode: java.io.IOException: Image file
 is no
 t found in [C:\tmp\hadoop-Denim\dfs\name]
 at
 org.apache.hadoop.hdfs.server.namenode.FSImage.loadFSImage(FSImage.ja
 va:769)
 at
 org.apache.hadoop.hdfs.server.namenode.FSImage.recoverTransitionRead(
 FSImage.java:352)
 at
 org.apache.hadoop.hdfs.server.namenode.FSDirectory.loadFSImage(FSDire
 ctory.java:87)
 at
 org.apache.hadoop.hdfs.server.namenode.FSNamesystem.initialize(FSName
 system.java:309)
 at
 org.apache.hadoop.hdfs.server.namenode.FSNamesystem.init(FSNamesyst
 em.java:288)
 at
 org.apache.hadoop.hdfs.server.namenode.NameNode.initialize(NameNode.j
 ava:163)
 at
 org.apache.hadoop.hdfs.server.namenode.NameNode.init(NameNode.java:
 208)
 at
 org.apache.hadoop.hdfs.server.namenode.NameNode.init(NameNode.java:
 194)
 at
 org.apache.hadoop.hdfs.server.namenode.NameNode.createNameNode(NameNo
 de.java:859)
 at
 org.apache.hadoop.hdfs.server.namenode.NameNode.main(NameNode.java:86
 8)
 10/07/22 09:14:41 INFO namenode.NameNode: SHUTDOWN_MSG:
 /
 SHUTDOWN_MSG: Shutting down NameNode at DenimLive/188.74.76.201
 /



 I m running Hadoop on Windows in pseudo-distributed mode. The error says
 that it
 cannot locate the name node image file at C:\tmp\hadoop-Denim\dfs\name
 although
 it is there when I check. Can anyone plz help how to resolve this problem?


 Thanks





Re: Namenode not starting up

2010-07-22 Thread Denim Live


No, I didn't. I had simply shut it down and, after some time, started it again.
But the namenode refused to start.

 


From: Eason.Lee leongf...@gmail.com
To: common-user@hadoop.apache.org
Sent: Thu, July 22, 2010 10:15:11 AM
Subject: Re: Namenode not starting up

did u format it?

2010/7/22 Denim Live denim.l...@yahoo.com

 Hi all,

 I just restarted my cluster and now the namenode is not starting up. I get
 the
 following error:

 10/07/22 09:14:40 INFO namenode.NameNode: STARTUP_MSG:
 /
 STARTUP_MSG: Starting NameNode
 STARTUP_MSG:  host = DenimLive/188.74.76.201
 STARTUP_MSG:  args = []
 STARTUP_MSG:  version = 0.19.2
 STARTUP_MSG:  build =
 https://svn.apache.org/repos/asf/hadoop/common/branches/b
 ranch-0.19 -r 789657; compiled by 'root' on Tue Jun 30 12:40:50 EDT 2009
 /
 10/07/22 09:14:40 INFO metrics.RpcMetrics: Initializing RPC Metrics with
 hostNam
 e=NameNode, port=9100
 10/07/22 09:14:40 INFO namenode.NameNode: Namenode up at:
 127.0.0.1/127.0.0.1:91
 00
 10/07/22 09:14:40 INFO jvm.JvmMetrics: Initializing JVM Metrics with
 processName
 =NameNode, sessionId=null
 10/07/22 09:14:40 INFO metrics.NameNodeMetrics: Initializing
 NameNodeMeterics us
 ing context object:org.apache.hadoop.metrics.spi.NullContext
 10/07/22 09:14:41 INFO namenode.FSNamesystem:
 fsOwner=denimlive\denim,None,Use
 rs
 10/07/22 09:14:41 INFO namenode.FSNamesystem: supergroup=supergroup
 10/07/22 09:14:41 INFO namenode.FSNamesystem: isPermissionEnabled=true
 10/07/22 09:14:41 INFO metrics.FSNamesystemMetrics: Initializing
 FSNamesystemMet
 rics using context object:org.apache.hadoop.metrics.spi.NullContext
 10/07/22 09:14:41 INFO namenode.FSNamesystem: Registered
 FSNamesystemStatusMBean
 10/07/22 09:14:41 ERROR namenode.FSNamesystem: FSNamesystem initialization
 faile
 d.
 java.io.IOException: Image file is not found in
 [C:\tmp\hadoop-Denim\dfs\name]
        at
 org.apache.hadoop.hdfs.server.namenode.FSImage.loadFSImage(FSImage.ja
 va:769)
        at
 org.apache.hadoop.hdfs.server.namenode.FSImage.recoverTransitionRead(
 FSImage.java:352)
        at
 org.apache.hadoop.hdfs.server.namenode.FSDirectory.loadFSImage(FSDire
 ctory.java:87)
        at
 org.apache.hadoop.hdfs.server.namenode.FSNamesystem.initialize(FSName
 system.java:309)
        at
 org.apache.hadoop.hdfs.server.namenode.FSNamesystem.init(FSNamesyst
 em.java:288)
        at
 org.apache.hadoop.hdfs.server.namenode.NameNode.initialize(NameNode.j
 ava:163)
        at
 org.apache.hadoop.hdfs.server.namenode.NameNode.init(NameNode.java:
 208)
        at
 org.apache.hadoop.hdfs.server.namenode.NameNode.init(NameNode.java:
 194)
        at
 org.apache.hadoop.hdfs.server.namenode.NameNode.createNameNode(NameNo
 de.java:859)
        at
 org.apache.hadoop.hdfs.server.namenode.NameNode.main(NameNode.java:86
 8)
 10/07/22 09:14:41 INFO ipc.Server: Stopping server on 9100
 10/07/22 09:14:41 ERROR namenode.NameNode: java.io.IOException: Image file
 is no
 t found in [C:\tmp\hadoop-Denim\dfs\name]
        at
 org.apache.hadoop.hdfs.server.namenode.FSImage.loadFSImage(FSImage.ja
 va:769)
        at
 org.apache.hadoop.hdfs.server.namenode.FSImage.recoverTransitionRead(
 FSImage.java:352)
        at
 org.apache.hadoop.hdfs.server.namenode.FSDirectory.loadFSImage(FSDire
 ctory.java:87)
        at
 org.apache.hadoop.hdfs.server.namenode.FSNamesystem.initialize(FSName
 system.java:309)
        at
 org.apache.hadoop.hdfs.server.namenode.FSNamesystem.init(FSNamesyst
 em.java:288)
        at
 org.apache.hadoop.hdfs.server.namenode.NameNode.initialize(NameNode.j
 ava:163)
        at
 org.apache.hadoop.hdfs.server.namenode.NameNode.init(NameNode.java:
 208)
        at
 org.apache.hadoop.hdfs.server.namenode.NameNode.init(NameNode.java:
 194)
        at
 org.apache.hadoop.hdfs.server.namenode.NameNode.createNameNode(NameNo
 de.java:859)
        at
 org.apache.hadoop.hdfs.server.namenode.NameNode.main(NameNode.java:86
 8)
 10/07/22 09:14:41 INFO namenode.NameNode: SHUTDOWN_MSG:
 /
 SHUTDOWN_MSG: Shutting down NameNode at DenimLive/188.74.76.201
 /



 I m running Hadoop on Windows in pseudo-distributed mode. The error says
 that it
 cannot locate the name node image file at C:\tmp\hadoop-Denim\dfs\name
 although
 it is there when I check. Can anyone plz help how to resolve this problem?


 Thanks






  

Re: Namenode not starting up

2010-07-22 Thread Khaled BEN BAHRI

Hi,

You can check the log files for more details on the error.
You may also need to re-format the namenode, and check whether any other
process is using the port the namenode should listen on.

Hope this helps :)

Best regards,
Khaled
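
The port check Khaled suggests can be done with netstat. A sketch, assuming
the NameNode RPC port 9100 seen in the startup log above:

```shell
#!/bin/sh
# Look for a process already bound to the NameNode port (9100 in the
# log above). A hit here means another process holds the port and the
# NameNode cannot bind to it.
PORT=${PORT:-9100}
netstat -an 2>/dev/null | grep ":$PORT" \
  || echo "nothing is listening on port $PORT"
```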

Quoting Denim Live denim.l...@yahoo.com:




No, I didn't. I had just simply shut it down and after sometime,   
started again.

But it refused to start the namenode.

 


From: Eason.Lee leongf...@gmail.com
To: common-user@hadoop.apache.org
Sent: Thu, July 22, 2010 10:15:11 AM
Subject: Re: Namenode not starting up

did u format it?

2010/7/22 Denim Live denim.l...@yahoo.com


Hi all,

I just restarted my cluster and now the namenode is not starting up. I get
the
following error:

10/07/22 09:14:40 INFO namenode.NameNode: STARTUP_MSG:
/
STARTUP_MSG: Starting NameNode
STARTUP_MSG:  host = DenimLive/188.74.76.201
STARTUP_MSG:  args = []
STARTUP_MSG:  version = 0.19.2
STARTUP_MSG:  build =
https://svn.apache.org/repos/asf/hadoop/common/branches/b
ranch-0.19 -r 789657; compiled by 'root' on Tue Jun 30 12:40:50 EDT 2009
/
10/07/22 09:14:40 INFO metrics.RpcMetrics: Initializing RPC Metrics with
hostNam
e=NameNode, port=9100
10/07/22 09:14:40 INFO namenode.NameNode: Namenode up at:
127.0.0.1/127.0.0.1:91
00
10/07/22 09:14:40 INFO jvm.JvmMetrics: Initializing JVM Metrics with
processName
=NameNode, sessionId=null
10/07/22 09:14:40 INFO metrics.NameNodeMetrics: Initializing
NameNodeMeterics us
ing context object:org.apache.hadoop.metrics.spi.NullContext
10/07/22 09:14:41 INFO namenode.FSNamesystem:
fsOwner=denimlive\denim,None,Use
rs
10/07/22 09:14:41 INFO namenode.FSNamesystem: supergroup=supergroup
10/07/22 09:14:41 INFO namenode.FSNamesystem: isPermissionEnabled=true
10/07/22 09:14:41 INFO metrics.FSNamesystemMetrics: Initializing
FSNamesystemMet
rics using context object:org.apache.hadoop.metrics.spi.NullContext
10/07/22 09:14:41 INFO namenode.FSNamesystem: Registered
FSNamesystemStatusMBean
10/07/22 09:14:41 ERROR namenode.FSNamesystem: FSNamesystem initialization
faile
d.
java.io.IOException: Image file is not found in
[C:\tmp\hadoop-Denim\dfs\name]
        at
org.apache.hadoop.hdfs.server.namenode.FSImage.loadFSImage(FSImage.ja
va:769)
        at
org.apache.hadoop.hdfs.server.namenode.FSImage.recoverTransitionRead(
FSImage.java:352)
        at
org.apache.hadoop.hdfs.server.namenode.FSDirectory.loadFSImage(FSDire
ctory.java:87)
        at
org.apache.hadoop.hdfs.server.namenode.FSNamesystem.initialize(FSName
system.java:309)
        at
org.apache.hadoop.hdfs.server.namenode.FSNamesystem.init(FSNamesyst
em.java:288)
        at
org.apache.hadoop.hdfs.server.namenode.NameNode.initialize(NameNode.j
ava:163)
        at
org.apache.hadoop.hdfs.server.namenode.NameNode.init(NameNode.java:
208)
        at
org.apache.hadoop.hdfs.server.namenode.NameNode.init(NameNode.java:
194)
        at
org.apache.hadoop.hdfs.server.namenode.NameNode.createNameNode(NameNo
de.java:859)
        at
org.apache.hadoop.hdfs.server.namenode.NameNode.main(NameNode.java:86
8)
10/07/22 09:14:41 INFO ipc.Server: Stopping server on 9100
10/07/22 09:14:41 ERROR namenode.NameNode: java.io.IOException: Image file
is no
t found in [C:\tmp\hadoop-Denim\dfs\name]
        at
org.apache.hadoop.hdfs.server.namenode.FSImage.loadFSImage(FSImage.ja
va:769)
        at
org.apache.hadoop.hdfs.server.namenode.FSImage.recoverTransitionRead(
FSImage.java:352)
        at
org.apache.hadoop.hdfs.server.namenode.FSDirectory.loadFSImage(FSDire
ctory.java:87)
        at
org.apache.hadoop.hdfs.server.namenode.FSNamesystem.initialize(FSName
system.java:309)
        at
org.apache.hadoop.hdfs.server.namenode.FSNamesystem.init(FSNamesyst
em.java:288)
        at
org.apache.hadoop.hdfs.server.namenode.NameNode.initialize(NameNode.j
ava:163)
        at
org.apache.hadoop.hdfs.server.namenode.NameNode.init(NameNode.java:
208)
        at
org.apache.hadoop.hdfs.server.namenode.NameNode.init(NameNode.java:
194)
        at
org.apache.hadoop.hdfs.server.namenode.NameNode.createNameNode(NameNo
de.java:859)
        at
org.apache.hadoop.hdfs.server.namenode.NameNode.main(NameNode.java:86
8)
10/07/22 09:14:41 INFO namenode.NameNode: SHUTDOWN_MSG:
/
SHUTDOWN_MSG: Shutting down NameNode at DenimLive/188.74.76.201
/



I m running Hadoop on Windows in pseudo-distributed mode. The error says
that it
cannot locate the name node image file at C:\tmp\hadoop-Denim\dfs\name
although
it is there when I check. Can anyone plz help how to resolve this problem?


Thanks