Basically, if the datanodes crashed or did not stop gracefully, it is not a
big deal: the data is still inside them, and the locations of all the block
files are kept on the namenode (metadata).
So I would not worry about that; you can always kill them with the kill
command based on the process name, not a variable.
-Original Message-
From: Harsh J [mailto:ha...@cloudera.com]
Sent: Monday, August 27, 2012 12:30 AM
To: user@hadoop.apache.org
Subject: Re: namenode not starting
Abhay,
On Mon, Aug 27, 2012 at 11:19 AM, Abhay Ratnaparkhi
wrote:
> Thank you Harsh,
>
> I have set "dfs.name.dir" explicitly. Still don't know why the data loss has
> happened.
>
>
> <name>dfs.name.dir</name>
> <value>/wsadfs/${host.name}/name</value>
> <description>Determines where on the local filesystem the DFS name node
Hello Abhay,
Along with dfs.name.dir, also include dfs.data.dir in hdfs-site.xml.
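As a minimal hdfs-site.xml sketch with both properties pointed at persistent
local paths (the paths below are placeholders, not your actual layout):

  <property>
    <name>dfs.name.dir</name>
    <value>/data/hadoop/name</value>   <!-- namenode metadata -->
  </property>
  <property>
    <name>dfs.data.dir</name>
    <value>/data/hadoop/data</value>   <!-- datanode block storage -->
  </property>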
On Monday, August 27, 2012, Abhay Ratnaparkhi
wrote:
> Thank you Harsh,
> I have set "dfs.name.dir" explicitly. Still don't know why the data loss
has happened.
>
> dfs.name.dir
> /wsadfs/${host.name}/name
Thank you Harsh,
I have set "dfs.name.dir" explicitly. Still don't know why the data loss
has happened.
<property>
  <name>dfs.name.dir</name>
  <value>/wsadfs/${host.name}/name</value>
  <description>Determines where on the local filesystem the DFS name node
    should store the name table. If this is a comma-delimited list
    of directories then the name table is replicated in all of the
    directories, for redundancy.</description>
</property>
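As an illustration of that comma-delimited form (the second, local path
below is hypothetical, not from this thread):

  <property>
    <name>dfs.name.dir</name>
    <value>/wsadfs/${host.name}/name,/local/disk/hadoop/name</value>
    <!-- the name table is written to every listed directory, so the
         metadata survives the loss of any single one of them -->
  </property>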
Abhay,
I suspect that if you haven't set your dfs.name.dir explicitly, then
you haven't set fs.checkpoint.dir either; since both default to paths
under hadoop.tmp.dir, you may have lost your data completely, and there
is no recovery possible now.
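For reference, a sketch of setting both explicitly in the site
configuration (property names are the 1.x ones; the paths are assumptions):

  <property>
    <name>dfs.name.dir</name>
    <value>/data/hadoop/name</value>
    <!-- default is ${hadoop.tmp.dir}/dfs/name -->
  </property>
  <property>
    <name>fs.checkpoint.dir</name>
    <value>/data/hadoop/namesecondary</value>
    <!-- default is ${hadoop.tmp.dir}/dfs/namesecondary -->
  </property>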
On Fri, Aug 24, 2012 at 1:10 PM, Abhay Ratnaparkhi
wrote:
>
"Maybe other people will try to limit me but I don't limit myself"
> From: lle...@ddn.com
> To: user@hadoop.apache.org
> Subject: RE: namenode not starting
> Date: Fri, 24 Aug 2012 16:38:01 +
>
> Abhay,
> Sounds like your namenode cannot find the metadata information.
From: Håvard Wahl Kongsgård [mailto:haavard.kongsga...@gmail.com]
Sent: Friday, August 24, 2012 5:38 AM
To: user@hadoop.apache.org
Subject: Re: namenode not starting
You should start with a reboot of the system.
A lesson to everyone, this is exactly why you should have a secondary
name node
(http://wiki.apache.org/hadoop/FAQ#What_is_the_purpose_of_the_secondary_name-node.3F)
and run the namenode on a mirrored RAID-5/10 disk.
-Håvard
On Fri, Aug 24, 2012 at
Hello,
I had been using the cluster for a long time and had not formatted the namenode.
I only ran the bin/stop-all.sh and bin/start-all.sh scripts.
I am using NFS for dfs.name.dir.
hadoop.tmp.dir is a directory under /tmp. I've not restarted the OS. Is there
any way to recover the data?
Thanks,
Abhay
On Fri, Aug 24, 2012 at 1
Hi Abhay
What is the value of hadoop.tmp.dir or dfs.name.dir? If it was set to /tmp,
the contents would be deleted on an OS restart. You need to change this
location before you start your NN.
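For example, a minimal core-site.xml sketch moving hadoop.tmp.dir to a
persistent location (the path is an assumption, not from this thread):

  <property>
    <name>hadoop.tmp.dir</name>
    <value>/var/hadoop/tmp</value>
    <!-- the default, /tmp/hadoop-${user.name}, can be wiped by the OS -->
  </property>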
Regards
Bejoy KS
Sent from handheld, please excuse typos.
-Original Message-
From: Abhay Ratna
Hi,
Have you run the command namenode -format?
Thanks & regards,
Vivek
On Fri, Aug 24, 2012 at 12:58 PM, Abhay Ratnaparkhi <
abhay.ratnapar...@gmail.com> wrote:
> Hello,
>
> I had a running hadoop cluster.
> I restarted it and after that the namenode is unable to start. I am getting
> an error saying
Did you run the command bin/hadoop namenode -format before starting
the namenode?
On Fri, Aug 24, 2012 at 12:58 PM, Abhay Ratnaparkhi
wrote:
> Hello,
>
> I had a running hadoop cluster.
> I restarted it and after that the namenode is unable to start. I am getting
> an error saying that it's not formatted.