This is a good time to remind folks that the NameNode can write its 
metadata to multiple directories, including one on a network filesystem 
or SAN, so that you always have a fresh copy. :)
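
A minimal sketch of that setting in hdfs-site.xml, assuming 0.20-era 
property names (the paths here are hypothetical; the second one would be 
an NFS or SAN mount):

  <!-- dfs.name.dir takes a comma-separated list of directories; the
       NameNode writes its fsimage and edit log to every one of them. -->
  <property>
    <name>dfs.name.dir</name>
    <value>/data/dfs/name,/mnt/nfs/dfs/name</value>
  </property>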

On May 13, 2010, at 8:05 AM, Eric Sammer wrote:

> You can use the copy of fsimage and the editlog from the SNN to
> recover. Remember that it will be (roughly) an hour old. The process
> for recovery is to copy the fsimage and editlog to a new machine,
> place them in the dfs.name.dir/current directory, and start all the
> daemons. It's worth practicing this type of procedure before trying it
> on a production cluster. More importantly, it's worth practicing this
> *before* you need it on a production cluster.
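
A rough sketch of that sequence as shell commands, assuming a 0.20-era 
layout where the SNN checkpoint lives under fs.checkpoint.dir 
(/data/dfs/namesecondary here) and the new master's dfs.name.dir is 
/data/dfs/name; hostnames, paths, and the hadoop user are all 
hypothetical:

  # On the new master node, before any daemons are running:
  # 1. Create the name directory and pull the latest checkpoint
  #    (fsimage + edits) from the SecondaryNameNode host into
  #    dfs.name.dir/current.
  mkdir -p /data/dfs/name
  scp -r snn-host:/data/dfs/namesecondary/current /data/dfs/name/
  # 2. Make sure the files are owned by the user that runs HDFS.
  chown -R hadoop:hadoop /data/dfs/name
  # 3. Start the daemons; on startup the NameNode loads the copied
  #    fsimage and replays the copied edits.
  $HADOOP_HOME/bin/start-dfs.sh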
> 
> On Thu, May 13, 2010 at 5:01 AM, Jeff Zhang <zjf...@gmail.com> wrote:
>> Hi all,
>> 
>> I wonder, is it enough to recover the Hadoop cluster by just copying
>> the metadata from the SecondaryNameNode to the new master node? Or do
>> I need to do anything else?
>> Thanks for any help.
>> 
>> 
>> 
>> --
>> Best Regards
>> 
>> Jeff Zhang
>> 
> 
> 
> 
> -- 
> Eric Sammer
> phone: +1-917-287-2675
> twitter: esammer
> data: www.cloudera.com
