It should automatically start writing NN data to that dir. If it's still
empty, you should check the NN logs and the permissions on the backup dir.
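A quick sanity check, as a sketch: after the NameNode has restarted (or completed a checkpoint), every directory listed in dfs.name.dir should hold an identical "current" subtree. The paths below are the ones from the quoted configuration; adjust them for your cluster.

```shell
NAME_DIR=/var/hadoop/dfs/name           # first dfs.name.dir entry (from the quoted config)
BACKUP_DIR=/var/hadoop/dfs/name.backup  # second dfs.name.dir entry (from the quoted config)

if [ -d "$NAME_DIR/current" ] && [ -d "$BACKUP_DIR/current" ]; then
  # diff -r exits 0 only when the two fsimage/edits trees are identical
  diff -r "$NAME_DIR/current" "$BACKUP_DIR/current" && echo "name dirs in sync"
else
  echo "one of the dfs.name.dir entries has no 'current' subdirectory yet"
fi

# The backup dir must exist and be writable by the user running the NameNode:
ls -ld "$BACKUP_DIR" 2>/dev/null || echo "$BACKUP_DIR missing"
```

If the diff reports differences or the backup directory is missing, that points back to the permissions/log check above.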

On Mon, Jun 6, 2011 at 10:21 AM, Mark <static.void....@gmail.com> wrote:

> Thank you. I added another directory with the following configuration and
> restarted my cluster:
>
> <property>
> <name>dfs.name.dir</name>
> <value>/var/hadoop/dfs/name,/var/hadoop/dfs/name.backup</value>
> </property>
>
> However the name.backup directory is empty. Is there anything I need to do
> to tell it to "backup"?
>
> Thanks
>
>
> On 6/5/11 12:44 AM, sulabh choudhury wrote:
>
>> Hey Mark,
>>
>> If you add more than one directory (comma separated) in the variable
>> "dfs.name.dir", the name data will automatically be copied to all those
>> locations.
>>
>> On Sat, Jun 4, 2011 at 10:14 AM, Mark<static.void....@gmail.com>  wrote:
>>
>>> How would I go about backing up our namenode data? I set up the
>>> secondarynamenode on a separate physical machine, but just as a backup I
>>> would like to save the namenode data.
>>>
>>> Thanks
>>>
>>>
>>>
>>
>
>


-- 
Thanks and Regards,
Sulabh Choudhury
