That was exactly the reason. Thanks a bunch.

On Tue, Jan 19, 2010 at 12:24 PM, Mafish Liu <maf...@gmail.com> wrote:
> 2010/1/19 prasenjit mukherjee <pmukher...@quattrowireless.com>:
>>  I run "hadoop fs -rmr .." immediately after start-all.sh    Does the
>> namenode always start in safemode and after sometime switches to
>> normal mode ? If that is the problem then your suggestion of waiting
>> might work. Lemme check.
>
> This is the point. The namenode enters safe mode on startup to gather
> metadata about the files in HDFS, and then switches to normal mode. The
> time spent in safe mode depends on the amount of data in your HDFS.
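>
> As a minimal sketch of the fix being discussed (the /op path is taken from
> the original question further down; adjust it to your own output dir):
>
>     # /op is the output dir from the original question; adjust as needed.
>     # Block until the namenode has left safe mode, then remove the old output.
>     hadoop dfsadmin -safemode wait
>     hadoop fs -rmr /op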
>>
>> -Thanks for the pointer.
>> Prasen
>>
>> On Tue, Jan 19, 2010 at 10:47 AM, Amogh Vasekar <am...@yahoo-inc.com> wrote:
>>> Hi,
>>> When the NN is in safe mode, you get a read-only view of the Hadoop file
>>> system (since the NN is reconstructing its image of the FS).
>>> Use "hadoop dfsadmin -safemode get" to check whether it is in safe mode,
>>> "hadoop dfsadmin -safemode leave" to force it out of safe mode, or
>>> "hadoop dfsadmin -safemode wait" to block until the NN leaves it on its own.
>>>
>>> Amogh
>>>
>>>
>>> On 1/19/10 10:31 AM, "prasenjit mukherjee" <prasen....@gmail.com> wrote:
>>>
>>> Hmmm. I am actually running it from a batch file. Is "hadoop fs -rmr"
>>> less reliable than pig's rm or Hadoop's FileSystem API?
>>>
>>> Let me try your suggestion by writing a cleanup script in pig.
>>>
>>> -Thanks,
>>> Prasen
>>>
>>> On Tue, Jan 19, 2010 at 10:25 AM, Rekha Joshi <rekha...@yahoo-inc.com> 
>>> wrote:
>>>> Can you try with dfs, or without the quotes? If you are using pig to run
>>>> the jobs, you can use rmf within your script (again without quotes) to
>>>> force the remove and avoid the error if the file/dir is not present. Or,
>>>> if doing this inside a hadoop job, you can use FileSystem/FileStatus to
>>>> delete the directories. HTH.
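>>>>
>>>> A rough sketch of the pig route (here /op and my_job.pig are placeholder
>>>> names, and pig's -e option is assumed for running a one-off grunt command;
>>>> you can equally put the rmf line at the top of the script itself):
>>>>
>>>>     # force-remove the old output dir (rmf does not error if it is absent),
>>>>     # then launch the actual pig script (my_job.pig is a placeholder name)
>>>>     pig -e 'rmf /op'
>>>>     pig my_job.pig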
>>>> Cheers,
>>>> /R
>>>>
>>>> On 1/19/10 10:15 AM, "prasenjit mukherjee" <prasen....@gmail.com> wrote:
>>>>
>>>> "hadoop fs -rmr /op"
>>>>
>>>> That command always fails. I am trying to run sequential hadoop jobs.
>>>> After the first run, all subsequent runs fail while cleaning up (i.e.
>>>> removing the hadoop dir created by the previous run). What can I do to
>>>> avoid this?
>>>>
>>>> Here is my hadoop version:
>>>> # hadoop version
>>>> Hadoop 0.20.0
>>>> Subversion 
>>>> https://svn.apache.org/repos/asf/hadoop/core/branches/branch-0.20
>>>> -r 763504
>>>> Compiled by ndaley on Thu Apr  9 05:18:40 UTC 2009
>>>>
>>>> Any help is greatly appreciated.
>>>>
>>>> -Prasen
>>>>
>>>>
>>>
>>>
>>
>
>
>
> --
> maf...@gmail.com
>
