Re: rmr: org.apache.hadoop.hdfs.server.namenode.SafeModeException: Cannot delete /op. Name node is in safe mode.

2010-01-19 Thread Amogh Vasekar
Hi, Glad to know it helped. If you need to get your cluster up and running quickly, you can manipulate the parameter dfs.namenode.threshold.percent. If you set it to 0, NN will not enter safe mode. Amogh On 1/19/10 12:39 PM, "prasenjit mukherjee" wrote: That was exactly the reason. Thanks
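As a concrete sketch of that suggestion: the property name as quoted above does not match the stock 0.20-era key, which is believed to be dfs.safemode.threshold.pct (treat the name as an assumption and check your hdfs-default.xml). The hdfs-site.xml entry would look like:

    <property>
      <name>dfs.safemode.threshold.pct</name>
      <value>0</value>
      <!-- 0 means the NN requires no minimum fraction of blocks to be
           reported before leaving safe mode, so it skips safe mode on
           startup; blocks may briefly appear missing to clients -->
    </property>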

Re: rmr: org.apache.hadoop.hdfs.server.namenode.SafeModeException: Cannot delete /op. Name node is in safe mode.

2010-01-18 Thread prasenjit mukherjee
That was exactly the reason. Thanks a bunch. On Tue, Jan 19, 2010 at 12:24 PM, Mafish Liu wrote: > 2010/1/19 prasenjit mukherjee : >> I run "hadoop fs -rmr .." immediately after start-all.sh. Does the >> namenode always start in safemode and after some time switch to >> normal mode? If that ...

Re: rmr: org.apache.hadoop.hdfs.server.namenode.SafeModeException: Cannot delete /op. Name node is in safe mode.

2010-01-18 Thread Mafish Liu
2010/1/19 prasenjit mukherjee : > I run "hadoop fs -rmr .." immediately after start-all.sh. Does the > namenode always start in safemode and after some time switch to > normal mode? If that is the problem then your suggestion of waiting > might work. Lemme check. This is the point. Namenode ...

Re: rmr: org.apache.hadoop.hdfs.server.namenode.SafeModeException: Cannot delete /op. Name node is in safe mode.

2010-01-18 Thread prasenjit mukherjee
I run "hadoop fs -rmr .." immediately after start-all.shDoes the namenode always start in safemode and after sometime switches to normal mode ? If that is the problem then your suggestion of waiting might work. Lemme check. -Thanks for the pointer. Prasen On Tue, Jan 19, 2010 at 10:47 AM, Am

Re: rmr: org.apache.hadoop.hdfs.server.namenode.SafeModeException: Cannot delete /op. Name node is in safe mode.

2010-01-18 Thread Rekha Joshi
They are only alternatives. hadoop fs -rmr works well for me. I do not exactly know what error it gives you or how the call is invoked. In a batch, let's say in Perl, the below should work fine: $cmd = "hadoop fs -rmr /op"; system($cmd); Cheers, /R On 1/19/10 10:31 AM, "prasenjit mukherjee" wrote: Hm ...
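A plain-shell sketch of the same batch cleanup (assuming the /op path from this thread; the safemode wait is Amogh's suggestion from the reply below):

    #!/bin/sh
    # Block until the namenode has left safe mode, then remove the old
    # output; "|| true" keeps the batch going if /op is already gone.
    hadoop dfsadmin -safemode wait
    hadoop fs -rmr /op || true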

Re: rmr: org.apache.hadoop.hdfs.server.namenode.SafeModeException: Cannot delete /op. Name node is in safe mode.

2010-01-18 Thread Amogh Vasekar
Hi, When NN is in safe mode, you get a read-only view of the hadoop file system (since NN is reconstructing its image of the FS). Use "hadoop dfsadmin -safemode get" to check if in safe mode, "hadoop dfsadmin -safemode leave" to leave safe mode forcefully, or use "hadoop dfsadmin -safemode wait" to ...
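A quick sketch of that sequence from a terminal (the exact output wording may vary slightly by version):

    $ hadoop dfsadmin -safemode get
    Safe mode is ON
    $ hadoop dfsadmin -safemode wait    # blocks until the NN exits safe mode
    Safe mode is OFF
    $ hadoop dfsadmin -safemode leave   # or force the NN out immediately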

Re: rmr: org.apache.hadoop.hdfs.server.namenode.SafeModeException: Cannot delete /op. Name node is in safe mode.

2010-01-18 Thread prasenjit mukherjee
Hmmm. I am actually running it from a batch file. Is "hadoop fs -rmr" not that stable compared to pig's rm OR hadoop's FileSystem? Let me try your suggestion by writing a cleanup script in pig. -Thanks, Prasen On Tue, Jan 19, 2010 at 10:25 AM, Rekha Joshi wrote: > Can you try with dfs/ without ...

Re: rmr: org.apache.hadoop.hdfs.server.namenode.SafeModeException: Cannot delete /op. Name node is in safe mode.

2010-01-18 Thread Rekha Joshi
Can you try with dfs/ without quotes? If using pig to run jobs you can use rmf within your script (again w/o quotes) to force remove and avoid the error if the file/dir is not present. Or if doing this inside a hadoop job, you can use FileSystem/FileStatus to delete directories. HTH. Cheers, /R On 1/19/10 10:15 ...
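A sketch of the rmf route inside a Pig script (assuming the /op output path from this thread):

    -- first statement of the Pig script: force-remove the old output
    -- dir; unlike rm, rmf does not raise an error when /op is absent
    rmf /op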

Re: rmr: org.apache.hadoop.hdfs.server.namenode.SafeModeException: Cannot delete /op. Name node is in safe mode.

2010-01-18 Thread Mark Kerzner
A few things may help: delete individual files under /op, or open another terminal. I don't know why, but it helps, and then the error goes away. On Mon, Jan 18, 2010 at 10:45 PM, prasenjit mukherjee wrote: > "hadoop fs -rmr /op" > > That command always fails. I am trying to run sequential hadoop ...

rmr: org.apache.hadoop.hdfs.server.namenode.SafeModeException: Cannot delete /op. Name node is in safe mode.

2010-01-18 Thread prasenjit mukherjee
"hadoop fs -rmr /op" That command always fails. I am trying to run sequential hadoop jobs. After the first run all subsequent runs fail while cleaning up ( aka removing the hadoop dir created by previous run ). What can I do to avoid this ? here is my hadoop version : # hadoop version Hadoop 0.20