Hi,
Glad to know it helped.
If you need to get your cluster up and running quickly, you can manipulate the
parameter dfs.namenode.threshold.percent. If you set it to 0, NN will not enter
safe mode.
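For instance, a minimal hdfs-site.xml sketch for the NameNode (assuming that property
name matches your Hadoop release; check hdfs-default.xml for the exact key, and restart
the NN after changing it):

<property>
  <name>dfs.namenode.threshold.percent</name>
  <value>0</value>
</property>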
Amogh
On 1/19/10 12:39 PM, prasenjit mukherjee pmukher...@quattrowireless.com
wrote:
That was:
hadoop fs -rmr /op
That command always fails. I am trying to run sequential hadoop jobs.
After the first run, all subsequent runs fail while cleaning up (i.e.
removing the hadoop dir created by the previous run). What can I do to
avoid this?
Here is my hadoop version:
# hadoop version
Hadoop
Can you try with dfs, without quotes? If using pig to run the jobs you can use rmf
within your script (again w/o quotes) to force the remove and avoid an error if the
file/dir is not present. Or if doing this inside a hadoop job, you can use
FileSystem/FileStatus to delete the directories. HTH.
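For example, a rough sketch of the FileSystem-based cleanup (the /op path and the
class name here are only illustrative):

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

public class CleanupOutput {
  public static void main(String[] args) throws Exception {
    Configuration conf = new Configuration();   // picks up the cluster config from the classpath
    FileSystem fs = FileSystem.get(conf);
    Path out = new Path("/op");                 // output dir left over from the previous run
    if (fs.exists(out)) {                       // no error if the dir is not there
      fs.delete(out, true);                     // true = recursive, same effect as fs -rmr
    }
    fs.close();
  }
}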
Cheers,
/R
On 1/19/10
Hmmm. I am actually running it from a batch file. Is hadoop fs -rmr
not as stable as pig's rm or hadoop's FileSystem?
Let me try your suggestion by writing a cleanup script in pig.
-Thanks,
Prasen
On Tue, Jan 19, 2010 at 10:25 AM, Rekha Joshi rekha...@yahoo-inc.com wrote:
Can you
Hi,
When the NN is in safe mode, you get a read-only view of the hadoop file system
(since the NN is reconstructing its image of the FS).
Use hadoop dfsadmin -safemode get to check whether it is in safe mode,
hadoop dfsadmin -safemode leave to leave safe mode forcefully, or hadoop
dfsadmin -safemode wait to block until the NN leaves safe mode on its own.
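For instance, in a batch file you could wait for the NN before the cleanup step
(a sketch only; /op is the path from this thread):

hadoop dfsadmin -safemode wait   # blocks until the NN leaves safe mode
hadoop fs -rmr /op               # the delete should now go through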
They are only alternatives; hadoop fs -rmr works well for me. I do not know exactly
what error it gives you or how the call is invoked. From a batch file, say in perl,
the below should work fine:
$cmd = "hadoop fs -rmr /op";                       # quote the command string
system($cmd) == 0 or warn "rmr failed: status $?"; # check the exit status
Cheers,
/R
On 1/19/10 10:31 AM, prasenjit mukherjee wrote:
That was exactly the reason. Thanks a bunch.
On Tue, Jan 19, 2010 at 12:24 PM, Mafish Liu maf...@gmail.com wrote:
2010/1/19 prasenjit mukherjee pmukher...@quattrowireless.com:
I run hadoop fs -rmr .. immediately after start-all.sh. Does the
namenode always start in safemode and after