For future reference,
$ bin/hadoop dfsadmin -safemode leave
will also force HDFS to exit safe mode.
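You can also check the current state before forcing it off. A minimal sketch, assuming bin/hadoop is run from your Hadoop install directory against a running cluster:

  $ bin/hadoop dfsadmin -safemode get
  Safe mode is ON
  $ bin/hadoop dfsadmin -safemode leave
  $ bin/hadoop dfsadmin -safemode get
  Safe mode is OFF

(dfsadmin -safemode also accepts "enter" and "wait" if you ever need to force it on or block until it clears on its own.)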

- Aaron

On Wed, Aug 5, 2009 at 1:04 AM, Amandeep Khurana <ama...@gmail.com> wrote:

> Two alternatives:
>
> 1. Do bin/hadoop namenode -format. That'll format the metadata and you can
> start afresh.
>
> 2. If that doesn't work, manually go and delete everything in the
> directories where you've pointed your NameNode and DataNodes to store
> their data.
>
>
>
>
> On Tue, Aug 4, 2009 at 4:10 PM, Phil Whelan <phil...@gmail.com> wrote:
>
> > Hi,
> >
> > In setting up my cluster I brought a few machines up and down. I did
> > have some data which I moved to Trash. Now that data is not 100%
> > available, which is fine, because I didn't want it.
> > But now I'm stuck in "Safe Mode", because it cannot find the data. I
> > cannot purge the Trash because it's in read-only due to Safe Mode.
> >
> >   Safe mode is ON.
> >   The ratio of reported blocks 0.9931 has not reached the threshold
> 0.9990.
> >   Safe mode will be turned off automatically.
> >   459 files and directories, 583 blocks = 1042 total. Heap Size is
> > 7.8 MB / 992.31 MB (0%)
> >
> > I just want to format the entire HDFS filesystem. I have nothing
> > I need in there. How can I do this?
> >
> > Phil
> >
>
