Not sure why, but this does not work for me.  I am running 0.18.2.  I ran
hadoop dfsadmin -refreshNodes after removing the decommissioned node from
the exclude file.  It still shows up as a dead node.  I also removed it from
the slaves file and ran the refresh nodes command again.  It still shows up
as a dead node after that.
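For reference, the steps I ran can be sketched roughly as below. The file
paths and hostname are assumptions for illustration; the actual exclude file
is whatever the dfs.hosts.exclude property in hadoop-site.xml points at:

```shell
# Sketch of the cleanup steps described above (paths/hostname are
# hypothetical; adjust to your dfs.hosts.exclude setting).
EXCLUDE_FILE=conf/exclude          # hypothetical exclude-file path
SLAVES_FILE=conf/slaves
NODE=datanode1.example.com         # hypothetical decommissioned host

# Drop the decommissioned node from the exclude and slaves files.
grep -vx "$NODE" "$EXCLUDE_FILE" > "${EXCLUDE_FILE}.tmp"
mv "${EXCLUDE_FILE}.tmp" "$EXCLUDE_FILE"
grep -vx "$NODE" "$SLAVES_FILE" > "${SLAVES_FILE}.tmp"
mv "${SLAVES_FILE}.tmp" "$SLAVES_FILE"

# Ask the namenode to re-read its include/exclude lists.
bin/hadoop dfsadmin -refreshNodes
```

(These commands need a running cluster, so they are a command sketch rather
than something runnable standalone.)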

I am going to upgrade to 0.19.0 to see if it makes any difference.

Bill

On Tue, Jan 27, 2009 at 7:01 PM, paul <paulg...@gmail.com> wrote:

> Once the nodes are listed as dead, if you still have the host names in your
> conf/exclude file, remove the entries and then run hadoop dfsadmin
> -refreshNodes.
>
>
> This works for us on our cluster.
>
>
>
> -paul
>
>
> On Tue, Jan 27, 2009 at 5:08 PM, Bill Au <bill.w...@gmail.com> wrote:
>
> > I was able to decommission a datanode successfully without having to
> > stop my cluster.  But I noticed that after a node has been
> > decommissioned, it shows up as a dead node in the web-based interface to
> > the namenode (i.e. dfshealth.jsp).  My cluster is relatively small and
> > losing a datanode will have a performance impact.  So I have a need to
> > monitor the health of my cluster and take steps to revive any dead
> > datanode in a timely fashion.  So is there any way to altogether "get
> > rid of" any decommissioned datanode from the web interface of the
> > namenode?  Or is there a better way to monitor the health of the
> > cluster?
> >
> > Bill
> >
>