Do you mean the file specified by the 'dfs.hosts' parameter? That is not
currently set in my configuration (the hosts are only specified in the
slaves file).
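
If I do need it, I'm guessing the entry would look something like the
following (the include file name and path here are just my guess,
mirroring the exclude entry below):

<property>
    <name>dfs.hosts</name>
    <value>/opt/hadoop/hadoop-1.0.0/conf/include</value>
</property>

with the datanode hostnames listed in that file, one per line.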

-Chris

On Fri, Jun 8, 2012 at 11:56 AM, Serge Blazhiyevskyy <
serge.blazhiyevs...@nice.com> wrote:

> Your nodes need to be in the include and exclude files at the same time
>
>
> Do you use both files?
>
> On 6/8/12 11:46 AM, "Chris Grier" <gr...@imchris.org> wrote:
>
> >Hello,
> >
> >I'm trying to figure out how to decommission data nodes. Here's what I do:
> >
> >In hdfs-site.xml I have:
> >
> ><property>
> >    <name>dfs.hosts.exclude</name>
> >    <value>/opt/hadoop/hadoop-1.0.0/conf/exclude</value>
> ></property>
> >
> >Add to exclude file:
> >
> >host1
> >host2
> >
> >Then I run 'hadoop dfsadmin -refreshNodes'. On the web interface the two
> >nodes now appear in both the 'Live Nodes' and 'Dead Nodes' lists (but
> >there's nothing in the 'Decommissioning Nodes' list). If I look at the
> >datanode logs on host1 or host2, I still see blocks being copied in, and
> >it does not appear that any additional replication is happening.
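> >
> >Presumably I should also be able to watch per-node state with something
> >like the following (assuming dfsadmin -report prints a "Decommission
> >Status" field in 1.0.0):
> >
> ># shows each datanode's decommission status
> ># (Normal / Decommission in progress / Decommissioned)
> >hadoop dfsadmin -report | grep 'Decommission Status'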
> >
> >What am I missing during the decommission process?
> >
> >-Chris
>
>
