Re: decommissioning datanodes

2012-06-08 Thread Chris Grier
Thanks, this seems to work now.

Note that the parameter is 'dfs.hosts', not 'dfs.hosts.include'.
(Also, the usual caveats apply: hostnames are case sensitive.)
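
For reference, a sketch of the hdfs-site.xml entries this implies, using the
conf paths quoted in the thread below (the 'include' path is the one Serge
suggests; adjust paths to your installation):

<property>
  <name>dfs.hosts</name>
  <value>/opt/hadoop/hadoop-1.0.0/conf/include</value>
</property>
<property>
  <name>dfs.hosts.exclude</name>
  <value>/opt/hadoop/hadoop-1.0.0/conf/exclude</value>
</property>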

-Chris

On Fri, Jun 8, 2012 at 12:19 PM, Serge Blazhiyevskyy <
serge.blazhiyevs...@nice.com> wrote:

> Your config should be something like this:
>
> <property>
>   <name>dfs.hosts.exclude</name>
>   <value>/opt/hadoop/hadoop-1.0.0/conf/exclude</value>
> </property>
>
> <property>
>   <name>dfs.hosts.include</name>
>   <value>/opt/hadoop/hadoop-1.0.0/conf/include</value>
> </property>
>
> Add to the exclude file:
>
> host1
> host2
>
> Add to the include file:
>
> host1
> host2
>
> Plus the rest of the nodes.
>
>
> On 6/8/12 12:15 PM, "Chris Grier"  wrote:
>
> >Do you mean the file specified by the 'dfs.hosts' parameter? That is not
> >currently set in my configuration (the hosts are only specified in the
> >slaves file).
> >
> >-Chris
> >
> >On Fri, Jun 8, 2012 at 11:56 AM, Serge Blazhiyevskyy <
> >serge.blazhiyevs...@nice.com> wrote:
> >
> >> Your nodes need to be in the include and exclude files at the same time.
> >>
> >>
> >> Do you use both files?
> >>
> >> On 6/8/12 11:46 AM, "Chris Grier"  wrote:
> >>
> >> >Hello,
> >> >
> >> >I'm trying to figure out how to decommission data nodes. Here's what
> >> >I do:
> >> >
> >> >In hdfs-site.xml I have:
> >> >
> >> ><property>
> >> >  <name>dfs.hosts.exclude</name>
> >> >  <value>/opt/hadoop/hadoop-1.0.0/conf/exclude</value>
> >> ></property>
> >> >
> >> >Add to exclude file:
> >> >
> >> >host1
> >> >host2
> >> >
> >> >Then I run 'hadoop dfsadmin -refreshNodes'. On the web interface the
> >> >two nodes now appear in both the 'Live Nodes' and 'Dead Nodes' lists
> >> >(but there's nothing in the Decommissioning Nodes list). If I look at
> >> >the logs of the datanodes running on host1 or host2, I still see blocks
> >> >being copied in, and it does not appear that any additional replication
> >> >is happening.
> >> >
> >> >What am I missing during the decommission process?
> >> >
> >> >-Chris
> >>
> >>
>
>

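To put Serge's steps above in one place (a sketch; the file paths and the
example hostnames are the ones quoted in the thread):

Include file (/opt/hadoop/hadoop-1.0.0/conf/include): host1, host2, plus
every other datanode hostname.
Exclude file (/opt/hadoop/hadoop-1.0.0/conf/exclude): host1 and host2 only
(the nodes being decommissioned).

Then ask the namenode to re-read both lists:

hadoop dfsadmin -refreshNodes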

Re: decommissioning datanodes

2012-06-08 Thread Chris Grier
Do you mean the file specified by the 'dfs.hosts' parameter? That is not
currently set in my configuration (the hosts are only specified in the
slaves file).

-Chris

On Fri, Jun 8, 2012 at 11:56 AM, Serge Blazhiyevskyy <
serge.blazhiyevs...@nice.com> wrote:

> Your nodes need to be in the include and exclude files at the same time.
>
>
> Do you use both files?
>
> On 6/8/12 11:46 AM, "Chris Grier"  wrote:
>
> >Hello,
> >
> >I'm trying to figure out how to decommission data nodes. Here's what
> >I do:
> >
> >In hdfs-site.xml I have:
> >
> ><property>
> >  <name>dfs.hosts.exclude</name>
> >  <value>/opt/hadoop/hadoop-1.0.0/conf/exclude</value>
> ></property>
> >
> >Add to exclude file:
> >
> >host1
> >host2
> >
> >Then I run 'hadoop dfsadmin -refreshNodes'. On the web interface the two
> >nodes now appear in both the 'Live Nodes' and 'Dead Nodes' lists (but
> >there's nothing in the Decommissioning Nodes list). If I look at the logs
> >of the datanodes running on host1 or host2, I still see blocks being
> >copied in, and it does not appear that any additional replication is
> >happening.
> >
> >What am I missing during the decommission process?
> >
> >-Chris
>
>


decommissioning datanodes

2012-06-08 Thread Chris Grier
Hello,

I'm trying to figure out how to decommission data nodes. Here's what
I do:

In hdfs-site.xml I have:

<property>
  <name>dfs.hosts.exclude</name>
  <value>/opt/hadoop/hadoop-1.0.0/conf/exclude</value>
</property>

Add to exclude file:

host1
host2

Then I run 'hadoop dfsadmin -refreshNodes'. On the web interface the two
nodes now appear in both the 'Live Nodes' and 'Dead Nodes' lists (but
there's nothing in the Decommissioning Nodes list). If I look at the logs
of the datanodes running on host1 or host2, I still see blocks being
copied in, and it does not appear that any additional replication is
happening.

What am I missing during the decommission process?

-Chris
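
A quick way to check whether decommissioning has actually started (a sketch;
the exact output wording can vary between Hadoop versions):

hadoop dfsadmin -report

Each datanode entry in the report includes a "Decommission Status" field,
which should move from "Normal" to something like "Decommission in progress"
once the namenode has picked up the exclude file.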