Hi Brahma,
It might be that some files have a replication factor higher than the actual
number of datanodes, in which case the namenode will not be able to
decommission the machine because it can never satisfy the replica count.

Run the following command to check the replication factor of the files on
HDFS and see whether any file has a replication factor greater than the actual
number of datanodes.

sudo -u hdfs hdfs fsck / -files -blocks | grep -B 1 repl=
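
If any files show repl= greater than the number of datanodes that will remain,
you can lower their replication factor before retrying the decommission. A
minimal sketch, assuming /user/bharath is just a placeholder path and that a
replication factor of 1 is acceptable once only one datanode is left:

# lower the replication factor of existing files; -w waits until the
# excess replicas have been cleaned up
sudo -u hdfs hdfs dfs -setrep -w 1 /user/bharath

It is also worth setting dfs.replication in hdfs-site.xml to a value no larger
than the number of remaining datanodes, so newly written files do not hit the
same problem.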



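You can also check whether the decommission is making progress at all. A quick
look (the exact output wording below is from memory, so treat it as
approximate):

sudo -u hdfs hdfs dfsadmin -report

The node being removed should be reported with a decommission status of
"Decommission in progress". If it still shows up as a normal live node,
double-check that the excludes file contains the exact hostname or IP the
datanode registered with, and run dfsadmin -refreshNodes again.
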
On Wed, Mar 26, 2014 at 3:49 PM, Brahma Reddy Battula <
brahmareddy.batt...@huawei.com> wrote:

>  Can you please elaborate more?
>
>
>
> Like how many nodes are there in the cluster, and what is the replication
> factor for the files?
>
>
>
> Normally, decommissioning will succeed once all the replicas from the
> excluded node have been replicated to another node in the cluster (another
> node should be available).
>
> Thanks & Regards
>
>
>
> Brahma Reddy Battula
>
>
>   ------------------------------
> *From:* Bharath Kumar [bharath...@gmail.com]
> *Sent:* Wednesday, March 26, 2014 12:47 PM
> *To:* user@hadoop.apache.org
> *Subject:* Decommissioning a node takes forever
>
>
>  Hi All,
>
>  I am a novice Hadoop user. I tried removing a node from my cluster of 2
> nodes by adding its IP to the excludes file and running the dfsadmin
> -refreshNodes command. But decommissioning takes a very long time; I left it
> over the weekend and it still was not complete.
>
>  Your inputs will help.
>
>
> --
> Warm Regards,
>                          *Bharath*
>
>
>



-- 
Cheers
-MJ
