Hi All,
Thanks for the excellent suggestion. Once we find the blocks whose replication factor is higher than the actual number of datanodes, and we run setrep to match the datanode count, should the decommissioning then proceed smoothly? From your past experience, how much time does a decommissioning take to complete? During
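For reference, the setrep step being asked about can be sketched as below. This is a sketch, not a verified procedure: the path is hypothetical, and the live-datanode count would in practice come from `hdfs dfsadmin -report`; the cluster command here is only echoed so the script can run anywhere.

```shell
#!/bin/sh
# Sketch: cap replication at the number of live datanodes.
LIVE_DATANODES=2          # e.g. taken from the output of: hdfs dfsadmin -report
TARGET_REPL=$LIVE_DATANODES

# The command to run against the cluster (echoed here, path is hypothetical);
# -w waits until replication actually reaches the new factor:
echo "hdfs dfs -setrep -w $TARGET_REPL /path/with/over-replicated/files"
```

With replication no higher than the datanode count, the NameNode can satisfy every block's replica target and the decommission is no longer blocked on impossible replication.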
Hi,
Which version of HDFS are you using?
On Wed, Mar 26, 2014 at 3:17 PM, Bharath Kumar bharath...@gmail.com wrote:
Hi All,
I am a novice Hadoop user. I tried removing a node from my 2-node cluster by adding its IP to the excludes file and running the dfsadmin -refreshNodes command. But
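The two steps described above can be sketched as follows. The excludes-file path and node IP are hypothetical (the real path is whatever `dfs.hosts.exclude` points at in your configuration, often something like `/etc/hadoop/conf/dfs.exclude`), and the cluster command is only echoed so the sketch runs without a cluster.

```shell
#!/bin/sh
# Sketch of decommissioning a datanode via the excludes file.
EXCLUDES=${HADOOP_CONF_DIR:-/tmp}/dfs.exclude   # file named by dfs.hosts.exclude
NODE_IP=10.0.0.12                               # hypothetical node to retire

echo "$NODE_IP" >> "$EXCLUDES"                  # add the node to the excludes file
echo "hdfs dfsadmin -refreshNodes"              # then tell the NameNode to re-read it
```

After the refresh, the node should show up as "Decommission in progress" in the NameNode web UI until its blocks have been re-replicated elsewhere.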
Hi Brahma,
Some files might have a replication factor higher than the actual number of datanodes, so the NameNode will not be able to decommission the machine, because it can never get the replica count satisfied.
Run the following command to check the replication factor of the files on HDFS, and see if
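The command itself is cut off in the quoted mail; a common way to inspect per-file replication factors on a standard install (an assumption, not necessarily what the sender meant) is fsck with per-file block details. The sample output line below is illustrative only.

```shell
#!/bin/sh
# Assumed check: list files with block details, then look at the repl= field
# (echoed here so the sketch runs without a cluster):
echo "hdfs fsck / -files -blocks | grep 'repl='"

# What a matching fsck line looks like, and pulling out the factor:
SAMPLE="/user/data/f1 1024 bytes, 1 block(s):  OK ... repl=3"
echo "$SAMPLE" | grep -o 'repl=[0-9]*'
```

Any file whose repl= value exceeds the number of live datanodes is a candidate for the setrep fix discussed earlier in the thread.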
OK, after restarting all services, fsck now shows under-replication. Was it the NameNode restart?
John
From: John Lilley [mailto:john.lil...@redpoint.net]
Sent: Tuesday, March 04, 2014 5:47 AM
To: user@hadoop.apache.org
Subject: decommissioning a node
Our cluster has a node that reboots randomly. So