Hey Alyssa,
If one of those datanodes goes down, a few minutes will pass before the master
discovers it. The master node treats datanodes that have not sent a heartbeat
for quite a while as dead.
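For reference, the heartbeat timing is tunable. Here is a rough sketch of the relevant hadoop-site.xml properties; the names and default values shown are what I recall from the 0.18-era defaults, so please verify them against your own conf/hadoop-default.xml before relying on them:

```xml
<!-- hadoop-site.xml: heartbeat tuning. Property names/values are assumed
     0.18-era defaults; confirm against conf/hadoop-default.xml. -->
<property>
  <name>dfs.heartbeat.interval</name>
  <value>3</value>
  <description>Seconds between datanode heartbeats to the namenode.</description>
</property>
<property>
  <name>heartbeat.recheck.interval</name>
  <value>300000</value>
  <description>Milliseconds between namenode rechecks for expired
  heartbeats; a node is only declared dead after missing heartbeats
  across several recheck cycles.</description>
</property>
```

Lowering these values shortens the time before the namenode notices a missing node, at the cost of more heartbeat traffic and namenode load.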

On Thu, Jan 22, 2009 at 8:34 AM, Hargraves, Alyssa <aly...@wpi.edu> wrote:

> Hello Hadoop Users,
>
> I was hoping someone would be able to answer a question about node
> decommissioning.  I have a test Hadoop cluster set up which only consists of
> my computer and a master node.  I am looking at the removal and addition of
> nodes.  Adding a node is nearly instant (only about 5 seconds), but removing
> a node by decommissioning it takes a while, and I don't understand why.
> Currently, the systems are running no map/reduce tasks and storing no data.
> DFS Health reports:
>
> 7 files and directories, 0 blocks = 7 total. Heap Size is 6.68 MB / 992.31
> MB (0%)
> Capacity        :       298.02 GB
> DFS Remaining   :       245.79 GB
> DFS Used        :       4 KB
> DFS Used%       :       0 %
> Live Nodes      :       2
> Dead Nodes      :       0
>
> Node     Last Contact    Admin State               Size (GB)   Used (%)   Remaining (GB)   Blocks
> master   0               In Service                149.01      0          122.22           0
> slave    82              Decommission In Progress  149.01      0          123.58           0
>
> However, even with nothing stored and nothing running, the decommission
> process takes 3 to 5 minutes, and I'm not quite sure why. There isn't any
> data to move anywhere, and there aren't any jobs to worry about.  I am using
> 0.18.2.
>
> Thank you for any help in solving this,
> Alyssa Hargraves

-- 
My research interests are distributed systems, parallel computing, and
bytecode-based virtual machines.

http://coderplay.javaeye.com
