The xml files have not been changed for more than two months, so that should
not be the reason.  Even the in_use.lock is more than a month old.  However,
we did shut it down a few days ago and restarted it afterward, so the second
shutdown might not have been clean.
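
For reference, the property Ayon mentions should be dfs.data.dir in
hdfs-site.xml; ours looks roughly like this (the paths here are
illustrative, not our actual mount points):

    <property>
      <name>dfs.data.dir</name>
      <!-- comma-separated list of local directories the datanode stores blocks in -->
      <value>/data1/dfs/data,/data2/dfs/data,/data3/dfs/data</value>
    </property>

All of the partitions are still listed there, so the config does not look
like it was overwritten during the restore.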

On Tue, Apr 12, 2011 at 7:52 AM, Ayon Sinha <ayonsi...@yahoo.com> wrote:

> The datanode reads the dfs config xml file to determine which disks are
> available for storage. Can you check that the config xml still mentions
> all the partitions and has not been overwritten during the restore
> process?
>
> -Ayon
> See My Photos on Flickr <http://www.flickr.com/photos/ayonsinha/>
> Also check out my Blog for answers to commonly asked questions.
> <http://dailyadvisor.blogspot.com>
>
>
> ------------------------------
> *From:* felix gao <gre1...@gmail.com>
> *To:* hdfs-user@hadoop.apache.org
> *Sent:* Tue, April 12, 2011 7:46:31 AM
> *Subject:* Question regarding datanode being wiped by hadoop
>
> What reason/condition would cause a datanode’s blocks to be removed?  Our
> cluster had one of its datanodes crash because of bad RAM.  After the
> system was upgraded and the datanode/tasktracker were brought back online
> the next day, we noticed that the amount of space utilized was minimal and
> the cluster was rebalancing blocks onto the datanode.  It would seem the
> prior blocks were removed.  Was this because the datanode was declared
> dead?  What are the criteria for the namenode (assuming it is the namenode
> that decides) to determine when a datanode should remove its prior blocks?
>
