The datanode uses the dfs config XML file (hdfs-site.xml, the dfs.data.dir
property) to tell the datanode process which disks are available for storage.
Can you check that the config XML still lists all the partitions and was not
overwritten during the restore process?
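
For reference, a typical hdfs-site.xml entry looks something like this (the
mount points below are only placeholders; substitute whatever your partitions
actually are):

  <property>
    <name>dfs.data.dir</name>
    <!-- example placeholder paths; list every data partition, comma-separated -->
    <value>/data/1/dfs/dn,/data/2/dfs/dn,/data/3/dfs/dn</value>
  </property>

Any partition missing from that comma-separated list will simply not be
scanned by the datanode, so its blocks won't be reported to the namenode. You
can compare the reported capacity against what you expect with
"hadoop dfsadmin -report".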
 -Ayon
________________________________
From: felix gao <gre1...@gmail.com>
To: hdfs-user@hadoop.apache.org
Sent: Tue, April 12, 2011 7:46:31 AM
Subject: Question regarding datanode being wiped by hadoop

  
What reason/condition would cause a datanode's blocks to be removed? Our
cluster had one of its datanodes crash because of bad RAM. After the system
was upgraded and the datanode/tasktracker brought back online the next day, we
noticed that the amount of space utilized was minimal and that the cluster was
rebalancing blocks onto the datanode. It would seem the prior blocks were
removed. Was this because the datanode was declared dead? What are the
criteria for the namenode (assuming it's the namenode) to decide when a
datanode should remove its prior blocks?
