Datanode can have more than one copy of the same block when a failed disk comes back in the datanode
----------------------------------------------------------------------------------------------------
                 Key: HDFS-1940
                 URL: https://issues.apache.org/jira/browse/HDFS-1940
             Project: Hadoop HDFS
          Issue Type: Bug
          Components: data-node
    Affects Versions: 0.20.204.0
            Reporter: Rajit

There is a situation where one datanode can end up with more than one copy of the same block, when a failed disk comes back after some time. These duplicate blocks are not deleted even after a datanode and namenode restart. This can only happen in a corner case: when, due to the disk failure, the data block is re-replicated to another disk of the same datanode.

To simulate this scenario I copied a data block and the associated .meta file from one disk to another disk of the same datanode, so the datanode has two copies of the same replica. I then restarted the datanode and the namenode; the extra data block and meta file were still not deleted from the datanode:

[hdfs@gsbl90192 rajsaha]$ ls -l `find /grid/{0,1,2,3}/hadoop/var/hdfs/data/current -name blk_*`
-rw-r--r-- 1 hdfs users 7814 May 13 21:05 /grid/1/hadoop/var/hdfs/data/current/blk_1727421609840461376
-rw-r--r-- 1 hdfs users   71 May 13 21:05 /grid/1/hadoop/var/hdfs/data/current/blk_1727421609840461376_579992.meta
-rw-r--r-- 1 hdfs users 7814 May 13 21:14 /grid/3/hadoop/var/hdfs/data/current/blk_1727421609840461376
-rw-r--r-- 1 hdfs users   71 May 13 21:14 /grid/3/hadoop/var/hdfs/data/current/blk_1727421609840461376_579992.meta
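The simulation described above can be sketched with ordinary shell commands. The directory layout (`disk1`, `disk3` under a temp root) and the block ID `blk_1234567890` below are illustrative stand-ins, not the real datanode paths or block from the report:

```shell
# Sketch of the reproduction: duplicate a block (and its .meta) onto a
# second disk of the same datanode, then detect the duplicate replica.
# All paths and the block ID are hypothetical.
set -e
DATA_ROOT=$(mktemp -d)
mkdir -p "$DATA_ROOT/disk1/current" "$DATA_ROOT/disk3/current"

# Create a fake block file and its .meta file on the first disk.
echo "blockdata" > "$DATA_ROOT/disk1/current/blk_1234567890"
echo "meta"      > "$DATA_ROOT/disk1/current/blk_1234567890_1.meta"

# Simulate the failed-disk-returns scenario: the same replica now
# also exists on a second disk of the same datanode.
cp "$DATA_ROOT/disk1/current/blk_1234567890"        "$DATA_ROOT/disk3/current/"
cp "$DATA_ROOT/disk1/current/blk_1234567890_1.meta" "$DATA_ROOT/disk3/current/"

# List block files (excluding .meta) that appear on more than one disk.
find "$DATA_ROOT" -name 'blk_*' ! -name '*.meta' -printf '%f\n' \
  | sort | uniq -d
# prints: blk_1234567890
```

As the issue reports, neither a datanode nor a namenode restart removes the second copy; the duplicate persists on disk until cleaned up by some other mechanism.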