[jira] [Created] (HDFS-3194) Continuous block scanning at DN side
Continuous block scanning at DN side
------------------------------------

                 Key: HDFS-3194
                 URL: https://issues.apache.org/jira/browse/HDFS-3194
             Project: Hadoop HDFS
          Issue Type: Bug
    Affects Versions: 1.0.3
            Reporter: suja s
            Priority: Minor
             Fix For: 1.0.3


The block scanning interval should default to 21 days (3 weeks), so each block should be scanned only once every 21 days. Instead, the same block is being scanned continuously:

2012-04-03 10:44:47,056 INFO org.apache.hadoop.hdfs.server.datanode.BlockPoolSliceScanner: Verification succeeded for BP-241703115-xx.xx.xx.55-1333086229434:blk_-2666054955039014473_1003
2012-04-03 10:45:02,064 INFO org.apache.hadoop.hdfs.server.datanode.BlockPoolSliceScanner: Verification succeeded for BP-241703115-xx.xx.xx.55-1333086229434:blk_-2666054955039014473_1003
2012-04-03 10:45:17,071 INFO org.apache.hadoop.hdfs.server.datanode.BlockPoolSliceScanner: Verification succeeded for BP-241703115-xx.xx.xx.55-1333086229434:blk_-2666054955039014473_1003
2012-04-03 10:45:32,079 INFO org.apache.hadoop.hdfs.server.datanode.BlockPoolSliceScanner: Verification succeeded for BP-241703115-xx.xx.xx.55-1333086229434:blk_-2666054955039014473

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira
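The expected behavior can be illustrated with a minimal sketch. This is not the actual BlockPoolSliceScanner code; the class and method names below are hypothetical, and the 21-day constant mirrors the default of the real `dfs.datanode.scan.period.hours` setting (504 hours):

```java
import java.util.HashMap;
import java.util.Map;
import java.util.concurrent.TimeUnit;

// Hypothetical sketch of the intended behavior: a block becomes eligible
// for re-verification only after the configured scan period has elapsed.
public class ScanPeriodSketch {
    // Mirrors the default dfs.datanode.scan.period.hours = 504h (3 weeks).
    static final long SCAN_PERIOD_MS = TimeUnit.DAYS.toMillis(21);

    private final Map<String, Long> lastScanTime = new HashMap<>();

    boolean shouldScan(String blockId, long nowMs) {
        Long last = lastScanTime.get(blockId);
        // Never scanned, or the full scan period has passed.
        return last == null || nowMs - last >= SCAN_PERIOD_MS;
    }

    void markScanned(String blockId, long nowMs) {
        lastScanTime.put(blockId, nowMs);
    }

    public static void main(String[] args) {
        ScanPeriodSketch s = new ScanPeriodSketch();
        String blk = "blk_-2666054955039014473_1003";
        System.out.println(s.shouldScan(blk, 0L));   // true: never scanned
        s.markScanned(blk, 0L);
        // 15 seconds later (the interval seen in the log above): must not rescan.
        System.out.println(s.shouldScan(blk, TimeUnit.SECONDS.toMillis(15)));  // false
        System.out.println(s.shouldScan(blk, SCAN_PERIOD_MS));  // true: 21 days later
    }
}
```

With this check in place, the 15-second repeat seen in the log would be rejected; the reported behavior suggests the period check is not being applied to the block.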
[jira] [Created] (HDFS-3161) 20 Append: Excluded DN replica from recovery should be removed from DN.
20 Append: Excluded DN replica from recovery should be removed from DN.
-----------------------------------------------------------------------

                 Key: HDFS-3161
                 URL: https://issues.apache.org/jira/browse/HDFS-3161
             Project: Hadoop HDFS
          Issue Type: Bug
    Affects Versions: 1.0.0
            Reporter: suja s
            Priority: Critical
             Fix For: 1.0.3


1) DN1-DN2-DN3 are in the pipeline.
2) The client is killed abruptly.
3) One DN restarts, say DN3.
4) In DN3, info.wasRecoveredOnStartup() will be true.
5) NN recovery is triggered; DN3 is skipped from recovery due to the above check.
6) Now DN1 and DN2 have blocks with generation stamp 2, while DN3 has an older generation stamp, say 1, and DN3 still has this block entry in ongoingCreates.
7) As part of recovery the file is closed with only two live replicas (from DN1 and DN2).
8) So the NN issues a replication command, and DN3 also receives the replica with the newer generation stamp.
9) Now DN3 contains two replicas on disk, plus one entry in ongoingCreates referring to the blocksBeingWritten directory.

When we call append/lease recovery, it may again skip this node because the blockId entry is still present in ongoingCreates with the startup-recovery flag true. It may keep repeating this dance for every recovery, and the stale replica will not be cleaned up until we restart the cluster. The actual replica will be transferred to this node only through the replication process. Also, the replicated blocks will unnecessarily get invalidated after subsequent recoveries.
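The repeating skip can be sketched as follows. The class and field names here are hypothetical stand-ins, not the real DataNode code; only `wasRecoveredOnStartup()` and the `ongoingCreates` map are taken from the report:

```java
import java.util.HashMap;
import java.util.Map;

// Hypothetical sketch of the failure mode described above: a stale
// ongoingCreates entry whose recovered-on-startup flag is true causes
// the replica to be skipped by every recovery, and because the skip
// never removes the entry, the state persists until a restart.
public class StaleReplicaSketch {
    static class ReplicaInfo {
        final boolean recoveredOnStartup;
        ReplicaInfo(boolean r) { recoveredOnStartup = r; }
        boolean wasRecoveredOnStartup() { return recoveredOnStartup; }
    }

    // Stands in for the DataNode's ongoingCreates map.
    final Map<Long, ReplicaInfo> ongoingCreates = new HashMap<>();

    // Recovery skips any replica flagged as recovered-on-startup and
    // leaves the map entry in place, which is the bug being reported.
    boolean participatesInRecovery(long blockId) {
        ReplicaInfo info = ongoingCreates.get(blockId);
        if (info != null && info.wasRecoveredOnStartup()) {
            return false; // skipped; entry is not removed
        }
        return true;
    }

    public static void main(String[] args) {
        StaleReplicaSketch dn3 = new StaleReplicaSketch();
        dn3.ongoingCreates.put(1003L, new ReplicaInfo(true));
        // Every recovery attempt keeps skipping DN3 until a restart
        // clears ongoingCreates.
        for (int attempt = 0; attempt < 3; attempt++) {
            System.out.println(dn3.participatesInRecovery(1003L)); // false
        }
    }
}
```

The fix suggested by the title would remove the excluded entry (and the stale on-disk replica) once the DN is skipped from recovery, so later recoveries see a clean state.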
[jira] [Created] (HDFS-3162) BlockMap's corruptNodes count and CorruptReplicas map count is not matching.
BlockMap's corruptNodes count and CorruptReplicas map count is not matching.
----------------------------------------------------------------------------

                 Key: HDFS-3162
                 URL: https://issues.apache.org/jira/browse/HDFS-3162
             Project: Hadoop HDFS
          Issue Type: Bug
          Components: name-node
    Affects Versions: 1.0.0
            Reporter: suja s
            Priority: Minor
             Fix For: 1.0.3


Even after invalidating the block, the log below keeps appearing continuously:

Inconsistent number of corrupt replicas for blk_1332906029734_1719 blockMap has 0 but corrupt replicas map has 1
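The invariant the log message checks can be sketched as below. The class and method names are hypothetical, not the real NameNode code; the sketch only illustrates how removing a corrupt replica from one of the two structures but not the other produces the reported "0 vs 1" state:

```java
import java.util.HashMap;
import java.util.HashSet;
import java.util.Map;
import java.util.Set;

// Hypothetical sketch: the corrupt-replica count tracked per block in
// the block map must stay equal to the count in the corrupt replicas
// map. An invalidation path that updates only one structure leaves the
// two permanently inconsistent, which matches the repeated log message.
public class CorruptCountSketch {
    final Map<Long, Set<String>> blockMapCorruptNodes = new HashMap<>();
    final Map<Long, Set<String>> corruptReplicasMap = new HashMap<>();

    void markCorrupt(long blockId, String dn) {
        blockMapCorruptNodes.computeIfAbsent(blockId, k -> new HashSet<>()).add(dn);
        corruptReplicasMap.computeIfAbsent(blockId, k -> new HashSet<>()).add(dn);
    }

    // Buggy invalidation: clears the block map entry but forgets the
    // corrupt replicas map, leaving "blockMap has 0 but corrupt replicas
    // map has 1".
    void invalidateBuggy(long blockId, String dn) {
        blockMapCorruptNodes.getOrDefault(blockId, new HashSet<>()).remove(dn);
    }

    boolean consistent(long blockId) {
        int inBlockMap = blockMapCorruptNodes.getOrDefault(blockId, new HashSet<>()).size();
        int inCorruptMap = corruptReplicasMap.getOrDefault(blockId, new HashSet<>()).size();
        return inBlockMap == inCorruptMap;
    }

    public static void main(String[] args) {
        CorruptCountSketch nn = new CorruptCountSketch();
        nn.markCorrupt(1719L, "dn1");
        System.out.println(nn.consistent(1719L)); // true: both count 1
        nn.invalidateBuggy(1719L, "dn1");
        System.out.println(nn.consistent(1719L)); // false: 0 vs 1, the logged state
    }
}
```

A fix along these lines would make invalidation remove the replica from both structures atomically, so the consistency check stops firing.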