[ https://issues.apache.org/jira/browse/HDFS-4544?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]
Suresh Srinivas updated HDFS-4544:
----------------------------------
      Resolution: Fixed
   Fix Version/s:     (was: 3.0.0)
                  2.0.4-beta
    Hadoop Flags: Reviewed
          Status: Resolved  (was: Patch Available)

I committed the patch to trunk, branch-2 and branch-1. Thank you, Arpit. Thank you, Amareshwari, for diagnosing the issue and creating the bug.

> Error in deleting blocks should not do check disk, for all types of errors
> --------------------------------------------------------------------------
>
>                 Key: HDFS-4544
>                 URL: https://issues.apache.org/jira/browse/HDFS-4544
>             Project: Hadoop HDFS
>          Issue Type: Bug
>    Affects Versions: 1.1.1, 2.0.3-alpha
>            Reporter: Amareshwari Sriramadasu
>            Assignee: Arpit Agarwal
>             Fix For: 1.2.0, 2.0.4-beta
>
>         Attachments: HDFS-4544.branch-1.1.patch, HDFS-4544.patch, HDFS-4544.trunk.1.patch
>
>
> The following code in DataNode.java
> {noformat}
> try {
>   if (blockScanner != null) {
>     blockScanner.deleteBlocks(toDelete);
>   }
>   data.invalidate(toDelete);
> } catch (IOException e) {
>   checkDiskError();
>   throw e;
> }
> {noformat}
> causes a disk check to happen for any error during invalidate.
> We have seen errors like:
> 2013-03-02 00:08:28,849 WARN org.apache.hadoop.hdfs.server.datanode.DataNode: Unexpected error trying to delete block blk_-2973118207682441648_225738165. BlockInfo not found in volumeMap.
> All such errors trigger a disk check, making the clients time out.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators.
For more information on JIRA, see: http://www.atlassian.com/software/jira
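The core problem in the quoted snippet is that every IOException from the delete path funnels into checkDiskError(), even for pure bookkeeping errors like "BlockInfo not found in volumeMap" that say nothing about disk health. A minimal sketch of one way to guard the call is below; this is illustrative only, not the committed HDFS-4544 patch, and the helper names (isDiskRelated, the stand-in checkDiskError) are assumptions for the example.

```java
import java.io.FileNotFoundException;
import java.io.IOException;

public class DiskErrorFilter {
    // Tracks whether a disk scan was triggered, for demonstration.
    static boolean diskChecked = false;

    // Hypothetical helper: decide whether an exception plausibly indicates
    // a bad disk. A missing volumeMap entry is a bookkeeping error, not a
    // disk fault, so it should not count.
    static boolean isDiskRelated(IOException e) {
        String msg = e.getMessage();
        return msg == null || !msg.contains("BlockInfo not found in volumeMap");
    }

    // Stand-in for DataNode.checkDiskError(), which in the real DataNode
    // scans all volumes and can make clients time out.
    static void checkDiskError() {
        diskChecked = true;
    }

    // Sketch of the guarded delete path: only disk-related failures
    // trigger the expensive disk check; all exceptions still propagate.
    static void deleteBlock(boolean volumeMapError) throws IOException {
        try {
            if (volumeMapError) {
                throw new IOException(
                    "Unexpected error trying to delete block. "
                    + "BlockInfo not found in volumeMap.");
            }
            throw new FileNotFoundException("block file missing on disk");
        } catch (IOException e) {
            if (isDiskRelated(e)) {
                checkDiskError();
            }
            throw e;
        }
    }

    public static void main(String[] args) {
        try { deleteBlock(true); } catch (IOException ignored) { }
        System.out.println("volumeMap error triggered disk check: " + diskChecked);

        diskChecked = false;
        try { deleteBlock(false); } catch (IOException ignored) { }
        System.out.println("disk error triggered disk check: " + diskChecked);
    }
}
```

With this guard, the volumeMap bookkeeping error no longer forces a full disk scan, while a genuine missing-file error still does.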