[ https://issues.apache.org/jira/browse/HDFS-4544?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]
Arpit Agarwal updated HDFS-4544:
--------------------------------
    Affects Version/s: 2.0.3-alpha

> Error in deleting blocks should not do check disk, for all types of errors
> --------------------------------------------------------------------------
>
>                 Key: HDFS-4544
>                 URL: https://issues.apache.org/jira/browse/HDFS-4544
>             Project: Hadoop HDFS
>          Issue Type: Bug
>    Affects Versions: 1.1.1, 2.0.3-alpha
>            Reporter: Amareshwari Sriramadasu
>            Assignee: Arpit Agarwal
>             Fix For: 1.2.0
>
>         Attachments: HDFS-4544.branch-1.1.patch, HDFS-4544.patch, HDFS-4544.trunk.1.patch
>
>
> The following code in Datanode.java:
> {noformat}
> try {
>   if (blockScanner != null) {
>     blockScanner.deleteBlocks(toDelete);
>   }
>   data.invalidate(toDelete);
> } catch (IOException e) {
>   checkDiskError();
>   throw e;
> }
> {noformat}
> causes a disk check to run for any error during invalidate.
> We have seen errors like:
> 2013-03-02 00:08:28,849 WARN org.apache.hadoop.hdfs.server.datanode.DataNode:
> Unexpected error trying to delete block blk_-2973118207682441648_225738165.
> BlockInfo not found in volumeMap.
> All such errors trigger a disk check, causing clients to time out.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira
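The direction the issue suggests can be sketched as follows. This is a hypothetical illustration, not the attached patch: a guard method (`shouldCheckDisk`, an assumed name) that inspects the `IOException` so that bookkeeping errors such as "BlockInfo not found in volumeMap" skip the expensive disk check, while genuine I/O faults still trigger it.

```java
import java.io.IOException;

// Sketch only: the real HDFS-4544 patch may filter errors differently.
public class DiskCheckGuard {

    // Returns true only for exceptions that plausibly indicate a disk fault.
    // A missing volumeMap entry is a state error, not a bad disk, so it
    // should not trigger checkDiskError() and stall clients.
    static boolean shouldCheckDisk(IOException e) {
        String msg = e.getMessage();
        if (msg == null) {
            return true; // unknown cause: be conservative and check the disk
        }
        return !msg.contains("not found in volumeMap");
    }

    public static void main(String[] args) {
        IOException stateError = new IOException(
            "Unexpected error trying to delete block "
            + "blk_-2973118207682441648_225738165. "
            + "BlockInfo not found in volumeMap.");
        IOException diskError = new IOException("Input/output error");

        System.out.println(shouldCheckDisk(stateError)); // false: skip disk check
        System.out.println(shouldCheckDisk(diskError));  // true: run disk check
    }
}
```

In the catch block from the description, `checkDiskError()` would then be called only when `shouldCheckDisk(e)` returns true, before rethrowing the exception.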