[ https://issues.apache.org/jira/browse/MAPREDUCE-1908?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]
Ramkumar Vadali updated MAPREDUCE-1908:
---------------------------------------

    Attachment: MAPREDUCE-1908.2.patch

Modified the test to corrupt two blocks in the same stripe and ensure failure. The test found an additional issue: caching must be disabled to force the use of DFS.

> DistributedRaidFileSystem does not handle ChecksumException correctly
> ---------------------------------------------------------------------
>
>                 Key: MAPREDUCE-1908
>                 URL: https://issues.apache.org/jira/browse/MAPREDUCE-1908
>             Project: Hadoop Map/Reduce
>          Issue Type: Bug
>            Reporter: Ramkumar Vadali
>            Assignee: Ramkumar Vadali
>         Attachments: MAPREDUCE-1908.2.patch, MAPREDUCE-1908.patch
>
>
> ChecksumException reports the offset of corruption within a block, whereas DistributedRaidFileSystem.setAlternateLocations was expecting it to report the offset of corruption within the file.
> The best way of dealing with a missing or corrupt block is to use the current seek offset in the file as the position of corruption.

--
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.
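To illustrate the offset confusion the issue describes, here is a minimal, self-contained sketch (not the actual Hadoop code; the class, method names, and 64 MB block size are assumptions for illustration). It shows why a block-relative offset from a checksum error cannot be used directly as a file offset, and why the current seek position in the file is the safe value to report:

```java
// Hypothetical demo of block-relative vs. file-relative corruption offsets.
public class OffsetDemo {
    static final long BLOCK_SIZE = 64L * 1024 * 1024; // assumed 64 MB blocks

    // File-relative corruption offset, reconstructed from the start of the
    // block within the file plus the block-relative offset an error reports.
    static long corruptFileOffset(long blockStartInFile, long offsetInBlock) {
        return blockStartInFile + offsetInBlock;
    }

    public static void main(String[] args) {
        long blockStart = 2 * BLOCK_SIZE;        // corrupt block is the third block
        long offsetInBlock = 12345L;             // what the checksum error reports
        long seekPos = blockStart + offsetInBlock; // reader's current file position

        // Treating the block-relative offset as a file offset points at the
        // wrong place in the file (it ignores the preceding blocks):
        if (offsetInBlock == seekPos) {
            throw new AssertionError("block offset should not equal file offset here");
        }
        // Using the current seek position matches the true corruption location:
        if (corruptFileOffset(blockStart, offsetInBlock) != seekPos) {
            throw new AssertionError("reconstructed offset should equal seek position");
        }
        System.out.println(corruptFileOffset(blockStart, offsetInBlock));
    }
}
```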