    [ http://issues.apache.org/jira/browse/HADOOP-738?page=comments#action_12451449 ]

Doug Cutting commented on HADOOP-738:
-------------------------------------
For the particular problem cited, wouldn't the fix be to remove the old .crc file before writing the new file? CRC files are useful outside of DFS. Throwing them away means the data is no longer checked for end-to-end corruption.

> dfs get or copyToLocal should not copy crc file
> -----------------------------------------------
>
>                 Key: HADOOP-738
>                 URL: http://issues.apache.org/jira/browse/HADOOP-738
>             Project: Hadoop
>          Issue Type: Bug
>          Components: dfs
>    Affects Versions: 0.8.0
>        Environment: all
>           Reporter: Milind Bhandarkar
>        Assigned To: Milind Bhandarkar
>            Fix For: 0.9.0
>
>        Attachments: hadoop-crc.patch
>
>
> Currently, when we -get or -copyToLocal a directory from DFS, all the files
> including crc files are also copied. When we -put or -copyFromLocal again,
> since the crc files already exist on DFS, this put fails. The solution is not
> to copy checksum files when copying to local. Patch is forthcoming.
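The fix the reporter proposes — skipping checksum files during -get/-copyToLocal — amounts to a name filter on the files being copied. A minimal sketch, assuming Hadoop's convention of storing checksums as hidden ".<name>.crc" sidecar files next to the data file (`isChecksumFile` here is a hypothetical helper, not the actual method from the attached patch):

```java
// Sketch: decide whether a file is a CRC sidecar that copyToLocal
// should skip. Hadoop names checksum files ".<name>.crc" alongside
// the data file they cover.
public class CrcFilter {
    static boolean isChecksumFile(String name) {
        // Hidden file starting with "." and ending with ".crc"
        return name.startsWith(".") && name.endsWith(".crc");
    }

    public static void main(String[] args) {
        System.out.println(isChecksumFile(".part-00000.crc")); // checksum sidecar
        System.out.println(isChecksumFile("part-00000"));      // regular data file
    }
}
```

With such a filter applied during the local copy, a later -put of the same directory no longer collides with checksum files that DFS regenerates itself — which is the failure mode described in the issue.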
