[
https://issues.apache.org/jira/browse/HADOOP-3981?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=12629319#action_12629319
]
Tsz Wo (Nicholas), SZE commented on HADOOP-3981:
------------------------------------------------
How about we implement MD5-of-CRC32-every-512bytes-with-64Mblocks and use it as
the default file checksum algorithm for all FileSystems? Then, we don't have to
change the FileSystem API at this moment.
A few issues:
- Should we return a list of MD5s (one per block, so the length of the file
checksum would depend on the number of blocks) or a fixed-length checksum (e.g.
MD5-of-MD5-of-CRC32)?
- If bytes.per.checksum is not 512 or the block size is not 64MB in HDFS, how
about having getFileChecksum(Path f) return null?
- For other FileSystems, how about we return null for now? We should implement a
serial version of the MD5-of-CRC32-every-512bytes-with-64Mblocks algorithm later;
a rough sketch is below.
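For illustration only, here is a minimal serial sketch of the fixed-length
MD5-of-MD5-of-CRC32 variant. It assumes bytes.per.checksum = 512, 64MB blocks,
and that each CRC32 is fed to MD5 as a 4-byte big-endian integer; the class name
SerialFileChecksum and the exact CRC serialization are my own choices, not a
decision on how HDFS should serialize the checksums.

{code:java}
import java.io.FileInputStream;
import java.io.IOException;
import java.io.InputStream;
import java.security.MessageDigest;
import java.security.NoSuchAlgorithmException;
import java.util.zip.CRC32;

/** Serial sketch of MD5-of-CRC32-every-512bytes-with-64Mblocks (fixed-length variant). */
public class SerialFileChecksum {
  private static final int BYTES_PER_CHECKSUM = 512;        // bytes.per.checksum
  private static final long BLOCK_SIZE = 64L * 1024 * 1024; // 64MB HDFS block size

  public static byte[] checksum(InputStream in)
      throws IOException, NoSuchAlgorithmException {
    MessageDigest blockMd5 = MessageDigest.getInstance("MD5"); // MD5 of CRCs within one block
    MessageDigest fileMd5 = MessageDigest.getInstance("MD5");  // MD5 of per-block MD5s
    byte[] chunk = new byte[BYTES_PER_CHECKSUM];
    long bytesInBlock = 0;
    int n;
    while ((n = readFully(in, chunk)) > 0) {
      CRC32 crc = new CRC32();
      crc.update(chunk, 0, n);
      blockMd5.update(toBytes((int) crc.getValue())); // CRC32 of this 512-byte chunk
      bytesInBlock += n;
      if (bytesInBlock >= BLOCK_SIZE) {               // block boundary: fold block MD5 into file MD5
        fileMd5.update(blockMd5.digest());            // digest() also resets blockMd5
        bytesInBlock = 0;
      }
    }
    if (bytesInBlock > 0) {                           // last partial block
      fileMd5.update(blockMd5.digest());
    }
    return fileMd5.digest();                          // MD5-of-MD5-of-CRC32
  }

  // Read up to chunk.length bytes, looping until the chunk is full or EOF.
  private static int readFully(InputStream in, byte[] chunk) throws IOException {
    int off = 0;
    while (off < chunk.length) {
      int r = in.read(chunk, off, chunk.length - off);
      if (r < 0) break;
      off += r;
    }
    return off;
  }

  // Encode an int as 4 big-endian bytes (assumed CRC serialization).
  private static byte[] toBytes(int v) {
    return new byte[] {
      (byte) (v >>> 24), (byte) (v >>> 16), (byte) (v >>> 8), (byte) v
    };
  }

  public static void main(String[] args) throws Exception {
    try (InputStream in = new FileInputStream(args[0])) {
      byte[] sum = checksum(in);
      StringBuilder hex = new StringBuilder();
      for (byte b : sum) hex.append(String.format("%02x", b));
      System.out.println("MD5-of-MD5-of-CRC32: " + hex);
    }
  }
}
{code}

The distributed version would compute each per-block MD5-of-CRC32 on the
datanode holding that block and only ship the per-block digests to the client,
which then computes the final MD5 over them.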
> Need a distributed file checksum algorithm for HDFS
> ---------------------------------------------------
>
> Key: HADOOP-3981
> URL: https://issues.apache.org/jira/browse/HADOOP-3981
> Project: Hadoop Core
> Issue Type: New Feature
> Components: dfs
> Reporter: Tsz Wo (Nicholas), SZE
>
> Traditional message digest algorithms, like MD5, SHA1, etc., require reading
> the entire input message sequentially in a central location. HDFS supports
> large files with multiple terabytes. The overhead of reading the entire
> file is huge. A distributed file checksum algorithm is needed for HDFS.