[ https://issues.apache.org/jira/browse/HADOOP-3981?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=12629322#action_12629322 ]

Doug Cutting commented on HADOOP-3981:
--------------------------------------

> implement MD5-of-CRC32-every-512bytes-with-64Mblocks and use it as the 
> default file checksum algorithm for all FileSystem implementations?

We should just implement this for HDFS, where CRCs already exist.
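
For reference, a minimal local sketch of the arithmetic (illustrative only, 
not committed code): CRC32 each 512-byte chunk, MD5 the CRC32s within each 
64MB block, then MD5 the per-block digests into one file checksum.  In HDFS 
the per-chunk CRCs already exist on the datanodes, so only the MD5 steps would 
actually need to run there.

import java.io.InputStream;
import java.security.MessageDigest;
import java.util.zip.CRC32;

public class Md5Md5Crc32 {
  static final int BYTES_PER_CRC = 512;              // bytes.per.checksum
  static final long BLOCK_SIZE = 64L * 1024 * 1024;  // 64MB blocks

  /** MD5 of per-block MD5s of per-chunk CRC32s. */
  public static byte[] checksum(InputStream in) throws Exception {
    MessageDigest fileMD5 = MessageDigest.getInstance("MD5");
    MessageDigest blockMD5 = MessageDigest.getInstance("MD5");
    byte[] chunk = new byte[BYTES_PER_CRC];
    long bytesInBlock = 0;
    int n;
    while ((n = readFully(in, chunk)) > 0) {
      CRC32 crc = new CRC32();
      crc.update(chunk, 0, n);                       // CRC32 of one chunk
      int v = (int) crc.getValue();                  // low-order 32 bits
      blockMD5.update(new byte[] {
          (byte) (v >>> 24), (byte) (v >>> 16), (byte) (v >>> 8), (byte) v });
      bytesInBlock += n;
      if (bytesInBlock >= BLOCK_SIZE) {              // block boundary
        fileMD5.update(blockMD5.digest());           // digest() also resets
        bytesInBlock = 0;
      }
    }
    if (bytesInBlock > 0) {                          // trailing partial block
      fileMD5.update(blockMD5.digest());
    }
    return fileMD5.digest();                         // one checksum per file
  }

  /** Fill the buffer, looping over short reads; returns bytes read. */
  private static int readFully(InputStream in, byte[] buf) throws Exception {
    int off = 0;
    while (off < buf.length) {
      int r = in.read(buf, off, buf.length - off);
      if (r < 0) break;
      off += r;
    }
    return off;
  }
}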

> Should we return a list of MD5 [ ... ] ?

No, just a single checksum for the entire file.

> If bytes.per.checksum is not 512 or the block size is not 64MB in HDFS, how 
> about having getFileChecksum(Path f) return null?

No, it should return a different algorithm string that includes the file's 
bytes.per.checksum and block size.  I now think returning null by default is 
probably best, rather than having a default implementation that uses file 
length, since we should check file lengths explicitly and only compare 
checksums when the lengths match.
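
For example (hypothetical naming; the exact format is not settled here), the 
parameters can be baked into the algorithm name so that checksums computed 
under different settings never compare equal:

// bytesPerCRC and crcPerBlock come from the file's own settings
int bytesPerCRC = 512;
long crcPerBlock = (64L * 1024 * 1024) / bytesPerCRC;  // 131072 for 64MB blocks
String algorithm = "MD5-of-" + crcPerBlock + "MD5-of-" + bytesPerCRC + "CRC32";
// => "MD5-of-131072MD5-of-512CRC32"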

> For other FS, how about we return null?

Yes, I agree.
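
Then the caller's contract stays simple: check lengths first, and treat a null 
checksum as "cannot verify further".  A sketch under those assumptions, using 
the getFileChecksum(Path) API discussed above and a FileChecksum type whose 
equals() compares both algorithm and digest (class and method names here are 
illustrative):

import java.io.IOException;
import org.apache.hadoop.fs.FileChecksum;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

public class ChecksumCompare {
  /** Lengths first; checksums only when both sides can produce one. */
  public static boolean sameFile(FileSystem srcFS, Path src,
                                 FileSystem dstFS, Path dst) throws IOException {
    if (srcFS.getFileStatus(src).getLen() != dstFS.getFileStatus(dst).getLen()) {
      return false;                   // different lengths: different files
    }
    FileChecksum a = srcFS.getFileChecksum(src);  // null unless the FS supports it
    FileChecksum b = dstFS.getFileChecksum(dst);
    if (a == null || b == null) {
      return true;                    // no checksum available; lengths agree
    }
    return a.equals(b);               // equal only if algorithm and digest match
  }
}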




> Need a distributed file checksum algorithm for HDFS
> ---------------------------------------------------
>
>                 Key: HADOOP-3981
>                 URL: https://issues.apache.org/jira/browse/HADOOP-3981
>             Project: Hadoop Core
>          Issue Type: New Feature
>          Components: dfs
>            Reporter: Tsz Wo (Nicholas), SZE
>
> Traditional message digest algorithms, like MD5, SHA-1, etc., require reading 
> the entire input message sequentially in a central location.  HDFS supports 
> large files of multiple terabytes, so the overhead of reading an entire file 
> in one place is huge.  A distributed file checksum algorithm is needed for HDFS.
