[ https://issues.apache.org/jira/browse/HDFS-4605?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13603514#comment-13603514 ]

Kihwal Lee commented on HDFS-4605:
----------------------------------

If we do this, there will be a restriction that the summing hash function 
must consume 32 bits (a CRC32 or CRC32C value) at a time.
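
For illustration, a minimal sketch of the kind of composition this implies,
assuming the client can obtain the per-chunk CRCs in file order (the class
and method names below are hypothetical, not existing HDFS APIs): fold each
32-bit chunk CRC into a single digest, ignoring block boundaries.

    import java.security.MessageDigest;
    import java.security.NoSuchAlgorithmException;

    // Hypothetical sketch: fold the per-chunk CRC32/CRC32C values of a file
    // into one MD5 digest in file order, ignoring block boundaries entirely.
    // The digest consumes exactly 4 bytes per chunk, which is why the chunk
    // checksum has to be a 32-bit sum (CRC32 or CRC32C).
    public class CompositeFileChecksumSketch {
      public static byte[] checksumFromChunkCrcs(int[] chunkCrcsInFileOrder)
          throws NoSuchAlgorithmException {
        MessageDigest md5 = MessageDigest.getInstance("MD5");
        byte[] buf = new byte[4];
        for (int crc : chunkCrcsInFileOrder) {
          // Serialize each 32-bit chunk CRC big-endian and feed it to the digest.
          buf[0] = (byte) (crc >>> 24);
          buf[1] = (byte) (crc >>> 16);
          buf[2] = (byte) (crc >>> 8);
          buf[3] = (byte) crc;
          md5.update(buf);
        }
        return md5.digest();
      }
    }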
                
> Implement block-size independent file checksum
> ----------------------------------------------
>
>                 Key: HDFS-4605
>                 URL: https://issues.apache.org/jira/browse/HDFS-4605
>             Project: Hadoop HDFS
>          Issue Type: Bug
>          Components: datanode, hdfs-client
>    Affects Versions: 3.0.0
>            Reporter: Kihwal Lee
>
> The value of current getFileChecksum() is block-size dependent. Since 
> FileChecksum is mainly intended for comparing the content of files, removing this 
> dependency will make FileChecksum in HDFS relevant in more use cases.
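
For context, a rough sketch of why the current value is block-size dependent,
assuming the MD5-of-per-block-MD5s-of-chunk-CRCs structure used by the
existing getFileChecksum() (the class below is illustrative only, not HDFS
code): grouping the same chunk CRCs by different block boundaries changes the
per-block MD5s and therefore the final digest.

    import java.security.MessageDigest;
    import java.security.NoSuchAlgorithmException;
    import java.util.Arrays;

    // Illustration only, not HDFS code: the same chunk CRCs grouped by two
    // different block boundaries produce two different MD5-of-block-MD5s.
    public class BlockSizeDependenceDemo {
      static byte[] md5(byte[] data) throws NoSuchAlgorithmException {
        return MessageDigest.getInstance("MD5").digest(data);
      }

      // MD5 over per-block MD5s, where each block covers crcsPerBlock chunk CRCs.
      static byte[] md5OfBlockMd5s(byte[] chunkCrcBytes, int crcsPerBlock)
          throws NoSuchAlgorithmException {
        MessageDigest outer = MessageDigest.getInstance("MD5");
        int blockBytes = crcsPerBlock * 4;
        for (int off = 0; off < chunkCrcBytes.length; off += blockBytes) {
          int end = Math.min(off + blockBytes, chunkCrcBytes.length);
          outer.update(md5(Arrays.copyOfRange(chunkCrcBytes, off, end)));
        }
        return outer.digest();
      }

      public static void main(String[] args) throws NoSuchAlgorithmException {
        byte[] crcs = new byte[16 * 4]; // 16 chunk CRCs for identical file data
        // Same CRCs, two block sizes -> two different file checksums.
        System.out.println(Arrays.equals(
            md5OfBlockMd5s(crcs, 4), md5OfBlockMd5s(crcs, 8))); // prints false
      }
    }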

