[ https://issues.apache.org/jira/browse/HADOOP-6372?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=12778185#action_12778185 ]

dhruba borthakur commented on HADOOP-6372:
------------------------------------------

Does this mean that HDFS data blocks that were checksummed using this algorithm 
are now unreadable after this bug fix? How likely is that?

> MurmurHash does not yield the same results as the reference C++ 
> implementation when size % 4 >= 2
> -------------------------------------------------------------------------------------------------
>
>                 Key: HADOOP-6372
>                 URL: https://issues.apache.org/jira/browse/HADOOP-6372
>             Project: Hadoop Common
>          Issue Type: Bug
>          Components: util
>    Affects Versions: 0.20.1
>            Reporter: olivier gillet
>            Priority: Trivial
>         Attachments: HADOOP-6372.patch, murmur.patch
>
>   Original Estimate: 1h
>  Remaining Estimate: 1h
>
> The last rounds of MurmurHash read the tail bytes in reverse order: in the 
> block that processes the remaining length % 4 bytes, data[length - 3], 
> data[length - 2] and data[length - 1] should be data[len_m + 2], 
> data[len_m + 1] and data[len_m], respectively.
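
For context, here is a minimal Java sketch of MurmurHash2 with the
corrected tail handling. The constants and final mix follow the public
reference C++ implementation; the class and method names and the
variables len_m/left are illustrative, not the exact Hadoop code.

  /** Illustrative MurmurHash2 with the tail read in reference order. */
  public final class MurmurHash2Sketch {

    public static int hash(byte[] data, int length, int seed) {
      final int m = 0x5bd1e995;
      final int r = 24;

      int h = seed ^ length;

      // Body: mix the input four bytes at a time (little-endian).
      int len_m = length & ~3;  // largest multiple of 4 <= length
      for (int i = 0; i < len_m; i += 4) {
        int k = (data[i] & 0xff)
              | ((data[i + 1] & 0xff) << 8)
              | ((data[i + 2] & 0xff) << 16)
              | ((data[i + 3] & 0xff) << 24);
        k *= m;
        k ^= k >>> r;
        k *= m;

        h *= m;
        h ^= k;
      }

      // Tail: the remaining length % 4 bytes, read in the same order as
      // the C++ reference. The buggy version indexed data[length - 3],
      // data[length - 2], data[length - 1] here, reversing the bytes.
      int left = length - len_m;
      if (left >= 3) h ^= (data[len_m + 2] & 0xff) << 16;
      if (left >= 2) h ^= (data[len_m + 1] & 0xff) << 8;
      if (left >= 1) {
        h ^= (data[len_m] & 0xff);
        h *= m;
      }

      // Final avalanche.
      h ^= h >>> 13;
      h *= m;
      h ^= h >>> 15;

      return h;
    }
  }

When only one byte is left over (length % 4 == 1), data[length - 1] and
data[len_m] are the same byte, which is why only inputs with
length % 4 >= 2 hash differently after the fix.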

-- 
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.
