[ https://issues.apache.org/jira/browse/HBASE-14738?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14988697#comment-14988697 ]

Apekshit Sharma commented on HBASE-14738:
-----------------------------------------

In testAllChecksumTypes(), should we also check the checksum type, to make sure 
it's not using the default checksum every time? I see no other test in that 
file testing this, so we might as well do it here.
For everything else, LGTM.
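
A minimal, self-contained sketch of the kind of assertion suggested above. It is
shown against Hadoop's DataChecksum for brevity rather than against the actual
TestChecksum fixtures; in testAllChecksumTypes() the same assertion would be made
on the block the test writes and reads back. The class name is illustrative, not
part of the patch.

{code:java}
import static org.junit.Assert.assertEquals;

import org.apache.hadoop.util.DataChecksum;
import org.junit.Test;

// Illustrative only: verify that asking for a specific checksum type really
// yields that type instead of silently falling back to the default.
public class ChecksumTypeAssertionSketch {
  @Test
  public void requestedChecksumTypeIsUsed() {
    for (DataChecksum.Type type :
        new DataChecksum.Type[] { DataChecksum.Type.CRC32, DataChecksum.Type.CRC32C }) {
      DataChecksum dc = DataChecksum.newDataChecksum(type, 512 /* bytesPerChecksum */);
      // The type we asked for must be the type we got back.
      assertEquals(type, dc.getChecksumType());
    }
  }
}
{code}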

> Backport HBASE-11927 (Use Native Hadoop Library for HFile checksum) to 0.98
> ---------------------------------------------------------------------------
>
>                 Key: HBASE-14738
>                 URL: https://issues.apache.org/jira/browse/HBASE-14738
>             Project: HBase
>          Issue Type: Task
>            Reporter: Andrew Purtell
>            Assignee: Andrew Purtell
>             Fix For: 0.98.16
>
>         Attachments: HBASE-14738-0.98.patch
>
>
> Profiling 0.98.15, I see 20-30% of CPU time spent in Hadoop's PureJavaCrc32. 
> Not surprising given the previous results described in HBASE-11927. Backport.
> There are two issues with the backport:
> # The patch on HBASE-11927 changes the default CRC type from CRC32 to CRC32C. 
> Although the change is backwards compatible (files with either CRC type are 
> handled correctly and transparently), we should probably leave the default 
> alone in 0.98 and advise users to switch to CRC32C via a site configuration 
> change if desired, for potential hardware acceleration (a sketch follows below).
> # We need a shim to handle differences in Hadoop's DataChecksum type across Hadoop versions (also sketched below).
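
For the first item above, a sketch of the suggested opt-in, assuming the property
name matches the one in hbase-default.xml (hbase.hstore.checksum.algorithm); in
practice users would set the same property in hbase-site.xml rather than in code.

{code:java}
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.HBaseConfiguration;

// Sketch only: opt in to CRC32C (hardware-acceleratable via the native Hadoop
// library) without changing the 0.98 default of CRC32.
public class Crc32cOptInSketch {
  public static Configuration withCrc32c() {
    Configuration conf = HBaseConfiguration.create();
    // Property name assumed from hbase-default.xml; valid values include
    // NULL, CRC32 and CRC32C.
    conf.set("hbase.hstore.checksum.algorithm", "CRC32C");
    return conf;
  }
}
{code}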
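
For the second item, one possible shape for such a shim, purely as an illustration
of the idea and not the actual backport: resolve the Hadoop 2 DataChecksum.Type-based
factory reflectively and fall back to a plain java.util.zip checksum on Hadoop
versions that lack it. All names here are hypothetical.

{code:java}
import java.lang.reflect.Method;
import java.util.zip.CRC32;
import java.util.zip.Checksum;

// Hypothetical shim sketch: hide differences in Hadoop's DataChecksum API behind
// one factory method so HFile checksum code does not depend on a specific Hadoop
// version at compile time.
public final class DataChecksumShimSketch {
  private DataChecksumShimSketch() {}

  /** Returns a Checksum for the given algorithm name, e.g. "CRC32" or "CRC32C". */
  public static Checksum newChecksum(String algorithm, int bytesPerChecksum) {
    try {
      // Hadoop 2.x path: DataChecksum.newDataChecksum(DataChecksum.Type, int).
      // DataChecksum implements java.util.zip.Checksum, so it can be returned as one.
      Class<?> dcClass = Class.forName("org.apache.hadoop.util.DataChecksum");
      Class<?> typeClass = Class.forName("org.apache.hadoop.util.DataChecksum$Type");
      Object type = typeClass.getMethod("valueOf", String.class).invoke(null, algorithm);
      Method factory = dcClass.getMethod("newDataChecksum", typeClass, int.class);
      return (Checksum) factory.invoke(null, type, bytesPerChecksum);
    } catch (Exception e) {
      // Hadoop versions without the Type enum: fall back to plain java.util.zip
      // CRC32 (ignoring the requested algorithm in this simplified fallback).
      return new CRC32();
    }
  }
}
{code}

Reflection is just one way to keep a single source tree compiling against both Hadoop lines; the actual patch may take a different approach.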


