[ https://issues.apache.org/jira/browse/KAFKA-374?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13402363#comment-13402363 ]

Jay Kreps commented on KAFKA-374:
---------------------------------

Yeah, that's a little unclear. The "java" column is the pure-JVM Scala 
implementation in the patch, and the "native" version is java.util.zip.CRC32, 
which uses JNI to call a C CRC library.
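
For what it's worth, a pure-JVM CRC32 is typically table-driven, along the 
lines of the sketch below. This is illustrative only, not the code from the 
attached patch, and the class name is made up:

    // Table-driven CRC-32 (reflected polynomial 0xEDB88320), pure JVM:
    // no JNI call per record, so small inputs avoid native-call overhead.
    public final class PureJavaCrc32Sketch {
        private static final int[] TABLE = new int[256];
        static {
            for (int i = 0; i < 256; i++) {
                int c = i;
                for (int k = 0; k < 8; k++)
                    c = ((c & 1) != 0) ? (c >>> 1) ^ 0xEDB88320 : (c >>> 1);
                TABLE[i] = c;
            }
        }

        public static int crc32(byte[] data, int off, int len) {
            int crc = 0xFFFFFFFF;
            for (int i = off; i < off + len; i++)
                crc = TABLE[(crc ^ data[i]) & 0xFF] ^ (crc >>> 8);
            return ~crc;  // same value java.util.zip.CRC32 produces
        }
    }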
                
> Move to java CRC32 implementation
> ---------------------------------
>
>                 Key: KAFKA-374
>                 URL: https://issues.apache.org/jira/browse/KAFKA-374
>             Project: Kafka
>          Issue Type: New Feature
>          Components: core
>    Affects Versions: 0.8
>            Reporter: Jay Kreps
>            Priority: Minor
>              Labels: newbie
>         Attachments: KAFKA-374-draft.patch
>
>
> We keep a per-record CRC32. This is a fairly cheap algorithm, but the 
> standard Java implementation (java.util.zip.CRC32) uses JNI, which seems to 
> be a bit expensive for small records. I have seen this before in Kafka 
> profiles, and I noticed it on another application I was working on. 
> Basically, with small records the native implementation can only checksum 
> < 100 MB/sec. Hadoop has done some analysis of this and replaced it with a 
> Java implementation that is 2x faster for large values and 5-10x faster for 
> small values. Details are in HADOOP-6148. We should do a quick read/write 
> benchmark on log and message set iteration and see if this improves things.
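
A rough sketch of the kind of quick throughput check the description 
suggests, using only java.util.zip.CRC32. The class name, record size, and 
iteration count are arbitrary, and a real measurement would also want JIT 
warm-up and multiple runs:

    import java.util.zip.CRC32;

    public class CrcBenchSketch {
        public static void main(String[] args) {
            byte[] record = new byte[100];     // small record: the slow case
            new java.util.Random(42).nextBytes(record);
            final int iters = 10000000;

            long sink = 0;                     // keep the JIT from eliding work
            long start = System.nanoTime();
            for (int i = 0; i < iters; i++) {
                CRC32 crc = new CRC32();       // JNI-backed implementation
                crc.update(record, 0, record.length);
                sink += crc.getValue();
            }
            double secs = (System.nanoTime() - start) / 1e9;
            double mbPerSec = (double) record.length * iters / secs / (1024 * 1024);
            System.out.printf("java.util.zip.CRC32: %.1f MB/sec (sink=%d)%n",
                              mbPerSec, sink);
        }
    }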
