[ https://issues.apache.org/jira/browse/KAFKA-546?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Jun Rao resolved KAFKA-546.
---------------------------

       Resolution: Fixed
    Fix Version/s: 0.8

Thanks for patch v5. +1. Committed to 0.8.
                
> Fix commit() in zk consumer for compressed messages
> ---------------------------------------------------
>
>                 Key: KAFKA-546
>                 URL: https://issues.apache.org/jira/browse/KAFKA-546
>             Project: Kafka
>          Issue Type: New Feature
>    Affects Versions: 0.8
>            Reporter: Jay Kreps
>            Assignee: Swapnil Ghike
>             Fix For: 0.8
>
>         Attachments: kafka-546-v1.patch, kafka-546-v2.patch, 
> kafka-546-v3.patch, kafka-546-v4.patch, kafka-546-v5.patch
>
>
> In 0.7.x and earlier versions, offsets were assigned by byte location in the 
> file. Because it wasn't possible to decompress directly from the middle of a 
> compressed block, messages inside a compressed message set effectively had 
> no offset of their own. As a result, the offset given to the consumer was 
> always the offset of the wrapper message set.
>
> In 0.8, after the logical offsets patch, messages in a compressed set do 
> have offsets. However, the server still needs to fetch from the beginning of 
> the compressed message set (otherwise it can't be decompressed), so a 
> commit() that occurs in the middle of a message set will still result in 
> some duplicates on the next fetch.
>
> This can be fixed in the ConsumerIterator by discarding messages whose 
> offsets are smaller than the fetch offset rather than giving them to the 
> consumer, as sketched below. This will make commit work correctly in the 
> presence of compressed messages (finally).
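
For illustration only, a minimal sketch of the discard step described above, assuming a simplified stand-in for the message type (the actual fix lives in kafka.consumer.ConsumerIterator and uses kafka.message.MessageAndOffset, not this standalone helper):

    // Minimal sketch (not the committed patch): after decompressing a fetched
    // message set, skip every message whose offset precedes the requested
    // fetch offset so already-consumed messages are not handed out again.
    // Example scenario: a compressed set holds offsets 100-109 and the
    // consumer committed at 105; the refetch must start at offset 100, so
    // offsets 100-104 have to be dropped or they would appear as duplicates.

    // Hypothetical stand-in type for illustration.
    case class MessageAndOffset(offset: Long, payload: Array[Byte])

    def discardBelowFetchOffset(decompressed: Iterator[MessageAndOffset],
                                fetchOffset: Long): Iterator[MessageAndOffset] =
      decompressed.dropWhile(_.offset < fetchOffset)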

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira
