[
https://issues.apache.org/jira/browse/KAFKA-79?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13084162#comment-13084162
]
Chris Burroughs commented on KAFKA-79:
--------------------------------------
- Have you done any performance comparisons?
- Do you think you could produce a diff (link to github is fine) that shows all
of the compression changes?
- My reading of CompressionCodec is that getCompressionCodec would have to be
edited to add a new codec (so it would not be user-pluggable). Do you think
that's something we should support (and if so, I guess that warrants a separate ticket)?
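The pluggability concern above can be illustrated with a small sketch. This is a hypothetical model, not Kafka's actual CompressionCodec code: a static factory maps a wire-format id to a codec via a hard-coded lookup, so supporting a new codec means editing this type rather than registering one externally.

```java
import java.util.NoSuchElementException;

// Hypothetical sketch of the lookup being discussed; names and ids are
// illustrative, not the real Kafka API.
enum CompressionCodec {
    NONE(0), GZIP(1);

    final int id;
    CompressionCodec(int id) { this.id = id; }

    // The mapping from wire id to codec is closed: a new codec (say
    // SNAPPY(2)) requires editing this enum, so it is not user-pluggable.
    static CompressionCodec getCompressionCodec(int id) {
        for (CompressionCodec c : values()) {
            if (c.id == id) return c;
        }
        throw new NoSuchElementException("unknown codec id: " + id);
    }
}

public class CodecDemo {
    public static void main(String[] args) {
        System.out.println(CompressionCodec.getCompressionCodec(1)); // GZIP
    }
}
```

Making this user-pluggable would presumably mean replacing the fixed enum with some registration mechanism, which is the separate-ticket question raised above.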
> Introduce the compression feature in Kafka
> ------------------------------------------
>
> Key: KAFKA-79
> URL: https://issues.apache.org/jira/browse/KAFKA-79
> Project: Kafka
> Issue Type: New Feature
> Affects Versions: 0.6
> Reporter: Neha Narkhede
> Fix For: 0.7
>
>
> With this feature, we can enable end-to-end block compression in Kafka. The
> idea is to enable compression on the producer for some or all topics, write
> the data in compressed format on the server and make the consumers
> compression-aware. The data will be decompressed only on the consumer side.
> Ideally, there should be a choice of compression codecs to be used by the
> producer. That means a change to the message header as well as the network
> byte format. On the consumer side, the state maintenance behavior of the
> zookeeper consumer changes. For compressed data, the consumed offset will be
> advanced one compressed message at a time. For uncompressed data, the
> consumed offset will be advanced one message at a time.
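The offset rule in the description above can be sketched as follows. This is an illustrative model (the Batch type and consume method are hypothetical, not Kafka code): a compressed wrapper holding several inner messages advances the consumed offset once per wrapper, while uncompressed data advances it once per message.

```java
import java.util.List;

public class OffsetDemo {
    // Hypothetical model: a batch is either one uncompressed message or
    // one compressed wrapper containing several inner messages.
    record Batch(List<String> innerMessages, boolean compressed) {}

    static long consume(List<Batch> batches) {
        long consumedOffset = 0;
        for (Batch b : batches) {
            for (String m : b.innerMessages()) {
                // process m ...
            }
            // The offset advances one step per batch, regardless of how
            // many inner messages a compressed wrapper held.
            consumedOffset += 1;
        }
        return consumedOffset;
    }

    public static void main(String[] args) {
        List<Batch> stream = List.of(
            new Batch(List.of("a", "b", "c"), true), // one compressed set
            new Batch(List.of("d"), false));         // one plain message
        System.out.println(consume(stream)); // 2
    }
}
```

Under this model, a consumer that restarts from the last consumed offset re-reads a partially processed compressed set from its beginning, which is the state-maintenance change the description attributes to the zookeeper consumer.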
--
This message is automatically generated by JIRA.
For more information on JIRA, see: http://www.atlassian.com/software/jira