[ https://issues.apache.org/jira/browse/KAFKA-2213?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14559422#comment-14559422 ]

Joel Koshy commented on KAFKA-2213:
-----------------------------------

Case A: Yes, i.e., if the broker compression.type is 'producer', which is the 
default.
Case B: Yes, I think it would be good to handle this and write the messages 
out with the configured compression type. I'm not sure about batching, but I 
added it to the summary since in general one would want to batch a little 
before compressing. We could drop that though if people think it's weird. If 
you retain one out of a hundred messages, then the size reduction from 
compaction would likely be enough to offset any increase due to inadequate 
compression.
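For context, the compression.type discussed above can be set either broker-wide or as a per-topic override. A minimal sketch of the topic-level config (the topic name and chosen codec here are illustrative, not from the issue):

```properties
# Per-topic override, e.g. set via kafka-topics.sh --config at creation time.
# Valid values: uncompressed, gzip, snappy, lz4, or 'producer'
# ('producer' retains the original codec from the producer and is the default,
#  which is Case A above).
compression.type=gzip
```

With a value other than 'producer', the proposal is that the log cleaner re-compress retained messages with this codec rather than the codec of the original compressed message-set.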

> Log cleaner should write compacted messages using configured compression type
> -----------------------------------------------------------------------------
>
>                 Key: KAFKA-2213
>                 URL: https://issues.apache.org/jira/browse/KAFKA-2213
>             Project: Kafka
>          Issue Type: Bug
>            Reporter: Joel Koshy
>
> In KAFKA-1374 the log cleaner was improved to handle compressed messages. 
> There were a couple of follow-ups from that:
> * We write compacted messages using the original compression type in the 
> compressed message-set. We should instead append all retained messages with 
> the configured broker compression type of the topic.
> * While compressing messages we should ideally do some batching before 
> compression.
> * Investigate the use of the client compressor. (See the discussion in the 
> RBs for KAFKA-1374)



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)