[ 
https://issues.apache.org/jira/browse/KAFKA-15418?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Lucia Cerchie updated KAFKA-15418:
----------------------------------
    Description: 
The docs [state|https://github.com/apache/kafka/blob/0912ca27e2a229d2ebe02f4d1dabc40ed5fab0bb/docs/design.html#L139-L140]:


 "Kafka supports this with an efficient batching format. A batch of messages 
can be clumped together compressed and sent to the server in this form. This 
batch of messages will be written in compressed form and will
   remain compressed in the log and will only be decompressed by the consumer."


However, brokers always perform some amount of batch decompression in order to 
validate the data. Internally, the LogValidator class implements the validation 
logic, and message decompression ultimately happens in the relevant 
CompressionType's wrapForInput implementation. These lines need updating to 
reflect that, including the scenarios that require full payload decompression 
on the broker (a compacted topic, or a topic-level compression codec that 
differs from the producer's codec). 
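
For reference, here is a minimal sketch of the codec-mismatch scenario, using only the public Admin and Producer APIs (the topic name, bootstrap address, and codec choices are made up for illustration): the topic is configured with lz4 while the producer sends gzip-compressed batches, so the broker cannot keep the producer batch as-is and has to fully decompress and recompress it before appending it to the log.

{code:java}
import java.util.List;
import java.util.Map;
import java.util.Properties;

import org.apache.kafka.clients.admin.Admin;
import org.apache.kafka.clients.admin.NewTopic;
import org.apache.kafka.clients.producer.KafkaProducer;
import org.apache.kafka.clients.producer.ProducerConfig;
import org.apache.kafka.clients.producer.ProducerRecord;
import org.apache.kafka.common.config.TopicConfig;
import org.apache.kafka.common.serialization.StringSerializer;

public class CodecMismatchExample {
    public static void main(String[] args) throws Exception {
        String bootstrap = "localhost:9092"; // hypothetical broker address

        // Topic-level codec: lz4 (differs from the producer's codec below).
        try (Admin admin = Admin.create(Map.of("bootstrap.servers", bootstrap))) {
            NewTopic topic = new NewTopic("codec-mismatch-demo", 1, (short) 1)
                    .configs(Map.of(TopicConfig.COMPRESSION_TYPE_CONFIG, "lz4"));
            admin.createTopics(List.of(topic)).all().get();
        }

        // Producer-side codec: gzip. Because it differs from the topic's
        // compression.type, the broker decompresses the full payload and
        // recompresses it with lz4 before writing it to the log.
        Properties props = new Properties();
        props.put(ProducerConfig.BOOTSTRAP_SERVERS_CONFIG, bootstrap);
        props.put(ProducerConfig.KEY_SERIALIZER_CLASS_CONFIG, StringSerializer.class.getName());
        props.put(ProducerConfig.VALUE_SERIALIZER_CLASS_CONFIG, StringSerializer.class.getName());
        props.put(ProducerConfig.COMPRESSION_TYPE_CONFIG, "gzip");

        try (KafkaProducer<String, String> producer = new KafkaProducer<>(props)) {
            producer.send(new ProducerRecord<>("codec-mismatch-demo", "key", "value")).get();
        }
    }
}
{code}

The compacted-topic case is analogous: compaction requires every record to have a key, so on append the broker has to iterate over the records, and therefore decompress the full batch, regardless of whether the codecs match.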

 

 

  was:
The docs 
[state|https://github.com/apache/kafka/blob/0912ca27e2a229d2ebe02f4d1dabc40ed5fab0bb/docs/design.html#L139-L140]:
 
 Kafka supports this with an efficient batching format. A batch of messages can 
be clumped together compressed and sent to the server in this form. This batch 
of messages will be written in compressed form and will remain compressed in 
the log and will only be decompressed by the consumer.
However, brokers always perform some amount of batch decompression in order to 
validate data. Internally, the LogValidator class implements the validation 
logic, and message decompression ultimately happens in the relevant 
CompressionType's wrapForInput implementation. These lines need updating to 
reflect that, including the scenarios that require full payload decompression 
on the broker (compacted topic, or topic level compression codec different from 
producer's codec). 

 

 


> Update statement on decompression location 
> -------------------------------------------
>
>                 Key: KAFKA-15418
>                 URL: https://issues.apache.org/jira/browse/KAFKA-15418
>             Project: Kafka
>          Issue Type: Improvement
>          Components: docs
>            Reporter: Lucia Cerchie
>            Priority: Minor
>              Labels: docs
>             Fix For: 3.5.1
>
>   Original Estimate: 24h
>  Remaining Estimate: 24h
>
> The docs [state|https://github.com/apache/kafka/blob/0912ca27e2a229d2ebe02f4d1dabc40ed5fab0bb/docs/design.html#L139-L140]: 
>  "Kafka supports this with an efficient batching format. A batch of messages 
> can be clumped together compressed and sent to the server in this form. This 
> batch of messages will be written in compressed form and will remain 
> compressed in the log and will only be decompressed by the consumer."
> However, brokers always perform some amount of batch decompression in order 
> to validate data. Internally, the LogValidator class implements the 
> validation logic, and message decompression ultimately happens in the 
> relevant CompressionType's wrapForInput implementation. These lines need 
> updating to reflect that, including the scenarios that require full payload 
> decompression on the broker (compacted topic, or topic level compression 
> codec different from producer's codec). 
>  
>  



--
This message was sent by Atlassian Jira
(v8.20.10#820010)
