[ 
https://issues.apache.org/jira/browse/KAFKA-8722?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

ChenLin updated KAFKA-8722:
---------------------------
    Attachment: image-2019-07-27-14-50-08-128.png

> In some cases, the CRC check does not prevent dirty data from being written.
> ----------------------------------------------------------------------------
>
>                 Key: KAFKA-8722
>                 URL: https://issues.apache.org/jira/browse/KAFKA-8722
>             Project: Kafka
>          Issue Type: Improvement
>          Components: log
>    Affects Versions: 0.10.2.2
>            Reporter: ChenLin
>            Priority: Major
>             Fix For: 0.10.2.2
>
>         Attachments: image-2019-07-27-14-50-08-128.png
>
>
> In our production environment, while consuming data from a Kafka topic in an 
> application, we encountered the following error:
> org.apache.kafka.common.KafkaException: Record for partition 
> rl_dqn_debug_example-49 at offset 2911287689 is invalid, cause: Record is 
> corrupt (stored crc = 3580880396, computed crc = 1701403171)
> By inspecting the code, we found that in some cases Kafka does not verify the 
> record's CRC before writing it to disk, allowing dirty data to be persisted, 
> so we fixed it.



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)