[
https://issues.apache.org/jira/browse/KAFKA-9613?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17899092#comment-17899092
]
Divij Vaidya edited comment on KAFKA-9613 at 11/18/24 9:34 AM:
---------------------------------------------------------------
> I tried to delete records to skip past the corrupted log on the faulty disk, but I
> got a timeout error. I don't know why it times out; if the offset is slightly
> different, it does not time out. We can't delete the bad records (the log is messed
> up), but retention can.
This doesn't sound like the correct behaviour. Theoretically, the DeleteRecords API
will move the log start offset beyond the corrupted record, and hence a user should be
able to recover from a corrupted record by simply moving the start offset ahead of it.
Could you add the server-side logs from when you received the timeout for
DeleteRecords?
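For reference, a minimal sketch of what that recovery could look like with the Kafka Admin client. The bootstrap address, topic, partition, and target offset below are placeholders, not values taken from this issue:

{code:java}
import java.util.Collections;
import java.util.Map;
import java.util.Properties;

import org.apache.kafka.clients.admin.Admin;
import org.apache.kafka.clients.admin.AdminClientConfig;
import org.apache.kafka.clients.admin.DeleteRecordsResult;
import org.apache.kafka.clients.admin.DeletedRecords;
import org.apache.kafka.clients.admin.RecordsToDelete;
import org.apache.kafka.common.TopicPartition;

public class SkipCorruptedRecord {
    public static void main(String[] args) throws Exception {
        Properties props = new Properties();
        // Placeholder bootstrap server; replace with the actual broker address.
        props.put(AdminClientConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092");

        try (Admin admin = Admin.create(props)) {
            // Partition that holds the corrupted record (placeholder values).
            TopicPartition tp = new TopicPartition("SANDBOX.BROKER.NEWORDER", 0);

            // Delete everything before the first offset after the corruption,
            // i.e. move the log start offset ahead of the corrupted record.
            long firstGoodOffset = 211887L; // placeholder offset
            Map<TopicPartition, RecordsToDelete> request =
                    Collections.singletonMap(tp, RecordsToDelete.beforeOffset(firstGoodOffset));

            DeleteRecordsResult result = admin.deleteRecords(request);
            DeletedRecords deleted = result.lowWatermarks().get(tp).get();
            System.out.println("New log start offset (low watermark): " + deleted.lowWatermark());
        }
    }
}
{code}

The same operation can be issued from the command line with the kafka-delete-records.sh tool and an --offset-json-file, so either path reproducing the timeout would be useful to see in the broker logs.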
> CorruptRecordException: Found record size 0 smaller than minimum record
> overhead
> --------------------------------------------------------------------------------
>
> Key: KAFKA-9613
> URL: https://issues.apache.org/jira/browse/KAFKA-9613
> Project: Kafka
> Issue Type: Bug
> Components: core
> Affects Versions: 2.6.2
> Reporter: Amit Khandelwal
> Assignee: hudeqi
> Priority: Major
> Attachments: image-2024-11-13-14-02-45-768.png
>
>
> 20200224;21:01:38: [2020-02-24 21:01:38,615] ERROR [ReplicaManager broker=0]
> Error processing fetch with max size 1048576 from consumer on partition
> SANDBOX.BROKER.NEWORDER-0: (fetchOffset=211886, logStartOffset=-1,
> maxBytes=1048576, currentLeaderEpoch=Optional.empty)
> (kafka.server.ReplicaManager)
> 20200224;21:01:38: org.apache.kafka.common.errors.CorruptRecordException:
> Found record size 0 smaller than minimum record overhead (14) in file
> /data/tmp/kafka-topic-logs/SANDBOX.BROKER.NEWORDER-0/00000000000000000000.log.
> 20200224;21:05:48: [2020-02-24 21:05:48,711] INFO [GroupMetadataManager
> brokerId=0] Removed 0 expired offsets in 1 milliseconds.
> (kafka.coordinator.group.GroupMetadataManager)
> 20200224;21:10:22: [2020-02-24 21:10:22,204] INFO [GroupCoordinator 0]:
> Member
> xxxxxxxx_011-9e61d2c9-ce5a-4231-bda1-f04e6c260dc0-StreamThread-1-consumer-27768816-ee87-498f-8896-191912282d4f
> in group yyyyyyyyy_011 has failed, removing it from the group
> (kafka.coordinator.group.GroupCoordinator)
>
> [https://stackoverflow.com/questions/60404510/kafka-broker-issue-replica-manager-with-max-size#]
>
>
--
This message was sent by Atlassian Jira
(v8.20.10#820010)