[
https://issues.apache.org/jira/browse/KAFKA-9613?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17899099#comment-17899099
]
Chia-Ping Tsai commented on KAFKA-9613:
---------------------------------------
Additionally, we need more details about the timeout exception. Normally, it is
caused by the host running the script being unable to connect to the partition leader.
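If it helps to narrow this down: the suspect segment can be dumped with bin/kafka-dump-log.sh to confirm the corruption, and a quick sketch like the one below can show whether the host running the script can actually reach each partition leader. It looks up the leaders through the regular AdminClient and then attempts a plain TCP connect to each one. The bootstrap server, topic name, and connect timeout are placeholders taken from this report, not values I have verified.
{code:java}
import org.apache.kafka.clients.admin.AdminClient;
import org.apache.kafka.clients.admin.AdminClientConfig;
import org.apache.kafka.clients.admin.TopicDescription;
import org.apache.kafka.common.Node;
import org.apache.kafka.common.TopicPartitionInfo;

import java.net.InetSocketAddress;
import java.net.Socket;
import java.util.Collections;
import java.util.Properties;

public class LeaderConnectivityCheck {
    public static void main(String[] args) throws Exception {
        // Placeholders: point these at the affected cluster and topic.
        String bootstrap = "localhost:9092";
        String topic = "SANDBOX.BROKER.NEWORDER";

        Properties props = new Properties();
        props.put(AdminClientConfig.BOOTSTRAP_SERVERS_CONFIG, bootstrap);

        try (AdminClient admin = AdminClient.create(props)) {
            // Ask the cluster for the current leader of every partition of the topic.
            TopicDescription desc = admin.describeTopics(Collections.singletonList(topic))
                    .all().get().get(topic);

            for (TopicPartitionInfo p : desc.partitions()) {
                Node leader = p.leader();
                if (leader == null || leader.isEmpty()) {
                    System.out.printf("partition %d currently has no leader%n", p.partition());
                    continue;
                }
                // A plain TCP connect rules out DNS/firewall problems between this host and the leader.
                try (Socket socket = new Socket()) {
                    socket.connect(new InetSocketAddress(leader.host(), leader.port()), 5_000);
                    System.out.printf("partition %d leader %s:%d is reachable%n",
                            p.partition(), leader.host(), leader.port());
                } catch (Exception e) {
                    System.out.printf("partition %d leader %s:%d is NOT reachable: %s%n",
                            p.partition(), leader.host(), leader.port(), e.getMessage());
                }
            }
        }
    }
}
{code}
This does not exercise the SSL/SASL handshake, but if the raw socket connect already fails we know the timeout is a network problem rather than a consequence of the corrupted segment.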
> CorruptRecordException: Found record size 0 smaller than minimum record
> overhead
> --------------------------------------------------------------------------------
>
> Key: KAFKA-9613
> URL: https://issues.apache.org/jira/browse/KAFKA-9613
> Project: Kafka
> Issue Type: Bug
> Components: core
> Affects Versions: 2.6.2
> Reporter: Amit Khandelwal
> Assignee: hudeqi
> Priority: Major
> Attachments: image-2024-11-13-14-02-45-768.png
>
>
> 20200224;21:01:38: [2020-02-24 21:01:38,615] ERROR [ReplicaManager broker=0]
> Error processing fetch with max size 1048576 from consumer on partition
> SANDBOX.BROKER.NEWORDER-0: (fetchOffset=211886, logStartOffset=-1,
> maxBytes=1048576, currentLeaderEpoch=Optional.empty)
> (kafka.server.ReplicaManager)
> 20200224;21:01:38: org.apache.kafka.common.errors.CorruptRecordException:
> Found record size 0 smaller than minimum record overhead (14) in file
> /data/tmp/kafka-topic-logs/SANDBOX.BROKER.NEWORDER-0/00000000000000000000.log.
> 20200224;21:05:48: [2020-02-24 21:05:48,711] INFO [GroupMetadataManager
> brokerId=0] Removed 0 expired offsets in 1 milliseconds.
> (kafka.coordinator.group.GroupMetadataManager)
> 20200224;21:10:22: [2020-02-24 21:10:22,204] INFO [GroupCoordinator 0]:
> Member
> xxxxxxxx_011-9e61d2c9-ce5a-4231-bda1-f04e6c260dc0-StreamThread-1-consumer-27768816-ee87-498f-8896-191912282d4f
> in group yyyyyyyyy_011 has failed, removing it from the group
> (kafka.coordinator.group.GroupCoordinator)
>
> [https://stackoverflow.com/questions/60404510/kafka-broker-issue-replica-manager-with-max-size#]
>
>
--
This message was sent by Atlassian Jira
(v8.20.10#820010)