[ https://issues.apache.org/jira/browse/KAFKA-5630?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16510050#comment-16510050 ]

Eugen Feller commented on KAFKA-5630:
-------------------------------------

Looks like we are running into a similar issue using a 0.10.2.1 broker and a Kafka 
Streams client on 0.11.0.1. I wonder if this fix helps only if the broker is also on 
0.11.0.1? This is my related JIRA: 
https://issues.apache.org/jira/browse/KAFKA-6977
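
In the meantime, here is a minimal defensive sketch of the workaround we are 
trying (plain Java consumer against the 0.11-era poll(long) API; the broker 
address, group id, and topic name below are placeholders, and it assumes the 
CorruptRecordException surfaces from poll() as a KafkaException at least once). 
It catches that exception and seeks past the current position, trading the loss 
of the corrupt record for liveness:

{code:java}
import java.util.Collections;
import java.util.Properties;

import org.apache.kafka.clients.consumer.ConsumerRecord;
import org.apache.kafka.clients.consumer.ConsumerRecords;
import org.apache.kafka.clients.consumer.KafkaConsumer;
import org.apache.kafka.common.KafkaException;
import org.apache.kafka.common.TopicPartition;

public class SkipCorruptRecords {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put("bootstrap.servers", "localhost:9092");    // placeholder
        props.put("group.id", "corrupt-record-workaround");  // placeholder
        props.put("key.deserializer",
                "org.apache.kafka.common.serialization.ByteArrayDeserializer");
        props.put("value.deserializer",
                "org.apache.kafka.common.serialization.ByteArrayDeserializer");

        try (KafkaConsumer<byte[], byte[]> consumer = new KafkaConsumer<>(props)) {
            consumer.subscribe(Collections.singletonList("my-compacted-topic")); // placeholder
            while (true) {
                try {
                    ConsumerRecords<byte[], byte[]> records = consumer.poll(1000L);
                    for (ConsumerRecord<byte[], byte[]> record : records) {
                        // process(record) would go here
                    }
                } catch (KafkaException e) {
                    // CorruptRecordException extends KafkaException. On an affected
                    // client the fetcher keeps re-serving the same position, so seek
                    // one offset ahead to get unstuck. The exception does not identify
                    // the bad partition, so this coarsely skips past the current
                    // position on every assigned partition and may therefore also
                    // drop one healthy record per partition.
                    for (TopicPartition tp : consumer.assignment()) {
                        consumer.seek(tp, consumer.position(tp) + 1);
                    }
                }
            }
        }
    }
}
{code}

Obviously this skips data, so it is only a stopgap until we can upgrade past the 
fix versions listed below.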

Thanks.

> Consumer poll loop over the same record after a CorruptRecordException
> ----------------------------------------------------------------------
>
>                 Key: KAFKA-5630
>                 URL: https://issues.apache.org/jira/browse/KAFKA-5630
>             Project: Kafka
>          Issue Type: Bug
>          Components: consumer
>    Affects Versions: 0.11.0.0
>            Reporter: Vincent Maurin
>            Assignee: Jiangjie Qin
>            Priority: Critical
>              Labels: regression, reliability
>             Fix For: 0.11.0.1, 1.0.0
>
>
> Hello
> While consuming a topic with log compaction enabled, I am getting an infinite 
> consumption loop of the same record, i.e., each call to poll returns the same 
> record 500 times (500 is my max.poll.records). I am using the Java client 
> 0.11.0.0.
> Running the code with the debugger, the initial problem comes from 
> `Fetcher.PartitionRecords.fetchRecords()`.
> Here I get an `org.apache.kafka.common.errors.CorruptRecordException: Record 
> size is less than the minimum record overhead (14)`.
> Then the boolean `hasExceptionInLastFetch` is set to true, causing the test 
> block in `Fetcher.PartitionRecords.nextFetchedRecord()` to always return the 
> last record.
> I guess the corruption problem is similar to 
> https://issues.apache.org/jira/browse/KAFKA-5582, but this behavior of the 
> client is probably not the expected one.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)