[ https://issues.apache.org/jira/browse/KAFKA-5630?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16098263#comment-16098263 ]

Vincent Maurin commented on KAFKA-5630:
---------------------------------------

[~ijuma] thank you for your feedback. Regarding the consumer, I have tested with 
version 0.10.2.1 and it does throw the error when calling "poll". In that case it 
is fair enough to skip the record with seek. But with 0.11, I don't get any 
error: a call to poll just returns the same record duplicated max.poll.records 
times. The logic to seek to the next offset is then more complicated than 
reacting to the exception; it sounds like I would have to compare the records 
returned by poll and advance my offset if they are all equal? Or am I misusing 
the client? (It is a manually assigned partition use case, without committing 
offsets to Kafka; I have tried to follow the recommendations in the 
KafkaConsumer javadoc for that.)
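For reference, a rough sketch of the seek-based workaround that works on 0.10.2.1, where poll() surfaces the exception. It assumes a single manually assigned partition; the broker address, topic name, and partition number are placeholders. Since the fetch position does not advance past the corrupt record, `position()` should still point at it, so seeking one offset forward skips it:

```java
import java.util.Collections;
import java.util.Properties;
import org.apache.kafka.clients.consumer.ConsumerRecord;
import org.apache.kafka.clients.consumer.ConsumerRecords;
import org.apache.kafka.clients.consumer.KafkaConsumer;
import org.apache.kafka.common.KafkaException;
import org.apache.kafka.common.TopicPartition;

public class SkipCorruptRecord {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put("bootstrap.servers", "localhost:9092"); // placeholder broker
        props.put("key.deserializer",
                "org.apache.kafka.common.serialization.ByteArrayDeserializer");
        props.put("value.deserializer",
                "org.apache.kafka.common.serialization.ByteArrayDeserializer");

        // Hypothetical topic/partition, assigned manually (no consumer group rebalance).
        TopicPartition tp = new TopicPartition("my-topic", 0);

        try (KafkaConsumer<byte[], byte[]> consumer = new KafkaConsumer<>(props)) {
            consumer.assign(Collections.singletonList(tp));
            while (true) {
                try {
                    // On 0.10.2.x/0.11.x poll takes a timeout in milliseconds.
                    ConsumerRecords<byte[], byte[]> records = consumer.poll(100);
                    for (ConsumerRecord<byte[], byte[]> record : records) {
                        // process record; offsets are tracked outside Kafka
                    }
                } catch (KafkaException e) {
                    // On 0.10.2.1, CorruptRecordException is thrown here.
                    // The position has not moved past the bad record,
                    // so skip exactly one offset and retry.
                    long badOffset = consumer.position(tp);
                    consumer.seek(tp, badOffset + 1);
                }
            }
        }
    }
}
```

On 0.11.0.0 this catch block is never reached, which is exactly the problem described below: the corrupt record is returned repeatedly instead of raising the exception.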

> Consumer poll loop over the same record after a CorruptRecordException
> ----------------------------------------------------------------------
>
>                 Key: KAFKA-5630
>                 URL: https://issues.apache.org/jira/browse/KAFKA-5630
>             Project: Kafka
>          Issue Type: Bug
>          Components: consumer
>    Affects Versions: 0.11.0.0
>            Reporter: Vincent Maurin
>
> Hello,
> While consuming a topic with log compaction enabled, I am getting an infinite 
> consumption loop of the same record, i.e., each call to poll returns the same 
> record 500 times (500 is my max.poll.records). I am using the Java client 
> 0.11.0.0.
> Running the code with the debugger, the initial problem comes from 
> `Fetcher.PartitionRecords.fetchRecords()`.
> Here I get a `org.apache.kafka.common.errors.CorruptRecordException: Record 
> size is less than the minimum record overhead (14)`.
> Then the boolean `hasExceptionInLastFetch` is set to true, resulting in the 
> test block in `Fetcher.PartitionRecords.nextFetchedRecord()` always returning 
> the last record.
> I guess the corruption problem is similar to 
> https://issues.apache.org/jira/browse/KAFKA-5582 but this behavior of the 
> client is probably not the expected one.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)
