[ https://issues.apache.org/jira/browse/KAFKA-5630?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16098154#comment-16098154 ]
Vincent Maurin commented on KAFKA-5630:
---------------------------------------

It is:
```
offset: 210648 position: 172156054 CreateTime: 1499416798791 isvalid: true size: 610 magic: 1 compresscodec: NONE crc: 1846714374
offset: 210649 position: 172156664 CreateTime: 1499416798796 isvalid: true size: 586 magic: 1 compresscodec: NONE crc: 3995473502
offset: 210650 position: 172157250 CreateTime: 1499416798798 isvalid: true size: 641 magic: 1 compresscodec: NONE crc: 2352501239
Exception in thread "main" org.apache.kafka.common.errors.CorruptRecordException: Record size is smaller than minimum record overhead (14).
```

> Consumer poll loop over the same record after a CorruptRecordException
> ----------------------------------------------------------------------
>
>                 Key: KAFKA-5630
>                 URL: https://issues.apache.org/jira/browse/KAFKA-5630
>             Project: Kafka
>          Issue Type: Bug
>          Components: consumer
>    Affects Versions: 0.11.0.0
>            Reporter: Vincent Maurin
>
> Hello
> While consuming a topic with log compaction enabled, I am getting an infinite consumption loop of the same record, i.e. each call to poll returns the same record 500 times (500 is my max.poll.records). I am using the Java client 0.11.0.0.
> Running the code with the debugger, the initial problem comes from `Fetcher.PartitionRecords.fetchRecords()`. Here I get an `org.apache.kafka.common.errors.CorruptRecordException: Record size is less than the minimum record overhead (14)`.
> Then the boolean `hasExceptionInLastFetch` is set to true, causing the test block in `Fetcher.PartitionRecords.nextFetchedRecord()` to always return the last record.
> I guess the corruption problem is similar to https://issues.apache.org/jira/browse/KAFKA-5582, but this behavior of the client is probably not the expected one.
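Until the client is fixed, a consumer-side workaround might look like the sketch below: track the last offset delivered per partition and, when poll keeps handing back the same offset, seek one position past it to break the loop. This is a minimal sketch against the 0.11 Java client; the class name, topic name, and consumer properties are illustrative, and note that the seek silently skips the unreadable record.
```
import java.util.Collections;
import java.util.HashMap;
import java.util.Map;
import java.util.Properties;
import org.apache.kafka.clients.consumer.ConsumerRecord;
import org.apache.kafka.clients.consumer.ConsumerRecords;
import org.apache.kafka.clients.consumer.KafkaConsumer;
import org.apache.kafka.common.TopicPartition;

public class StuckOffsetWorkaround {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put("bootstrap.servers", "localhost:9092");   // illustrative
        props.put("group.id", "kafka-5630-workaround");     // illustrative
        props.put("key.deserializer",
                "org.apache.kafka.common.serialization.StringDeserializer");
        props.put("value.deserializer",
                "org.apache.kafka.common.serialization.StringDeserializer");

        // Last offset delivered per partition, so a non-advancing offset
        // (the looping behavior described above) can be detected.
        Map<TopicPartition, Long> lastSeen = new HashMap<>();

        try (KafkaConsumer<String, String> consumer = new KafkaConsumer<>(props)) {
            consumer.subscribe(Collections.singletonList("my-compacted-topic")); // illustrative

            while (true) {
                // 0.11 API: poll takes a timeout in milliseconds.
                ConsumerRecords<String, String> records = consumer.poll(500);
                for (ConsumerRecord<String, String> rec : records) {
                    TopicPartition tp = new TopicPartition(rec.topic(), rec.partition());
                    Long last = lastSeen.get(tp);
                    if (last != null && last == rec.offset()) {
                        // Stuck on the same record: seek one past it and refetch.
                        // Caution: this drops the record we could not read.
                        consumer.seek(tp, rec.offset() + 1);
                        break;
                    }
                    lastSeen.put(tp, rec.offset());
                    System.out.printf("offset=%d value=%s%n", rec.offset(), rec.value());
                }
            }
        }
    }
}
```
Since offsets within a partition normally only move forward, seeing the same offset twice is a reliable signal that the fetcher is stuck, whether the duplicates arrive within one poll or across consecutive polls.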