[ 
https://issues.apache.org/jira/browse/KAFKA-5630?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16100698#comment-16100698
 ] 

ASF GitHub Bot commented on KAFKA-5630:
---------------------------------------

GitHub user becketqin opened a pull request:

    https://github.com/apache/kafka/pull/3573

    KAFKA-5630; The consumer should block on corrupt records and keep throwing 
the exception

    This patch handles the case where a CorruptRecordException is thrown 
directly from the iterator.

You can merge this pull request into a Git repository by running:

    $ git pull https://github.com/becketqin/kafka KAFKA-5630

Alternatively you can review and apply these changes as the patch at:

    https://github.com/apache/kafka/pull/3573.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

    This closes #3573
    
----
commit e51ddce3ddacefee7101b95021a3af5da32bcff4
Author: Jiangjie Qin <becket....@gmail.com>
Date:   2017-07-25T18:20:09Z

    KAFKA-5630; The consumer should block on corrupt records and keep throwing 
the exception.

----


> Consumer poll loop over the same record after a CorruptRecordException
> ----------------------------------------------------------------------
>
>                 Key: KAFKA-5630
>                 URL: https://issues.apache.org/jira/browse/KAFKA-5630
>             Project: Kafka
>          Issue Type: Bug
>          Components: consumer
>    Affects Versions: 0.11.0.0
>            Reporter: Vincent Maurin
>            Assignee: Jiangjie Qin
>            Priority: Critical
>              Labels: regression, reliability
>             Fix For: 0.11.0.1
>
>
> Hello,
> While consuming a topic with log compaction enabled, I am getting an infinite 
> consumption loop over the same record: each call to poll returns the same 
> record to me 500 times (500 is my max.poll.records). I am using the Java 
> client 0.11.0.0.
> Running the code with the debugger, the initial problem comes from 
> `Fetcher.PartitionRecords.fetchRecords()`.
> Here I get an `org.apache.kafka.common.errors.CorruptRecordException: Record 
> size is less than the minimum record overhead (14)`.
> The boolean `hasExceptionInLastFetch` is then set to true, causing the test 
> block in `Fetcher.PartitionRecords.nextFetchedRecord()` to always return the 
> last record.
> I guess the corruption problem is similar to 
> https://issues.apache.org/jira/browse/KAFKA-5582, but this behavior of the 
> client is probably not the expected one.
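
The loop described above can be illustrated with a minimal, self-contained simulation (class and method names here are hypothetical, not from the Kafka codebase): because the consumer position is never advanced past the record that raised the exception, every subsequent fetch starts at the same offset. The usual application-side workaround mirrors the catch branch below, i.e. seeking one past the failing offset (`consumer.seek(partition, position + 1)` in the real client).

```java
import java.util.ArrayList;
import java.util.List;

public class CorruptRecordLoopDemo {
    static final long CORRUPT_OFFSET = 3;

    // Simulated fetch: returns the record at `position`, or throws for the
    // corrupt one, standing in for CorruptRecordException from the iterator.
    static long fetchOne(long position) {
        if (position == CORRUPT_OFFSET) {
            throw new RuntimeException(
                "Record size is less than the minimum record overhead (14)");
        }
        return position;
    }

    // Runs ten simulated polls and returns the offsets actually consumed.
    static List<Long> run() {
        long position = 0;
        List<Long> consumed = new ArrayList<>();
        for (int poll = 0; poll < 10; poll++) {
            try {
                consumed.add(fetchOne(position));
                position++; // advance only on success
            } catch (RuntimeException e) {
                // Without this line, position stays put and every poll
                // re-hits the corrupt offset forever (the reported bug).
                // In the real client: consumer.seek(tp, position + 1).
                position++;
            }
        }
        return consumed;
    }

    public static void main(String[] args) {
        System.out.println(run()); // offset 3 is skipped, progress continues
    }
}
```

Commenting out the `position++` in the catch branch reproduces the reported behavior: the loop never makes progress past offset 3.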



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)
