[ https://issues.apache.org/jira/browse/KAFKA-2758?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17013271#comment-17013271 ]

Guozhang Wang commented on KAFKA-2758:
--------------------------------------

We originally put this on hold, especially for 1), because KIP-211 had not
been merged yet. Even now that KIP-211 is merged we should still be careful,
since a newer-versioned client may talk to an older-versioned broker
(pre-2.0) that does not have KIP-211. We have some plans for automatically
detecting broker versions, so I'd suggest we do not pick up this ticket
until that is in place.

> Improve Offset Commit Behavior
> ------------------------------
>
>                 Key: KAFKA-2758
>                 URL: https://issues.apache.org/jira/browse/KAFKA-2758
>             Project: Kafka
>          Issue Type: Improvement
>          Components: consumer
>            Reporter: Guozhang Wang
>            Priority: Major
>              Labels: newbie, reliability
>
> There are two scenarios of offset committing that we can improve (a rough
> sketch of both follows after this description):
> 1) we can filter out the partitions whose committed offset is equal to the
> consumed offset, meaning there are no newly consumed messages from that
> partition and hence we do not need to include it in the commit request.
> 2) we can send a commit request right after resetting to a fetch / consume
> position, whether the reset happens according to the reset policy (e.g. on
> consumer startup, or when handling an out-of-range offset) or through
> {code}seek(){code}, so that if the consumer fails right after these events,
> upon recovery it restarts from the reset position instead of resetting
> again: resetting again can lead to, for example, data loss if we use
> "largest" as the reset policy while new messages keep arriving on the
> fetched partitions.
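
A rough sketch of what 1) and 2) could look like against the public
KafkaConsumer API (the bootstrap server, group id, topic, starting offset
and the {code}lastCommitted{code} bookkeeping map below are illustrative
assumptions, not an agreed design):

{code:java}
import java.time.Duration;
import java.util.HashMap;
import java.util.Map;
import java.util.Properties;
import java.util.Set;

import org.apache.kafka.clients.consumer.KafkaConsumer;
import org.apache.kafka.clients.consumer.OffsetAndMetadata;
import org.apache.kafka.common.TopicPartition;

public class OffsetCommitSketch {

    public static void main(String[] args) {
        Properties props = new Properties();
        props.put("bootstrap.servers", "localhost:9092");   // illustrative
        props.put("group.id", "offset-commit-sketch");      // illustrative
        props.put("enable.auto.commit", "false");
        props.put("auto.offset.reset", "latest");           // "latest" is the newer name for "largest"
        props.put("key.deserializer",
                "org.apache.kafka.common.serialization.StringDeserializer");
        props.put("value.deserializer",
                "org.apache.kafka.common.serialization.StringDeserializer");

        try (KafkaConsumer<String, String> consumer = new KafkaConsumer<>(props)) {
            TopicPartition tp = new TopicPartition("sketch-topic", 0); // illustrative
            consumer.assign(Set.of(tp));

            // 2) Commit right after a seek / reset, so that a crash before the
            //    next poll-and-commit cycle restarts from this position instead
            //    of triggering another reset (and possible data loss with "latest").
            consumer.seek(tp, 42L);
            consumer.commitSync(Map.of(tp, new OffsetAndMetadata(42L)));

            Map<TopicPartition, OffsetAndMetadata> lastCommitted = new HashMap<>();
            lastCommitted.put(tp, new OffsetAndMetadata(42L));

            while (true) {
                consumer.poll(Duration.ofMillis(500)).forEach(record -> { /* process */ });

                // 1) Only include partitions whose consumed position moved past the
                //    last committed offset; unchanged partitions are left out of the
                //    commit request entirely.
                Map<TopicPartition, OffsetAndMetadata> toCommit = new HashMap<>();
                for (TopicPartition p : consumer.assignment()) {
                    long position = consumer.position(p);
                    OffsetAndMetadata committed = lastCommitted.get(p);
                    if (committed == null || position > committed.offset()) {
                        toCommit.put(p, new OffsetAndMetadata(position));
                    }
                }
                if (!toCommit.isEmpty()) {
                    consumer.commitSync(toCommit);
                    lastCommitted.putAll(toCommit);
                }
            }
        }
    }
}
{code}

The explicit {code}lastCommitted{code} map is only there to make the
filtering visible; an in-client implementation would presumably reuse the
consumer's own bookkeeping instead.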



--
This message was sent by Atlassian Jira
(v8.3.4#803005)
