[
https://issues.apache.org/jira/browse/KAFKA-9909?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17093757#comment-17093757
]
Guozhang Wang commented on KAFKA-9909:
--------------------------------------
Not sure I can follow here: if you set the config to Long.MAX_VALUE and did not
commit manually, then Streams should NOT commit anything until it is closed
or a rebalance is triggered.
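For reference, a sketch of the configuration being discussed (the value is simply Long.MAX_VALUE written out; this is illustrative, not a recommendation):

```properties
# Push the commit interval out to Long.MAX_VALUE so Streams effectively
# never commits on a timer -- only on close or rebalance.
commit.interval.ms=9223372036854775807
```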
What did you mean by "skip specific offsets intentionally"? If you could not
process certain messages because they are, e.g., ill-formatted, or are simply
poison pills, the general solution here is to send them to some poison-pill
queue for book-keeping.
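A minimal, dependency-free sketch of that poison-pill pattern: records that fail processing are routed to a dead-letter destination instead of blocking the partition. The class and names here are hypothetical; in a real Kafka Streams topology the two destinations would be child sink nodes reached via context().forward(), but plain lists keep the routing logic runnable on its own.

```java
import java.util.ArrayList;
import java.util.List;

public class PoisonPillRouter {
    final List<String> output = new ArrayList<>();      // stand-in for the output topic
    final List<String> deadLetters = new ArrayList<>(); // stand-in for the poison-pill queue

    // Process one record; never throws, so the consumer keeps advancing offsets.
    void process(String value) {
        try {
            output.add(transform(value));
        } catch (RuntimeException poisonPill) {
            // Book-keep the bad record instead of retrying it forever.
            deadLetters.add(value);
        }
    }

    // Illustrative application logic; anything non-numeric is a "poison pill".
    private String transform(String value) {
        return "parsed:" + Integer.parseInt(value);
    }

    public static void main(String[] args) {
        PoisonPillRouter r = new PoisonPillRouter();
        r.process("42");
        r.process("not-a-number");
        System.out.println(r.output);      // [parsed:42]
        System.out.println(r.deadLetters); // [not-a-number]
    }
}
```

Because process() swallows the failure after recording it, the offset of a bad record can safely be committed and the stream keeps moving.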
> Kafka Streams : offset control to Streams API
> ---------------------------------------------
>
> Key: KAFKA-9909
> URL: https://issues.apache.org/jira/browse/KAFKA-9909
> Project: Kafka
> Issue Type: Improvement
> Components: streams
> Affects Versions: 2.5.0
> Environment: All
> Reporter: Gopikrishna
> Priority: Minor
> Labels: Offset, commit
>
> Hello team, I am really inspired by the way the Streams API runs today. I
> would like a feature that gives more flexibility over offsets. When we write
> a Processor API processor, the ProcessorContext object can be used to commit
> the offset, but this is not effective: Streams controls the offset itself.
> The moment the process method executes or a scheduled window completes, the
> offset is committed automatically by Streams internally.
> Like the traditional Kafka consumer, the context object should have complete
> control over whether to commit the offset or not. This would give the API
> more control over failovers; in particular, when a message cannot be
> processed, the context should not commit its offset. I would appreciate it
> if this could be implemented.
>
> h4. enable.auto.commit is false by default, but Streams still commits
> offsets automatically.
--
This message was sent by Atlassian Jira
(v8.3.4#803005)