The committed offset is actually the offset of the *next* message to consume,
not the last message consumed. So what you're seeing sounds like expected
behavior to me. The consumer handles this internally when it commits for you,
but if you write code to commit offsets manually, it can be a gotcha: you
should commit the last consumed offset plus one, not the offset itself.
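To illustrate the off-by-one, here is a self-contained sketch in plain Java
(no Kafka dependency; the in-memory `log` list and `consumeAll` method are
hypothetical stand-ins for a partition and a consumer poll loop):

```java
import java.util.List;

public class OffsetDemo {
    // Simulated partition: three messages at offsets 0, 1, 2.
    static final List<String> log = List.of("m0", "m1", "m2");

    // Consume from `committed` to the end of the log, appending each
    // message to `seen`. Kafka semantics: the committed offset is the
    // NEXT offset to read, so we return last-consumed-offset + 1.
    static long consumeAll(long committed, StringBuilder seen) {
        long next = committed;
        while (next < log.size()) {
            seen.append(log.get((int) next)).append(' ');
            next++;
        }
        return next;
    }

    public static void main(String[] args) {
        // Correct: commit 3 (= last offset 2, plus one). A restart
        // from the committed position re-reads nothing.
        StringBuilder ok = new StringBuilder();
        long committed = consumeAll(0, ok);
        consumeAll(committed, ok);
        System.out.println(ok.toString().trim());  // m0 m1 m2

        // Buggy: commit the last consumed offset (2) instead of 3.
        // The restart re-reads m2, producing one duplicate.
        StringBuilder dup = new StringBuilder();
        consumeAll(0, dup);
        consumeAll(2, dup);
        System.out.println(dup.toString().trim()); // m0 m1 m2 m2
    }
}
```

With the real 0.9 consumer, the manual-commit equivalent is to commit
`record.offset() + 1`, e.g.
`consumer.commitSync(Collections.singletonMap(partition, new OffsetAndMetadata(record.offset() + 1)))`.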

-Dana

On Mon, Feb 1, 2016 at 1:35 PM, Adam Kunicki <a...@streamsets.com> wrote:

> Hi,
>
> I've been noticing that a restarted consumer in 0.9 will start consuming
> from the last committed offset (inclusive). This means that any restarted
> consumer will re-read the last consumed (and committed) message, causing a
> duplicate each time the consumer restarts from the same position if there
> have been no new messages.
>
> Per:
>
> http://www.confluent.io/blog/tutorial-getting-started-with-the-new-apache-kafka-0.9-consumer-client
> this seems to be the intended behavior.
>
> Can anyone confirm this? If this is the case how are we expected to handle
> these duplicated messages?
>
> -Adam
>
