Correct me if I'm wrong. If compaction is used, offset + 1 no longer
reliably indicates the next offset. In a compacted section the offsets do
not increase sequentially, so I think you need to take the next offset of
the last processed record to figure out what the next offset will be.

On Wed, 29 Jul 2015 at 06:16 Stevo Slavić <ssla...@gmail.com> wrote:

> Hello Jason,
>
> Thanks for reply!
>
> About your proposal, in the general case it might be helpful. In my case
> it will not help much - I'm allowing each ConsumerRecord or subset of
> ConsumerRecords to be processed and ACKed independently, outside of the
> HLC process/thread (so as not to block the partition), and then
> committing the largest consecutive ACKed processed offset (+1) since the
> current last committed offset, per partition.
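>
> A minimal sketch of that pattern (class and method names are illustrative
> only, not my actual code): record the delivered offsets per partition in
> order, mark them as they are ACKed, and commit one past the largest
> contiguous ACKed prefix:
>
>     import java.util.Map;
>     import java.util.concurrent.ConcurrentHashMap;
>     import java.util.concurrent.ConcurrentSkipListMap;
>     import org.apache.kafka.common.TopicPartition;
>
>     public class AckTracker {
>         // per partition: delivered offset -> ACKed yet?
>         private final Map<TopicPartition, ConcurrentSkipListMap<Long, Boolean>> pending =
>                 new ConcurrentHashMap<>();
>
>         // called from the poll loop, in delivery (offset) order
>         public void delivered(TopicPartition tp, long offset) {
>             pending.computeIfAbsent(tp, p -> new ConcurrentSkipListMap<>())
>                    .put(offset, Boolean.FALSE);
>         }
>
>         // called from whatever thread finished processing the record
>         public void acked(TopicPartition tp, long offset) {
>             pending.get(tp).replace(offset, Boolean.TRUE);
>         }
>
>         // Next offset to commit: one past the largest contiguous prefix of
>         // ACKed delivered offsets; -1 if nothing new can be committed yet.
>         public long committableOffset(TopicPartition tp) {
>             ConcurrentSkipListMap<Long, Boolean> offsets = pending.get(tp);
>             long next = -1L;
>             while (offsets != null && !offsets.isEmpty()
>                     && offsets.firstEntry().getValue()) {
>                 next = offsets.pollFirstEntry().getKey() + 1;
>             }
>             return next;
>         }
>     }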
>
> Kind regards,
> Stevo Slavic.
>
> On Mon, Jul 27, 2015 at 6:52 PM, Jason Gustafson <ja...@confluent.io>
> wrote:
>
> > Hey Stevo,
> >
> > I agree that it's a little unintuitive that what you are committing is
> > the next offset that should be read from and not the one that has
> > already been read. We're probably constrained in that we already have a
> > consumer which implements this behavior. Would it help if we added a
> > method on ConsumerRecords to get the next offset (e.g.
> > nextOffset(partition))?
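> >
> > Hypothetically (the method doesn't exist yet, and the name is just the
> > one proposed above), usage after processing a partition's records could
> > look roughly like:
> >
> >     // tp is a TopicPartition, records the ConsumerRecords from poll()
> >     long next = records.nextOffset(tp);            // proposed method
> >     consumer.commitSync(Collections.singletonMap(
> >             tp, new OffsetAndMetadata(next)));     // vs. lastRecord.offset() + 1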
> >
> > Thanks,
> > Jason
> >
> > On Fri, Jul 24, 2015 at 10:11 AM, Stevo Slavić <ssla...@gmail.com>
> > wrote:
> >
> > > Hello Apache Kafka community,
> > >
> > > Say there is only one topic with a single partition and a single
> > > message on it.
> > > The result of calling poll with the new consumer will be a
> > > ConsumerRecord for that message, and it will have an offset of 0.
> > >
> > > After processing the message, the current KafkaConsumer
> > > implementation expects one to commit not offset 0 as processed, but
> > > offset 1 - the next offset/position one would like to consume.
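> > >
> > > In code (a rough sketch; exact method signatures aside), that means
> > > something like:
> > >
> > >     ConsumerRecords<String, String> records = consumer.poll(100);
> > >     for (ConsumerRecord<String, String> record : records) {
> > >         process(record);  // application logic; this record's offset is 0
> > >         // commit the next position to read, not the offset just processed
> > >         consumer.commitSync(Collections.singletonMap(
> > >                 new TopicPartition(record.topic(), record.partition()),
> > >                 new OffsetAndMetadata(record.offset() + 1)));
> > >     }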
> > >
> > > Does this sound strange to you as well?
> > >
> > > I'm wondering whether this offset+1 handling for the next position to
> > > read couldn't be done in one place, in the KafkaConsumer
> > > implementation or the broker or wherever, instead of every user of
> > > KafkaConsumer having to do it.
> > >
> > > Kind regards,
> > > Stevo Slavic.
> > >
> >
>
