Normally, should log.retention.hours (168 hrs) be higher than
offsets.retention.minutes (20160 minutes = 336 hrs)?
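
For illustration, here is a minimal consumer sketch (broker address, group id
and topic name are assumptions, not the original setup) of how those two
settings interact on 0.11.x: roughly, committed offsets can be removed
offsets.retention.minutes after the last commit, and once they are gone the
group falls back to auto.offset.reset=earliest, i.e. it rewinds to the oldest
data still kept by log.retention.hours.

import java.util.Collections;
import java.util.Properties;
import org.apache.kafka.clients.consumer.ConsumerConfig;
import org.apache.kafka.clients.consumer.ConsumerRecord;
import org.apache.kafka.clients.consumer.ConsumerRecords;
import org.apache.kafka.clients.consumer.KafkaConsumer;

public class EarliestResetSketch {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put(ConsumerConfig.BOOTSTRAP_SERVERS_CONFIG, "broker1:9092"); // assumed address
        props.put(ConsumerConfig.GROUP_ID_CONFIG, "my-group");              // assumed group id
        props.put(ConsumerConfig.ENABLE_AUTO_COMMIT_CONFIG, "false");       // as in the scenario below
        props.put(ConsumerConfig.AUTO_OFFSET_RESET_CONFIG, "earliest");     // as in the scenario below
        props.put(ConsumerConfig.KEY_DESERIALIZER_CLASS_CONFIG,
                "org.apache.kafka.common.serialization.StringDeserializer");
        props.put(ConsumerConfig.VALUE_DESERIALIZER_CLASS_CONFIG,
                "org.apache.kafka.common.serialization.StringDeserializer");

        try (KafkaConsumer<String, String> consumer = new KafkaConsumer<>(props)) {
            consumer.subscribe(Collections.singletonList("events")); // assumed topic name
            while (true) {
                // If the group's committed offset for a partition has expired
                // (offsets.retention.minutes) or was never committed, polling
                // resumes from the earliest offset still retained by
                // log.retention.hours -- which looks like re-consuming old events.
                ConsumerRecords<String, String> records = consumer.poll(1000);
                for (ConsumerRecord<String, String> record : records) {
                    // process record ...
                }
                // With enable.auto.commit=false, offsets only survive for
                // offsets.retention.minutes after this explicit commit.
                consumer.commitSync();
            }
        }
    }
}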


On Fri, Oct 6, 2017 at 8:58 PM, Dmitriy Vsekhvalnov <dvsekhval...@gmail.com>
wrote:

> Hi Ted,
>
> Broker: v0.11.0.0
>
> Consumer:
> kafka-clients v0.11.0.0
> auto.offset.reset = earliest
>
>
>
> On Fri, Oct 6, 2017 at 6:24 PM, Ted Yu <yuzhih...@gmail.com> wrote:
>
> > What's the value for auto.offset.reset  ?
> >
> > Which release are you using ?
> >
> > Cheers
> >
> > On Fri, Oct 6, 2017 at 7:52 AM, Dmitriy Vsekhvalnov <
> > dvsekhval...@gmail.com>
> > wrote:
> >
> > > Hi all,
> > >
> > > We have several times faced a situation where the consumer group
> > > started to re-consume old events from the beginning. Here is the scenario:
> > >
> > > 1. 3-broker Kafka cluster on top of a 3-node ZooKeeper ensemble
> > > 2. RF=3 for all topics
> > > 3. log.retention.hours=168 and offsets.retention.minutes=20160
> > > 4. running a sustained load (pushing events)
> > > 5. doing disaster testing by randomly shutting down 1 of the 3 broker
> > > nodes (then provisioning a new broker back)
> > >
> > > Several times after bouncing a broker we faced a situation where the
> > > consumer group started to re-consume old events.
> > >
> > > consumer group:
> > >
> > > 1. enable.auto.commit = false
> > > 2. tried graceful group shutdown, kill -9, and terminating AWS nodes
> > > 3. never experienced re-consumption in those cases.
> > >
> > > What can cause this re-consumption of old events? Is it related to
> > > bouncing one of the brokers? What should we search for in the logs? Any
> > > broker settings to try?
> > >
> > > Thanks in advance.
> > >
> >
>
