Stas:
bq. using anything but none is not really an option

If you have time, can you explain a bit more?

Thanks

On Fri, Oct 6, 2017 at 8:55 AM, Stas Chizhov <schiz...@gmail.com> wrote:

> If you set auto.offset.reset to none, next time it happens you will be in
> a much better position to find out what happened. Also, in general, with the
> current semantics of the offset reset policy, IMO using anything but none is
> not really an option, unless it is OK for the consumer to lose some data
> (latest) or reprocess it a second time (earliest).
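
For illustration, a minimal Java sketch (assuming the kafka-clients 0.11 API;
broker address, group id and topic name are made up, not from this thread) of
running with auto.offset.reset=none: poll() throws instead of silently
rewinding to earliest or jumping to latest, so the application decides how to
recover.

import java.util.Collections;
import java.util.Properties;

import org.apache.kafka.clients.consumer.ConsumerConfig;
import org.apache.kafka.clients.consumer.ConsumerRecords;
import org.apache.kafka.clients.consumer.InvalidOffsetException;
import org.apache.kafka.clients.consumer.KafkaConsumer;
import org.apache.kafka.common.serialization.StringDeserializer;

public class NoResetConsumer {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put(ConsumerConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092");
        props.put(ConsumerConfig.GROUP_ID_CONFIG, "example-group");
        props.put(ConsumerConfig.KEY_DESERIALIZER_CLASS_CONFIG, StringDeserializer.class.getName());
        props.put(ConsumerConfig.VALUE_DESERIALIZER_CLASS_CONFIG, StringDeserializer.class.getName());
        // Fail fast when there is no valid committed offset, instead of
        // resetting to earliest (reprocessing) or latest (data loss).
        props.put(ConsumerConfig.AUTO_OFFSET_RESET_CONFIG, "none");
        props.put(ConsumerConfig.ENABLE_AUTO_COMMIT_CONFIG, "false");

        try (KafkaConsumer<String, String> consumer = new KafkaConsumer<>(props)) {
            consumer.subscribe(Collections.singletonList("events"));
            while (true) {
                try {
                    ConsumerRecords<String, String> records = consumer.poll(1000);
                    records.forEach(r ->
                        System.out.printf("%s-%d@%d%n", r.topic(), r.partition(), r.offset()));
                    consumer.commitSync();
                } catch (InvalidOffsetException e) {
                    // Covers NoOffsetForPartitionException and OffsetOutOfRangeException:
                    // the committed offset is missing or no longer valid. Log, alert and
                    // decide explicitly (e.g. seek to a known-good position) here.
                    System.err.println("No valid committed offset for: " + e.partitions());
                    break;
                }
            }
        }
    }
}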
>
> On Fri, Oct 6, 2017 at 5:44 PM, Ted Yu <yuzhih...@gmail.com> wrote:
>
> > Should Kafka log a warning if log.retention.hours is lower than the number
> > of hours specified by offsets.retention.minutes?
> >
> > On Fri, Oct 6, 2017 at 8:35 AM, Manikumar <manikumar.re...@gmail.com>
> > wrote:
> >
> > > Normally, shouldn't log.retention.hours (168 hrs) be higher than
> > > offsets.retention.minutes (20160 min = 336 hrs)?
> > >
> > >
> > > On Fri, Oct 6, 2017 at 8:58 PM, Dmitriy Vsekhvalnov <
> > > dvsekhval...@gmail.com>
> > > wrote:
> > >
> > > > Hi Ted,
> > > >
> > > > Broker: v0.11.0.0
> > > >
> > > > Consumer:
> > > > kafka-clients v0.11.0.0
> > > > auto.offset.reset = earliest
> > > >
> > > >
> > > >
> > > > On Fri, Oct 6, 2017 at 6:24 PM, Ted Yu <yuzhih...@gmail.com> wrote:
> > > >
> > > > > What's the value for auto.offset.reset  ?
> > > > >
> > > > > Which release are you using ?
> > > > >
> > > > > Cheers
> > > > >
> > > > > On Fri, Oct 6, 2017 at 7:52 AM, Dmitriy Vsekhvalnov <
> > > > > dvsekhval...@gmail.com>
> > > > > wrote:
> > > > >
> > > > > > Hi all,
> > > > > >
> > > > > > we have several times faced a situation where a consumer group
> > > > > > started to re-consume old events from the beginning. Here is the
> > > > > > scenario:
> > > > > >
> > > > > > 1. a 3-broker Kafka cluster on top of a 3-node ZooKeeper ensemble
> > > > > > 2. RF=3 for all topics
> > > > > > 3. log.retention.hours=168 and offsets.retention.minutes=20160
> > > > > > 4. running a sustained load (continuously pushing events)
> > > > > > 5. doing disaster testing by randomly shutting down 1 of the 3
> > > > > > broker nodes (then provisioning a new broker to replace it)
> > > > > >
> > > > > > Several times after bouncing a broker we faced a situation where
> > > > > > the consumer group started to re-consume old events.
> > > > > >
> > > > > > consumer group:
> > > > > >
> > > > > > 1. enable.auto.commit = false (see the sketch after this list)
> > > > > > 2. tried graceful group shutdown, kill -9, and terminating AWS
> > > > > > nodes
> > > > > > 3. never experienced re-consumption in any of these cases.
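
As a sketch only (not from the thread; the helper name is made up): with
enable.auto.commit=false the group only advances when the application commits,
so one common pattern is to commit exactly the offsets that were processed.

import java.util.HashMap;
import java.util.List;
import java.util.Map;

import org.apache.kafka.clients.consumer.ConsumerRecord;
import org.apache.kafka.clients.consumer.ConsumerRecords;
import org.apache.kafka.clients.consumer.KafkaConsumer;
import org.apache.kafka.clients.consumer.OffsetAndMetadata;
import org.apache.kafka.common.TopicPartition;

final class ManualCommit {
    // Poll once and commit exactly the offsets that were processed,
    // assuming the consumer was created with enable.auto.commit=false.
    static void pollAndCommit(KafkaConsumer<String, String> consumer) {
        ConsumerRecords<String, String> records = consumer.poll(1000);
        Map<TopicPartition, OffsetAndMetadata> toCommit = new HashMap<>();
        for (TopicPartition tp : records.partitions()) {
            List<ConsumerRecord<String, String>> batch = records.records(tp);
            // ... process the batch here ...
            long lastProcessed = batch.get(batch.size() - 1).offset();
            // Commit the offset of the *next* record to read
            // (last processed + 1), per the KafkaConsumer javadoc convention.
            toCommit.put(tp, new OffsetAndMetadata(lastProcessed + 1));
        }
        if (!toCommit.isEmpty()) {
            consumer.commitSync(toCommit);
        }
    }
}

Those committed offsets live in __consumer_offsets and are what
offsets.retention.minutes applies to; if they are lost or expire,
auto.offset.reset determines where the group restarts.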
> > > > > >
> > > > > > What can cause this re-consumption of old events? Is it related to
> > > > > > bouncing one of the brokers? What should we search for in the
> > > > > > logs? Any broker settings to try?
> > > > > >
> > > > > > Thanks in advance.
> > > > > >
> > > > >
> > > >
> > >
> >
>
