A brief search brought me to a related discussion on this JIRA:

https://issues.apache.org/jira/browse/KAFKA-3806?focusedCommentId=15906349&page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#comment-15906349

FYI

On Fri, Oct 6, 2017 at 10:37 AM, Manikumar <manikumar.re...@gmail.com> wrote:

> @Ted Yes, I think we should add a log warning message.
>
> On Fri, Oct 6, 2017 at 9:50 PM, Vincent Dautremont <vincent.dautrem...@olamobile.com.invalid> wrote:
>
> > Is there a way to read messages on a topic partition from a specific node
> > that we choose (and not from the topic partition leader)?
> > I would like to verify myself that each of the __consumer_offsets partition
> > replicas has the same consumer group offsets written in it.
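> >
> > (I understand the consumer normally fetches only from the partition leader,
> > so the closest workaround I can think of is to dump a replica's segment
> > files directly on each broker and compare them, e.g. something like the
> > following, with an illustrative log path:
> >
> >   bin/kafka-run-class.sh kafka.tools.DumpLogSegments \
> >     --files /var/kafka-logs/__consumer_offsets-0/00000000000000000000.log \
> >     --offsets-decoder
> >
> > but I'd be happy to hear of a client-side way.)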
> >
> > On Fri, Oct 6, 2017 at 6:08 PM, Dmitriy Vsekhvalnov <dvsekhval...@gmail.com> wrote:
> >
> > > Stas:
> > >
> > > We rely on spring-kafka; it commits offsets "manually" for us after the
> > > event handler has completed. So it's kind of automatic as long as there is
> > > a constant stream of events (no idle time, which is true for us), though
> > > it's not what the pure kafka-client calls "automatic" (flushing commits at
> > > fixed intervals).
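> > >
> > > Roughly this container setup, as a sketch (spring-kafka 1.x API; the
> > > consumerFactory is assumed to be defined elsewhere):
> > >
> > >   import org.springframework.kafka.config.ConcurrentKafkaListenerContainerFactory;
> > >   import org.springframework.kafka.listener.AbstractMessageListenerContainer.AckMode;
> > >
> > >   // with enable.auto.commit=false, the listener container itself commits
> > >   // the offset after each listener invocation returns
> > >   ConcurrentKafkaListenerContainerFactory<String, String> factory =
> > >           new ConcurrentKafkaListenerContainerFactory<>();
> > >   factory.setConsumerFactory(consumerFactory); // assumed defined elsewhere
> > >   factory.getContainerProperties().setAckMode(AckMode.RECORD);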
> > >
> > > On Fri, Oct 6, 2017 at 7:04 PM, Stas Chizhov <schiz...@gmail.com> wrote:
> > >
> > > > You don't have autocommit enabled, which means you commit offsets
> > > > yourself, correct? If you store them per partition somewhere and fail to
> > > > clean that up upon rebalance, then the next time the consumer gets this
> > > > partition assigned it can commit an old, stale offset. Can this be the
> > > > case?
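> > > >
> > > > If so, a minimal sketch of cleaning that state up via a rebalance
> > > > listener (pendingOffsets is a hypothetical per-partition store; consumer
> > > > and topics are assumed to exist):
> > > >
> > > >   import java.util.Collection;
> > > >   import java.util.Map;
> > > >   import java.util.concurrent.ConcurrentHashMap;
> > > >   import org.apache.kafka.clients.consumer.ConsumerRebalanceListener;
> > > >   import org.apache.kafka.common.TopicPartition;
> > > >
> > > >   Map<TopicPartition, Long> pendingOffsets = new ConcurrentHashMap<>();
> > > >
> > > >   consumer.subscribe(topics, new ConsumerRebalanceListener() {
> > > >       @Override
> > > >       public void onPartitionsRevoked(Collection<TopicPartition> revoked) {
> > > >           // drop state for partitions we no longer own, so a stale
> > > >           // offset can't be committed after reassignment
> > > >           revoked.forEach(pendingOffsets::remove);
> > > >       }
> > > >       @Override
> > > >       public void onPartitionsAssigned(Collection<TopicPartition> assigned) {
> > > >           // nothing to clean up on assignment in this sketch
> > > >       }
> > > >   });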
> > > >
> > > >
> > > > On Fri, 6 Oct 2017 at 17:59, Dmitriy Vsekhvalnov <dvsekhval...@gmail.com> wrote:
> > > >
> > > > > Reprocessing the same events again is fine for us (idempotent), while
> > > > > losing data is more critical.
> > > > >
> > > > > What are the reasons for such behaviour? Consumers are never idle and
> > > > > are always committing, so is something probably wrong with the broker
> > > > > setup then?
> > > > >
> > > > > On Fri, Oct 6, 2017 at 6:58 PM, Ted Yu <yuzhih...@gmail.com> wrote:
> > > > >
> > > > > > Stas:
> > > > > > bq. using anything but none is not really an option
> > > > > >
> > > > > > If you have time, can you explain a bit more?
> > > > > >
> > > > > > Thanks
> > > > > >
> > > > > > On Fri, Oct 6, 2017 at 8:55 AM, Stas Chizhov <schiz...@gmail.com> wrote:
> > > > > >
> > > > > > > If you set auto.offset.reset to none, the next time it happens you
> > > > > > > will be in a much better position to find out what happened. Also,
> > > > > > > in general, given the current semantics of the offset reset policy,
> > > > > > > IMO using anything but none is not really an option unless it is OK
> > > > > > > for the consumer to lose some data (latest) or reprocess it a
> > > > > > > second time (earliest).
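> > > > > > >
> > > > > > > A minimal sketch of what "none" looks like in practice (props,
> > > > > > > consumer, process, and log are assumed to exist; the exception
> > > > > > > class is from kafka-clients):
> > > > > > >
> > > > > > >   import org.apache.kafka.clients.consumer.ConsumerRecords;
> > > > > > >   import org.apache.kafka.clients.consumer.NoOffsetForPartitionException;
> > > > > > >
> > > > > > >   props.put("auto.offset.reset", "none");
> > > > > > >   // ...
> > > > > > >   try {
> > > > > > >       ConsumerRecords<String, String> records = consumer.poll(1000);
> > > > > > >       process(records);
> > > > > > >   } catch (NoOffsetForPartitionException e) {
> > > > > > >       // the committed offset is missing or expired: fail loudly and
> > > > > > >       // decide explicitly (seekToBeginning/seekToEnd) instead of
> > > > > > >       // silently losing or replaying data
> > > > > > >       log.error("no committed offset and reset policy is none", e);
> > > > > > >       throw e;
> > > > > > >   }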
> > > > > > >
> > > > > > > On Fri, 6 Oct 2017 at 17:44, Ted Yu <yuzhih...@gmail.com> wrote:
> > > > > > >
> > > > > > > > Should Kafka log a warning if log.retention.hours is lower than
> > > > > > > > the number of hours specified by offsets.retention.minutes?
> > > > > > > >
> > > > > > > > On Fri, Oct 6, 2017 at 8:35 AM, Manikumar <manikumar.re...@gmail.com> wrote:
> > > > > > > >
> > > > > > > > > Normally, log.retention.hours (168 hrs) should be higher than
> > > > > > > > > offsets.retention.minutes (336 hrs)?
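> > > > > > > > >
> > > > > > > > > For reference, converting the units in the settings quoted below:
> > > > > > > > >
> > > > > > > > >   # 168 h = 10080 min (7 days)
> > > > > > > > >   log.retention.hours=168
> > > > > > > > >   # 20160 min = 336 h (14 days)
> > > > > > > > >   offsets.retention.minutes=20160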
> > > > > > > > >
> > > > > > > > >
> > > > > > > > > On Fri, Oct 6, 2017 at 8:58 PM, Dmitriy Vsekhvalnov <dvsekhval...@gmail.com> wrote:
> > > > > > > > >
> > > > > > > > > > Hi Ted,
> > > > > > > > > >
> > > > > > > > > > Broker: v0.11.0.0
> > > > > > > > > >
> > > > > > > > > > Consumer:
> > > > > > > > > > kafka-clients v0.11.0.0
> > > > > > > > > > auto.offset.reset = earliest
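> > > > > > > > > >
> > > > > > > > > > i.e. roughly this client setup, as a sketch (bootstrap servers
> > > > > > > > > > and group id here are illustrative; auto-commit is off, as
> > > > > > > > > > mentioned below):
> > > > > > > > > >
> > > > > > > > > >   import java.util.Properties;
> > > > > > > > > >   import org.apache.kafka.clients.consumer.KafkaConsumer;
> > > > > > > > > >
> > > > > > > > > >   Properties props = new Properties();
> > > > > > > > > >   props.put("bootstrap.servers", "broker1:9092");
> > > > > > > > > >   props.put("group.id", "our-group");
> > > > > > > > > >   props.put("key.deserializer",
> > > > > > > > > >       "org.apache.kafka.common.serialization.StringDeserializer");
> > > > > > > > > >   props.put("value.deserializer",
> > > > > > > > > >       "org.apache.kafka.common.serialization.StringDeserializer");
> > > > > > > > > >   props.put("enable.auto.commit", "false");
> > > > > > > > > >   props.put("auto.offset.reset", "earliest");
> > > > > > > > > >   KafkaConsumer<String, String> consumer = new KafkaConsumer<>(props);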
> > > > > > > > > >
> > > > > > > > > >
> > > > > > > > > >
> > > > > > > > > > On Fri, Oct 6, 2017 at 6:24 PM, Ted Yu <yuzhih...@gmail.com> wrote:
> > > > > > > > > >
> > > > > > > > > > > What's the value of auto.offset.reset?
> > > > > > > > > > >
> > > > > > > > > > > Which release are you using?
> > > > > > > > > > >
> > > > > > > > > > > Cheers
> > > > > > > > > > >
> > > > > > > > > > > On Fri, Oct 6, 2017 at 7:52 AM, Dmitriy Vsekhvalnov <dvsekhval...@gmail.com> wrote:
> > > > > > > > > > >
> > > > > > > > > > > > Hi all,
> > > > > > > > > > > >
> > > > > > > > > > > > we have several times faced a situation where a consumer
> > > > > > > > > > > > group started to re-consume old events from the beginning.
> > > > > > > > > > > > Here is the scenario:
> > > > > > > > > > > >
> > > > > > > > > > > > 1. x3 broker kafka cluster on top of x3 node zookeeper
> > > > > > > > > > > > 2. RF=3 for all topics
> > > > > > > > > > > > 3. log.retention.hours=168 and offsets.retention.minutes=20160
> > > > > > > > > > > > 4. running sustainable load (pushing events)
> > > > > > > > > > > > 5. doing disaster testing by randomly shutting down 1 of 3
> > > > > > > > > > > > broker nodes (then provisioning a new broker back)
> > > > > > > > > > > >
> > > > > > > > > > > > Several times after bouncing a broker we faced a situation
> > > > > > > > > > > > where the consumer group started to re-consume old events.
> > > > > > > > > > > >
> > > > > > > > > > > > consumer group:
> > > > > > > > > > > >
> > > > > > > > > > > > 1. enable.auto.commit = false
> > > > > > > > > > > > 2. tried graceful group shutdown, kill -9, and terminating
> > > > > > > > > > > > AWS nodes
> > > > > > > > > > > > 3. never experienced re-consumption in any of those cases.
> > > > > > > > > > > >
> > > > > > > > > > > > What can cause this re-consumption of old events? Is it
> > > > > > > > > > > > related to bouncing one of the brokers? What should we
> > > > > > > > > > > > search for in the logs? Are there any broker settings to
> > > > > > > > > > > > try?
> > > > > > > > > > > >
> > > > > > > > > > > > Thanks in advance.
> > > > > > > > > > > >
> > > > > > > > > > >
> > > > > > > > > >
> > > > > > > > >
> > > > > > > >
> > > > > > >
> > > > > >
> > > > >
> > > >
> > >
> >
>
