On Fri, Oct 6, 2017 at 7:52 AM, Dmitriy Vsekhvalnov <dvsekhval...@gmail.com>
wrote:

> Hi all,
>
> We have several times faced a situation where a consumer group started to
> re-consume old events from the beginning. Here is the scenario:
>
> 1. a 3-broker Kafka cluster on top of a 3-node ZooKeeper ensemble
> 2. RF=3 for all topics
> 3. log.retention.hours=168 and offsets.retention.minutes=20160 (see the
> sketch after this list)
> 4. running a sustained load (pushing events)
> 5. doing disaster testing by randomly shutting down 1 of the 3 broker nodes
> (then provisioning a new broker back)
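>
> As a sketch, the corresponding server.properties entries would look like
> this (assuming RF=3 is applied through the broker-level default rather
> than per topic):
>
>     # default replication factor for automatically created topics
>     default.replication.factor=3
>     # keep log segments for 7 days
>     log.retention.hours=168
>     # keep committed consumer offsets for 14 days
>     offsets.retention.minutes=20160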
>
> Several times after bouncing a broker, we faced a situation where the
> consumer group started to re-consume old events.
>
> Consumer group details:
>
> 1. enable.auto.commit = false (manual commits; see the sketch after this
> list)
> 2. we tried graceful group shutdowns, kill -9, and terminating AWS nodes
> 3. we never experienced re-consumption in those cases
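>
> For reference, a manual-commit consumer loop with these settings looks
> roughly like the following minimal sketch (the topic name, group id, and
> the process() handler are placeholders; error handling is omitted):
>
>     import java.util.Collections;
>     import java.util.Properties;
>     import org.apache.kafka.clients.consumer.ConsumerRecord;
>     import org.apache.kafka.clients.consumer.ConsumerRecords;
>     import org.apache.kafka.clients.consumer.KafkaConsumer;
>
>     Properties props = new Properties();
>     props.put("bootstrap.servers", "broker1:9092,broker2:9092,broker3:9092");
>     props.put("group.id", "test-group");       // placeholder group id
>     props.put("enable.auto.commit", "false");  // manual commits only
>     props.put("key.deserializer",
>               "org.apache.kafka.common.serialization.StringDeserializer");
>     props.put("value.deserializer",
>               "org.apache.kafka.common.serialization.StringDeserializer");
>
>     KafkaConsumer<String, String> consumer = new KafkaConsumer<>(props);
>     consumer.subscribe(Collections.singletonList("events")); // placeholder topic
>     try {
>         while (true) {
>             ConsumerRecords<String, String> records = consumer.poll(100);
>             for (ConsumerRecord<String, String> record : records) {
>                 process(record); // placeholder for the actual handling logic
>             }
>             // commit offsets only after the polled records are processed
>             consumer.commitSync();
>         }
>     } finally {
>         consumer.close();
>     }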
>
> What can cause this re-consumption of old events? Is it related to bouncing
> one of the brokers? What should we look for in the logs? Are there any
> broker settings to try?
>
> Thanks in advance.
>
