"From what I understand, there's currently no way to prevent this type of
shuffling of partitions from worker to worker while the consumers are under
maintenance. I'm also not sure if this an issue I don't need to worry
about."
If you don't want or need automated rebalancing or partition reassignment
among clients, you could have each worker/client subscribe directly to
individual partitions using consumer.assign() rather than
consumer.subscribe(). That way, when client 1 is restarted, the data in its
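As a sketch, manual assignment with the Java client might look like the
following. The topic name "events", the partition number, and the broker
address are placeholders, not anything from the thread:

```java
import java.util.Collections;
import java.util.Properties;

import org.apache.kafka.clients.consumer.KafkaConsumer;
import org.apache.kafka.common.TopicPartition;

public class PinnedConsumer {
    public static void main(String[] args) {
        // Placeholder connection settings; adjust for your cluster.
        Properties props = new Properties();
        props.put("bootstrap.servers", "localhost:9092");
        props.put("key.deserializer",
                "org.apache.kafka.common.serialization.StringDeserializer");
        props.put("value.deserializer",
                "org.apache.kafka.common.serialization.StringDeserializer");
        // No group.id is required for assign(); without one, offsets must be
        // managed by the application rather than committed to Kafka.

        KafkaConsumer<String, String> consumer = new KafkaConsumer<>(props);
        // Pin this worker to partition 0 of the hypothetical topic "events".
        // With assign() there is no group membership, so no coordinator-driven
        // rebalance can ever move this partition to another worker.
        consumer.assign(Collections.singletonList(new TopicPartition("events", 0)));
    }
}
```

The trade-off is that you give up failover: if this worker dies, nothing
picks up its partition until you restart it or reassign by hand.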
What I mean by "flapping" in this context is unnecessary rebalancing
happening. The example I would give is what a Hadoop Datanode would do in
case of a shutdown. By default, it will wait 10 minutes before replicating
the blocks owned by the Datanode so routine maintenance wouldn't cause
Not sure I understand your question about flapping. The LeaveGroupRequest
is only sent on a graceful shutdown. If a consumer knows it is going to
shut down, it is good to proactively make sure the group knows it needs to
rebalance work, because some of the partitions that were handled by the
The coordinator will immediately move the group into a rebalance if it
needs one. The reason LeaveGroupRequest was added was to avoid having to
wait for the session timeout before completing a rebalance. So aside from
the latency of cleanup/committing offsets/rejoining after a heartbeat,
rolling
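To make the timeout trade-off concrete, these are the consumer settings
that govern how quickly the coordinator notices a member that vanished
without sending a LeaveGroupRequest. The values shown are illustrative,
not recommendations:

```properties
# If the consumer crashes (no LeaveGroupRequest is sent), the coordinator
# only starts a rebalance after heartbeats have been missing for this long.
session.timeout.ms=10000

# How often the consumer sends heartbeats to the coordinator; this should
# be well below the session timeout (typically no more than a third of it).
heartbeat.interval.ms=3000
```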
Hi Kafka folks!
When a consumer is closed, it will issue a LeaveGroupRequest. Does anyone
know how long the coordinator waits before reassigning the partitions that
were assigned to the leaving consumer to a new consumer? I ask because I'm
trying to understand the behavior of consumers if you're