OK. Thanks Cody!
On Fri, Apr 29, 2016 at 12:41 PM, Cody Koeninger wrote:
> If worker to broker communication breaks down, the worker will sleep
> for refresh.leader.backoff.ms before throwing an error, at which point
> normal spark task retry (spark.task.maxFailures) comes into play.
> If driver to broker communication breaks down, the driver will sleep
> for refresh.leader.backoff.ms before throwing an error, at which point
> driver-side retry (spark.streaming.kafka.maxRetries) comes into play.
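For reference, a minimal sketch of the knobs discussed above (assuming a direct-stream setup along the lines of KafkaUtils.createDirectStream; the broker address is a placeholder, and the values shown are the ones mentioned in this thread, not recommendations):

```java
import java.util.HashMap;
import java.util.Map;

public class RetryKnobs {
    // Kafka consumer params as passed to a direct stream: how long a
    // consumer sleeps before refreshing leader metadata after a leader loss.
    public static Map<String, String> kafkaParams() {
        Map<String, String> p = new HashMap<>();
        p.put("metadata.broker.list", "broker1:9092"); // placeholder broker
        p.put("refresh.leader.backoff.ms", "2000");    // Kafka default is 200
        return p;
    }

    // Spark-side setting (set via SparkConf in a real job):
    // spark.task.maxFailures governs worker-side task retries.
    public static Map<String, String> sparkSettings() {
        Map<String, String> s = new HashMap<>();
        s.put("spark.task.maxFailures", "4");          // Spark default
        return s;
    }

    public static void main(String[] args) {
        System.out.println(kafkaParams().get("refresh.leader.backoff.ms"));
    }
}
```

In a real job these maps would be handed to the stream constructor and the SparkConf respectively; they are shown as plain maps here only to make the parameter names concrete.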
OK. So on the Kafka side they use rebalance.backoff.ms of 2000, which is
the default for rebalancing, and they say that a refresh.leader.backoff.ms
of 200 for refreshing the leader is very aggressive; they suggested we
increase it to 2000. Even after increasing it to 2500 I still get
LeaderLost errors.
Seems like it'd be better to look into the Kafka side of things to
determine why you're losing leaders frequently, as opposed to trying
to put a bandaid on it.
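One way to start looking at the Kafka side is the kafka-topics.sh tool that ships with Kafka (zkhost:2181 and my-topic below are placeholders for your ZooKeeper quorum and topic):

```shell
# Show the current leader, replica set, and ISR for each partition:
kafka-topics.sh --describe --zookeeper zkhost:2181 --topic my-topic

# List partitions whose ISR has shrunk, a common precursor to leader loss:
kafka-topics.sh --describe --zookeeper zkhost:2181 --under-replicated-partitions
```

Frequent leader changes usually show up there first, and the broker and controller logs will say why elections are happening.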
On Wed, Apr 27, 2016 at 11:49 AM, SRK wrote:
> Hi,
>
> We seem to be getting a lot of LeaderLostExceptions, and our source stream
> is working with the default value of rebalance.backoff.ms, which is 2000.
> I was thinking of increasing this value to 5000. Any suggestions on this?
>
> Thanks!