Mike,
The endless rebalance errors occur due to the error that Mayuresh just pasted.
The rebalance attempts fail because of the conflict in the zkNode.
Below is the exact trace.
2014-12-09 13:22:11 k.u.ZkUtils$ [INFO] I wrote this conflicted ephemeral node
Hi guys,
At HubSpot we think the issue is related to slow consumers. During a
rebalance, one of the first things the consumer does is signal a shutdown
to the fetcher [1] [2], in order to relinquish ownership of the partitions.
It then waits for all fetcher threads to shut down, which
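The blocking wait described above can be sketched with a toy model (the class and method names here are hypothetical, not Kafka's actual code): every fetcher is told to stop, and the rebalance cannot proceed until the slowest one has actually exited.

```java
import java.util.concurrent.CountDownLatch;
import java.util.concurrent.TimeUnit;

public class FetcherShutdownSketch {
    // Toy model of the rebalance step described above: signal every
    // fetcher thread to stop, then block until all of them have exited.
    // A single slow fetcher delays the whole rebalance; if the wait
    // outlasts the retry window, the rebalance attempt fails.
    public static boolean shutdownAndWait(int fetchers, long slowestMs, long timeoutMs)
            throws InterruptedException {
        CountDownLatch done = new CountDownLatch(fetchers);
        for (int i = 0; i < fetchers; i++) {
            final long work = (i == 0) ? slowestMs : 1L; // one deliberately slow fetcher
            Thread t = new Thread(() -> {
                try {
                    Thread.sleep(work); // stand-in for draining in-flight fetches
                } catch (InterruptedException ignored) {
                }
                done.countDown();
            });
            t.start();
        }
        // The rebalance cannot proceed until every fetcher has stopped.
        return done.await(timeoutMs, TimeUnit.MILLISECONDS);
    }
}
```

The point is only that the wait is gated on the slowest consumer thread, which is why slow consumers and short backoff settings interact badly.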
No, we don't normally see conflicts. We'll see endless attempts to
rebalance.
-Mike
On Thu, Mar 26, 2015 at 5:15 PM, Mayuresh Gharat gharatmayures...@gmail.com
wrote:
Did you see something like this in any of the consumer logs:
"Conflict in … data: … stored data: …"?
Thanks,
Can you share a reproducible test case?
On Tue, Dec 9, 2014 at 7:11 AM, Mohit Kathuria mkathu...@sprinklr.com
wrote:
Neha,
The same issue recurred with just 2 consumer processes. The exception was
related to a conflict in writing the ephemeral node. Below was the exception.
Topic name is
Any suggestions on what might be going on here? We are very much blind here
and our application is being affected by this.
-Mohit
On Tue, Dec 9, 2014 at 8:41 PM, Mohit Kathuria mkathu...@sprinklr.com
wrote:
Neha,
The same issue recurred with just 2 consumer processes. The exception was
related to a conflict in writing the ephemeral node. Below was the exception.
Topic name is
lst_plugin_com.spr.listening.plugin.impl.plugins.SemantriaEnrichmentPlugin
with 30 partitions. The 2 processes were running
Hi Mohit Kathuria,
We are facing the same issue. We are using the same versions of Kafka and ZK.
Did you figure out what was happening?
Thanks,
http://grokbase.com/user/Mohit-Kathuria/PJUXxkD8QjsC1Fj1WFrbJg
--
Mario Lazaro | Software Engineer, Big Data
GumGum | http://www.gumgum.com/
A rebalance should trigger on all consumers when you add a new consumer to
the group. If you don't see the zookeeper watch fire, the consumer may have
somehow lost the watch. We have seen this behavior on older zk versions; I
wonder if that bug got reintroduced. To verify if this is the case, you
Hi all,
Can someone help here? We are getting constant rebalance failures each time
a consumer is added beyond a certain number. We did quite a lot of debugging
on this and are still not able to figure out the pattern.
-Thanks,
Mohit
On Mon, Nov 3, 2014 at 10:53 PM, Mohit Kathuria
Neha,
Looks like an issue with the consumer rebalance not being able to complete
successfully. We were able to reproduce the issue on a topic with 30
partitions, 3 consumer processes (p1, p2 and p3), and the properties
rebalance.max.retries=40 and rebalance.backoff.ms=10000 (10s).
Before the process p3 was
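With the settings above (40 retries, 10s backoff), a consumer can keep retrying for well over six minutes before giving up. A minimal sketch of that retry loop, with hypothetical names and the backoff sleep elided, looks like this:

```java
import java.util.function.IntPredicate;

public class RebalanceRetrySketch {
    // Toy model of the high-level consumer's rebalance loop: up to
    // rebalance.max.retries attempts, sleeping rebalance.backoff.ms between
    // failures (the sleep is elided here so the sketch runs instantly).
    public static int attemptsUntilSuccess(int maxRetries, IntPredicate succeedsOnAttempt) {
        for (int attempt = 1; attempt <= maxRetries; attempt++) {
            if (succeedsOnAttempt.test(attempt)) {
                return attempt; // rebalance completed on this attempt
            }
            // real consumer: Thread.sleep(rebalanceBackoffMs);
        }
        return -1; // retries exhausted: the consumer fails the rebalance
    }
}
```

If the underlying conflict never clears, all 40 attempts fail and the consumer ends up owning nothing, which matches the "endless rebalance" symptom in this thread.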
Neha,
In my last reply, the subject got changed, which is why it got marked as a new
message on
http://mail-archives.apache.org/mod_mbox/kafka-users/201411.mbox/date.
Please ignore that. The text below is the reply in continuation to
Dear Experts,
We recently updated to kafka v0.8.1.1 with zookeeper v3.4.5. I have a
topic with 30 partitions and 2 replicas. We are using the high-level consumer
API.
Each consumer process, which is a Storm topology, has 5 streams which
connect to 1 or more partitions. We are not using Storm's
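For reference, the 0.8 high-level consumer settings touched on in this thread would sit in the consumer's properties like this (all values here are illustrative, not the poster's actual configuration):

```properties
# Kafka 0.8 high-level consumer settings relevant to this thread.
# Hosts, group name, and values are illustrative.
zookeeper.connect=zk1:2181,zk2:2181,zk3:2181
group.id=example-consumer-group
# How long ZK waits before expiring a dead consumer's session
# (and deleting its ephemeral ownership/id nodes).
zookeeper.session.timeout.ms=6000
# Rebalance retry behavior discussed elsewhere in this thread.
rebalance.max.retries=40
rebalance.backoff.ms=10000
```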
Mohit,
I wonder if it is related to
https://issues.apache.org/jira/browse/KAFKA-1585. When zookeeper expires a
session, it doesn't delete the ephemeral nodes immediately. So if you end
up trying to recreate ephemeral nodes quickly, it could either be in the
valid latest session or from the
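The workaround that KAFKA-1585 discusses is essentially to treat the conflict as transient: back off and retry the ephemeral create until ZooKeeper reaps the stale node from the expired session. A toy model of that pattern, using a plain map as a stand-in for ZooKeeper (this is not the real client API):

```java
import java.util.concurrent.ConcurrentMap;

public class EphemeralRecreateSketch {
    // Toy model of the KAFKA-1585 situation: after a session expiry the old
    // ephemeral node can linger briefly, so a naive re-create conflicts.
    // The remedy is to back off and retry until ZooKeeper deletes the stale
    // node. The map is a stand-in for ZooKeeper, not the real client.
    public static int createWithRetry(ConcurrentMap<String, String> zk,
                                      String path, String data, int maxRetries) {
        for (int attempt = 1; attempt <= maxRetries; attempt++) {
            if (zk.putIfAbsent(path, data) == null) {
                return attempt; // node created successfully
            }
            // Conflict: a stale ephemeral from the expired session is present.
            // Real code sleeps here; we simulate ZooKeeper's delayed session
            // cleanup by reaping the stale node after the second failed attempt.
            if (attempt == 2) {
                zk.remove(path);
            }
        }
        return -1; // still conflicted after all retries
    }
}
```

If the node that conflicts was written by the consumer's own previous (expired) session, retrying like this eventually succeeds; endless conflicts suggest something else is still holding or rewriting the node.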