Any timeline on an official 0.8.2.1 release? Were there any issues found
with rc2? Just checking in because we are anxious to update our brokers but
waiting for the patch release. Thanks.
On Thu, Mar 5, 2015 at 12:01 AM, Neha Narkhede n...@confluent.io wrote:
+1. Verified quick start, unit tests.
+1
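(For anyone verifying a release candidate the same way, the usual steps are roughly the following. This is a sketch from memory of the 0.8.2-era layout, not commands taken from this thread:)

    # Unit tests, from the source checkout:
    ./gradlew test

    # Quick start, from the binary distribution: start ZooKeeper,
    # then a broker, each in its own terminal:
    bin/zookeeper-server-start.sh config/zookeeper.properties
    bin/kafka-server-start.sh config/server.properties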
On … at 4:40:31 AM, Solon Gordon so...@knewton.com wrote:
Thanks for the fast response. I did a quick test and initial results look
promising. When I swapped in the patched version, CPU usage dropped from
~150% to ~65%. Still a bit higher than what I see with 0.8.1.1 but much
more reasonable.

… would recommend people hold off on 0.8.2 upgrades until we have a
handle on this.
-Jay
On Fri, Feb 13, 2015 at 1:47 PM, Solon Gordon so...@knewton.com wrote:
The partitions nearly all have replication factor 2 (a few stray ones
have 1), and our producers use …
I just wanted to bump this issue to see if anyone has thoughts. Based on
the error message it seems like the broker is attempting to consume nearly
2GB of data in a single fetch. Is this expected behavior?
Please let us know if more details would be helpful or if it would be
better for us to file a JIRA.
… to support really large messages and increase these values, you may run
into OOM issues.
Gwen
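For reference, the settings Gwen appears to be referring to are most likely
the ones below. This is a sketch; the values shown are the 0.8.x defaults as
best I can tell, not numbers from this thread:

    # server.properties (broker)
    # Largest message the broker will accept:
    message.max.bytes=1000000
    # What replicas fetch per partition; must be >= message.max.bytes
    # or replication of large messages stalls:
    replica.fetch.max.bytes=1048576

    # consumer config (high-level consumer)
    # Per-partition fetch size; must also be >= message.max.bytes:
    fetch.message.max.bytes=1048576

Note that the fetch sizes are per partition, so a consumer or replica
fetcher following many partitions multiplies them, which is presumably the
OOM risk mentioned above.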
On Wed, Dec 10, 2014 at 7:48 AM, Solon Gordon so...@knewton.com wrote:
… and controlled shutdown. Would you mind trying out 0.8.2-beta?
On Fri, Nov 7, 2014 at 11:52 AM, Solon Gordon so...@knewton.com wrote:
We're using 0.8.1.1 with auto.leader.rebalance.enable=true.
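For context, a sketch of the broker settings under discussion. The names are
the real 0.8.x config keys; the non-default values here are illustrative:

    # server.properties
    # Periodically move leadership back to each partition's
    # preferred replica (default is false in 0.8.1.x, if I
    # recall correctly):
    auto.leader.rebalance.enable=true
    # How often to check for imbalance, and the tolerated percentage:
    leader.imbalance.check.interval.seconds=300
    leader.imbalance.per.broker.percentage=10
    # Migrate leaders off a broker before it exits:
    controlled.shutdown.enable=true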
On Fri, Nov 7, 2014 at 2:35 PM, Guozhang Wang wangg...@gmail.com wrote:
Solon,
Which version of Kafka are you running and are you enabling auto leader
rebalance at the same time?
Guozhang
Hi all,
My team has observed that if a broker process is killed in the middle of
the controlled shutdown procedure, the remaining brokers start spewing
errors and do not properly rebalance leadership. The cluster cannot recover
without major manual intervention.
Here is how to reproduce the issue: …
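(The reproduction steps themselves are cut off in the archive. A minimal
sketch of the scenario as described, assuming controlled.shutdown.enable=true
so that a plain SIGTERM triggers the controlled shutdown; the pid lookup is
illustrative:)

    # Begin a controlled shutdown, then kill the broker before
    # leadership migration finishes:
    BROKER_PID=$(pgrep -f kafka.Kafka)
    kill -TERM "$BROKER_PID"    # starts controlled shutdown
    sleep 1                     # partway through leader migration
    kill -9 "$BROKER_PID"       # hard kill mid-procedure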
On Fri, Nov 7, 2014 at 8:41 AM, Solon Gordon so...@knewton.com wrote:
Hi all,
I've been investigating how Kafka 0.8.1.1 responds to the scenario where
one broker loses connectivity (due to something like a hardware issue or
network partition). It looks like the brokers themselves adjust within a
few seconds to reassign leaders and shrink ISRs. However, I see …
… re-connect to it without refreshing metadata. Could you file a JIRA for
this?
Guozhang
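To simulate the connectivity loss described above, one common approach is to
black-hole the broker's port on its host. A sketch; 9092 is the conventional
broker port and the rule is illustrative:

    # Silently drop inbound traffic to the broker port. DROP rather
    # than REJECT, so peers hang instead of failing fast, which is
    # closer to a real network partition:
    iptables -A INPUT -p tcp --dport 9092 -j DROP

    # Remove the rule afterwards:
    iptables -D INPUT -p tcp --dport 9092 -j DROP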
On Tue, Nov 4, 2014 at 10:43 AM, Solon Gordon so...@knewton.com wrote: