Re: [kafka-clients] Re: [VOTE] 0.8.2.1 Candidate 2

2015-03-09 Thread Solon Gordon
Any timeline on an official 0.8.2.1 release? Were there any issues found with rc2? Just checking in because we are anxious to update our brokers but waiting for the patch release. Thanks. On Thu, Mar 5, 2015 at 12:01 AM, Neha Narkhede n...@confluent.io wrote: +1. Verified quick start, unit

Re: [VOTE] 0.8.2.1 Candidate 2

2015-03-02 Thread Solon Gordon
+1

Re: Increased CPU usage with 0.8.2-beta

2015-02-16 Thread Solon Gordon
at 4:40:31 AM Solon Gordon so...@knewton.com wrote: Thanks for the fast response. I did a quick test and initial results look promising. When I swapped in the patched version, CPU usage dropped from ~150% to ~65%. Still a bit higher than what I see with 0.8.1.1 but much more

Re: Increased CPU usage with 0.8.2-beta

2015-02-13 Thread Solon Gordon
would recommend people hold off on 0.8.2 upgrades until we have a handle on this. -Jay On Fri, Feb 13, 2015 at 1:47 PM, Solon Gordon so...@knewton.com wrote: The partitions nearly all have replication factor 2 (a few stray ones have 1), and our producers use

Re: OutOfMemoryException when starting replacement node.

2014-12-10 Thread Solon Gordon
I just wanted to bump this issue to see if anyone has thoughts. Based on the error message it seems like the broker is attempting to consume nearly 2GB of data in a single fetch. Is this expected behavior? Please let us know if more details would be helpful or if it would be better for us to file

Re: OutOfMemoryException when starting replacement node.

2014-12-10 Thread Solon Gordon
to support really large messages and increase these values, you may run into OOM issues. Gwen On Wed, Dec 10, 2014 at 7:48 AM, Solon Gordon so...@knewton.com wrote: I just wanted to bump this issue to see if anyone has thoughts. Based on the error message it seems like the broker is attempting
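The "these values" in Gwen's truncated reply most likely refer to the 0.8.x size limits that govern how much data a replica requests per fetch. A minimal sketch of the relevant broker properties, assuming that is the case (values here are the 0.8 defaults, shown for illustration only, not taken from the thread):

```properties
# 0.8.x broker settings that bound fetch size (illustrative values).
# replica.fetch.max.bytes caps how many bytes a follower requests per
# partition per fetch and must be >= message.max.bytes; raising both to
# support very large messages can exhaust the broker heap, which matches
# the OutOfMemoryException described in this thread.
message.max.bytes=1000000
replica.fetch.max.bytes=1048576
# Consumer-side analogue for the old high-level consumer:
# fetch.message.max.bytes=1048576
```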

Re: OutOfMemoryException when starting replacement node.

2014-12-10 Thread Solon Gordon
, Solon Gordon so...@knewton.com wrote: I just wanted to bump this issue to see if anyone has thoughts. Based on the error message it seems like the broker is attempting to consume nearly 2GB of data in a single fetch. Is this expected behavior? Please let us know if more details would

Re: Interrupting controlled shutdown breaks Kafka cluster

2014-11-10 Thread Solon Gordon
and controlled shutdown. Would you mind trying out 0.8.2-beta? On Fri, Nov 7, 2014 at 11:52 AM, Solon Gordon so...@knewton.com wrote: We're using 0.8.1.1 with auto.leader.rebalance.enable=true. On Fri, Nov 7, 2014 at 2:35 PM, Guozhang Wang wangg...@gmail.com wrote: Solon, Which

Interrupting controlled shutdown breaks Kafka cluster

2014-11-07 Thread Solon Gordon
Hi all, My team has observed that if a broker process is killed in the middle of the controlled shutdown procedure, the remaining brokers start spewing errors and do not properly rebalance leadership. The cluster cannot recover without major manual intervention. Here is how to reproduce the
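The "controlled shutdown procedure" referenced here is the 0.8.x broker feature that migrates partition leadership away before the broker exits. A sketch of the broker properties involved, assuming a stock 0.8.1.1 configuration (values illustrative):

```properties
# 0.8.x controlled-shutdown settings (illustrative values). When enabled,
# a stopping broker first moves its partition leaders to other replicas;
# killing the process in the middle of that migration is the failure mode
# reported in this thread.
controlled.shutdown.enable=true
controlled.shutdown.max.retries=3
controlled.shutdown.retry.backoff.ms=5000
```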

Re: Interrupting controlled shutdown breaks Kafka cluster

2014-11-07 Thread Solon Gordon
We're using 0.8.1.1 with auto.leader.rebalance.enable=true. On Fri, Nov 7, 2014 at 2:35 PM, Guozhang Wang wangg...@gmail.com wrote: Solon, Which version of Kafka are you running and are you enabling auto leader rebalance at the same time? Guozhang On Fri, Nov 7, 2014 at 8:41 AM, Solon

Producer timeout setting not respected

2014-11-04 Thread Solon Gordon
Hi all, I've been investigating how Kafka 0.8.1.1 responds to the scenario where one broker loses connectivity (due to something like a hardware issue or network partition). It looks like the brokers themselves adjust within a few seconds to reassign leaders and shrink ISRs. However, I see
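The "timeout setting" in the subject line presumably refers to the old (Scala) producer's request timeout. A sketch of the 0.8.1.x producer properties one would expect to bound a send to an unreachable broker, assuming that reading (values illustrative; the thread reports the timeout not taking effect as expected):

```properties
# Old 0.8.1.x producer settings relevant to broker-failure handling
# (illustrative values).
request.timeout.ms=10000
request.required.acks=1
message.send.max.retries=3
retry.backoff.ms=100
```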

Re: Producer timeout setting not respected

2014-11-04 Thread Solon Gordon
-connect to it refreshing metadata. Could you file a JIRA for this? Guozhang On Tue, Nov 4, 2014 at 10:43 AM, Solon Gordon so...@knewton.com wrote: Hi all, I've been investigating how Kafka 0.8.1.1 responds to the scenario where one broker loses connectivity (due to something