[…4-13 21:37:20,308] INFO [GroupCoordinator 6]: Stabilized group
trackers-etl generation 4987 (__consumer_offsets-29)
(kafka.coordinator.group.GroupCoordinator)
Has anyone seen this before? Should I file a JIRA ticket? Was there a
better process than restarting the broker?
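If it comes to filing a JIRA, a state dump from outside the broker is
useful; kafka-python's admin client can describe the group. This is only a
minimal sketch: the bootstrap address is a placeholder, and the group name
is taken from the log line above.

    from kafka.admin import KafkaAdminClient

    # Placeholder bootstrap address; any broker in the cluster works.
    admin = KafkaAdminClient(bootstrap_servers="localhost:9092")

    # One GroupInformation per group, including its current state
    # (Stable, PreparingRebalance, ...) and the live member list.
    for group in admin.describe_consumer_groups(["trackers-etl"]):
        print(group.group, group.state, len(group.members))

Polling that in a loop while the coordinator logs repeated rebalances
should show whether the group ever actually settles.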
--
James Brown
Systems Engineer
> …flight recorder profile.
>
> Ismael
>
> On Fri, Dec 27, 2019, 7:07 PM James Brown wrote:
>
> > I just upgraded one of our test clusters from 2.3.1 to 2.4.0 and the
> > system CPU usage very noticeably increased (from approximately 35% of a
> > CPU to ap…

…constant or worse when we upgrade something with more load on it?
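If it helps to quantify the regression, something like the following (a
sketch using psutil; the PID is a placeholder for the broker's process id)
splits the broker's CPU into user vs. system time over a one-minute window:

    import time
    import psutil

    BROKER_PID = 12345  # placeholder: the Kafka broker's pid

    proc = psutil.Process(BROKER_PID)
    before = proc.cpu_times()
    time.sleep(60)
    after = proc.cpu_times()

    # Average fraction of one CPU spent in user vs. kernel space.
    print("user:   %.1f%%" % ((after.user - before.user) / 60 * 100))
    print("system: %.1f%%" % ((after.system - before.system) / 60 * 100))

Running it before and after the upgrade on the same workload makes the
comparison less eyeball-y than watching top.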
--
James Brown
Systems Engineer
…broker 6 and as soon as it came up, it assumed leadership of the
partition and everything started working fine.
Has anyone else seen this behavior before? The fact that a partition was
unavailable but the mbean showed 0 under-replicated and 0 unavailable
topics is extremely concerning to me.
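For anyone wanting to cross-check the mbean, the leader and ISR that the
controller last wrote for a partition live in ZooKeeper and can be read
directly. A sketch with kazoo; the ZK address, topic, and partition number
are placeholders:

    import json
    from kazoo.client import KazooClient

    zk = KazooClient(hosts="localhost:2181")  # placeholder ZK address
    zk.start()

    # Leader/ISR as recorded by the controller for one partition.
    data, _ = zk.get("/brokers/topics/mytopic/partitions/0/state")
    state = json.loads(data)
    print("leader:", state["leader"], "isr:", state["isr"])

    zk.stop()

A partition whose znode shows leader -1 (or an ISR that excludes the
leader) while the mbean reads 0 would confirm the metrics are lying.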
--
James Brown
Systems Engineer
For what it's worth, shutting down the entire cluster and then restarting
it did address this issue.
I'd love anyone's thoughts on what the "correct" fix would be here.
The following is also appearing in the logs a lot, if anyone has any ideas:
INFO Partition [easypost.syslog,7] on broker 1: Cached zkVersion [647] not
equal to that in zookeeper, skip updating ISR (kafka.cluster.Partition)
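When that message loops, one thing worth checking is which broker
ZooKeeper currently lists as the controller, since this failure mode is
often associated with brokers acting on a stale controller view after a ZK
session blip. A kazoo sketch (the ZK address is a placeholder):

    import json
    from kazoo.client import KazooClient

    zk = KazooClient(hosts="localhost:2181")  # placeholder ZK address
    zk.start()

    # /controller names the broker currently acting as controller.
    data, _ = zk.get("/controller")
    print("active controller:", json.loads(data)["brokerid"])

    # /controller_epoch is a plain integer, bumped on every re-election.
    epoch, _ = zk.get("/controller_epoch")
    print("controller epoch:", int(epoch))

    zk.stop()

If the epoch keeps climbing, or the broker named there doesn't match what
the stuck broker believes, that's consistent with the stale-controller
explanation.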
On Fri, Apr 28, 2017 at 10:43 AM, James Brown wrote:
> We're …

…to only have whatever the third broker was in their replica set as their
replica set? Do I need to temporarily enable unclean elections?
I've never seen a cluster fail this way...
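On the unclean-election question: before toggling anything, it's worth
reading the current value back from a broker. On clusters new enough to
support the DescribeConfigs API, kafka-python can do this; a sketch, with
the bootstrap address and broker id as placeholders, where the inner loop
just unpacks the raw protocol tuples:

    from kafka.admin import (KafkaAdminClient, ConfigResource,
                             ConfigResourceType)

    admin = KafkaAdminClient(bootstrap_servers="localhost:9092")

    # Pull broker 1's live config and pick out the election setting.
    resource = ConfigResource(ConfigResourceType.BROKER, "1")
    for response in admin.describe_configs([resource]):
        for res in response.resources:
            # res: (error_code, error_msg, type, name, config_entries)
            for entry in res[4]:
                if entry[0] == "unclean.leader.election.enable":
                    print(entry[0], "=", entry[1])

The same knob also exists as a topic-level config, which is a narrower
lever than flipping it cluster-wide.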
--
James Brown
Systems Engineer
Jeff: This was with 0.9.0.1. It has not recurred since upgrading to
0.10.1.0.
On Fri, Oct 28, 2016 at 9:28 PM, Jeff Widman wrote:
> James,
> What version did you experience the problem with?
>
> On Oct 28, 2016 6:26 PM, "James Brown" wrote:
>
> > I was hav…
…protocol_type='consumer',
protocol='',
members=[(member_id='tracking.etl-3c81b0e8-7683-474a-ab85-d809392db6ed',
client_id='tracking.etl', client_host='/fd00:ea51:d057:0:1:0:0:2',
member_metadata=b'', member_assignment=b'')])])
…to see if that fixes it; I figured I'd e-mail out just in case there was
anything else folks wanted me to look at first.
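Since the member_assignment in the dump above is empty, it may also be
worth dumping what the group actually has committed. A kafka-python sketch
(the bootstrap address is a placeholder, and the group name is guessed
from the output above):

    from kafka.admin import KafkaAdminClient

    admin = KafkaAdminClient(bootstrap_servers="localhost:9092")

    # TopicPartition -> OffsetAndMetadata for everything the group has
    # committed; an empty dict means no offsets are stored at all.
    offsets = admin.list_consumer_group_offsets("tracking.etl")
    for tp, om in sorted(offsets.items()):
        print(tp.topic, tp.partition, om.offset)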
--
James Brown
Systems Engineer
> I'm wondering if you could be hitting
> https://issues.apache.org/jira/browse/KAFKA-3802 ? If not, is there a way
> to reproduce this reliably?
>
> Jun
>
> On Mon, Oct 31, 2016 at 4:14 PM, James Brown wrote:
>
> > I just finished upgrading our main production cluster to 0.10
Incidentally, I'd like to note that this did *not* occur in my testing
environment (which didn't unexpectedly expire any segments after
upgrading), so if it is a feature, it's certainly a hit-or-miss one.
… (kafka.log.Log)
I suspect it's too late to undo anything related to this, and I don't
actually think any of our consumers were relying on this data, but I
figured I'd send along this report and see if anybody else has seen
behavior like this.
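For anyone wanting to check whether the same thing happened to them, the
earliest offset still on disk per partition is easy to pull; a sudden jump
after the upgrade means old segments really were deleted. A kafka-python
sketch (the bootstrap address and topic name are placeholders):

    from kafka import KafkaConsumer, TopicPartition

    consumer = KafkaConsumer(bootstrap_servers="localhost:9092")

    topic = "mytopic"  # placeholder for an affected topic
    parts = [TopicPartition(topic, p)
             for p in consumer.partitions_for_topic(topic)]

    # Earliest retained offset for each partition of the topic.
    for tp, offset in sorted(consumer.beginning_offsets(parts).items()):
        print(tp.partition, offset)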
Thanks,
--
James Brown
Systems Engineer
> …the following entries:
>
> log.cleaner.enable = true
>
> offsets.retention.minutes = 1440
>
>
> I tried looking through the issues on JIRA but didn't see a reported
> issue. Does anyone know what's going on, and how I can fix this?
>
> Thanks.
>
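With offsets.retention.minutes=1440, the broker is allowed to drop
committed offsets roughly a day after a group stops committing, which is
one common cause of offsets disappearing. A group's stored offset can be
checked directly; a kafka-python sketch with placeholder names:

    from kafka import KafkaConsumer, TopicPartition

    # group_id matters: committed() reads this group's stored offset.
    consumer = KafkaConsumer(bootstrap_servers="localhost:9092",
                             group_id="my-group",
                             enable_auto_commit=False)

    tp = TopicPartition("mytopic", 0)  # placeholder topic/partition
    # Returns the stored offset, or None once the broker has expired it.
    print(consumer.committed(tp))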
--
James Brown
Systems Engineer
>> …partition (at
>> least for a couple of hours as I monitor it).
>> Zookeeper cluster is healthy.
>>
>> ls /brokers/ids
>> [104224874, 104224875, 104224863, 104224864, 104224871, 104224867,
>> 104224868, 104224865, 104224866, 104224876, 104224877, 104224869,
>> 104224878, 104224879]
>>
>> That broker is not registered in ZK.
>>
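For scripting a check like the one above, the same list is available
programmatically; a kazoo sketch (the ZK address is a placeholder):

    from kazoo.client import KazooClient

    zk = KazooClient(hosts="localhost:2181")  # placeholder ZK address
    zk.start()

    # Same data as `ls /brokers/ids` in zkCli: one ephemeral znode per
    # live, registered broker.
    print(sorted(int(b) for b in zk.get_children("/brokers/ids")))

    zk.stop()

Since those znodes are ephemeral, a broker that's running but missing from
the list has lost its ZooKeeper session without re-registering.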
--
James Brown
Systems Engineer