Upgrading the broker to version 0.11.0.1 has fixed the problem.
Thanks.
To: users@kafka.apache.org
Subject: Re: Incorrect consumer offsets after broker restart 0.11.0.0
It is a known bug, fixed in 0.11.0.1.
On Oct 10, 2017 15:20, "Phil Luckhurst" wrote:
> We have a Kafka broker we use for testing that we have recently
> updated from 0.9.0.1 to 0.11.0.0 and our Java consumer
org.apache.kafka.clients.consumer.internals.Fetcher: Resetting offset for
partition ta-eng-cob1-ayla-0 to the committed offset 1828015
There don't appear to be any errors in the broker logs to indicate a problem,
so the question is what is making the broker return the incorrect offset when
it is restarted?
Thanks,
Phil Luckhurst
…only one metadata
> refresh. I have seen issues like this only for a short time, when a
> leader went down. Continuous occurrence of this issue I have seen only
> once, where the leader went down and there was probably no ISR.
>
> Vinay
>
> On Thu, Apr 28, 2016 at 9:06 AM, Phil Luckhurst wrote:
Hi Vinay,
This statement is very interesting.
"I noticed that in case where a consumer is marked dead or a rebalance is in
progress, kafka throws CommitFailedException. A KafkaException is thrown only
when something unknown has happened which is not yet categorized."
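That distinction suggests a handling pattern in the poll loop. The sketch below is against the 0.9-era Java client (not code from this thread; `process()` is an assumed application helper, and consumer construction and subscription are elided). It treats CommitFailedException as the expected consumer-marked-dead / rebalance-in-progress case and lets any other KafkaException surface:

```java
import org.apache.kafka.clients.consumer.CommitFailedException;
import org.apache.kafka.clients.consumer.ConsumerRecord;
import org.apache.kafka.clients.consumer.ConsumerRecords;
import org.apache.kafka.clients.consumer.KafkaConsumer;
import org.apache.kafka.common.KafkaException;

// Sketch only: process() is an assumed application helper.
void pollLoop(KafkaConsumer<String, String> consumer) {
    while (true) {
        ConsumerRecords<String, String> records = consumer.poll(100);
        for (ConsumerRecord<String, String> record : records) {
            process(record);
        }
        try {
            consumer.commitSync();
        } catch (CommitFailedException e) {
            // Expected case: consumer marked dead or rebalance in progress.
            // The partitions may already be reassigned, so stop here and let
            // the next poll() rejoin the group.
            return;
        } catch (KafkaException e) {
            // Uncategorized failure: surface it rather than swallowing it.
            throw e;
        }
    }
}
```

Note the long-timeout `poll(100)` matches the 0.9/0.10 API discussed in this thread; newer clients take a `Duration`.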
I will test this out but w
publish
a message within the first 5 minutes then I also saw a metadata request every
100ms.
Regards,
Phil Luckhurst
-Original Message-
From: Fumo, Vincent [mailto:vincent_f...@cable.comcast.com]
Sent: 27 April 2016 20:48
To: users@kafka.apache.org
Subject: Re: Metadata Request Loop
…metadata request does not
perform the heartbeat, but the ones that follow it do. This means that in our
case we call commitSync often enough that the metadata request does not cause
us an issue.
Thanks,
Phil Luckhurst
-Original Message-
From: vinay sharma [mailto:vinsharma.t...@gmail.com]
Sent: 2
…that it is in the
same process.
I guess a consumer rebalance will also trigger a metadata refresh, but what
else might?
Thanks
Phil Luckhurst
-Original Message-
From: vinay sharma [mailto:vinsharma.t...@gmail.com]
Sent: 26 April 2016 13:24
To: users@kafka.apache.org
Subject: RE: Detecting
…explicit 'rebalance in progress' type
exception rather than just a KafkaException would allow this to be easily
identified and handled.
The information about the metadata request is useful, I'll watch out for that
if we change our commit logic.
Thanks
Phil Luckhurst
-
…but a KafkaException
could be thrown for other reasons. A KafkaRebalanceException, or even a method
we could call on the consumer, would allow us to safely abort the current
processing loop knowing that the remaining messages would be picked up by
another consumer after the rebalance; that would stop us processing duplicates.
Thanks for all the responses. Unfortunately it seems that currently there is no
foolproof solution to this. It's not a problem with the stored offsets, as it
will happen even if I do a commitSync after each record is processed; it's the
unprocessed records in the batch that get processed twice.
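For reference, committing after each record looks roughly like this (a sketch against the 0.9-era Java client; `process()` is an assumed application helper). The per-partition map form of commitSync commits `record.offset() + 1`, the next offset to read. As noted above, even this does not prevent redelivery of records that poll() has already returned but that were not yet processed when a rebalance struck:

```java
import java.util.Collections;
import org.apache.kafka.clients.consumer.ConsumerRecord;
import org.apache.kafka.clients.consumer.OffsetAndMetadata;
import org.apache.kafka.common.TopicPartition;

// Sketch: commit each record's position as soon as it is processed.
for (ConsumerRecord<String, String> record : records) {
    process(record);  // assumed application helper
    consumer.commitSync(Collections.singletonMap(
            new TopicPartition(record.topic(), record.partition()),
            // Commit the offset of the NEXT record to consume.
            new OffsetAndMetadata(record.offset() + 1)));
}
// A rebalance mid-batch still hands the remaining records of this batch
// to another consumer, which processes them again.
```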
…ConsumerRebalanceListener onPartitionsRevoked() was called, then I
would call resume() and break out of the record processing loop so the main
poll() request is called again.
4. Call resume() at the end of the record processing loop.
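Those numbered steps (the first ones are truncated above) can be wired up roughly as below. This is a sketch against the 0.9-era Java client, not the poster's code: it assumes the earlier steps pause the assignment and call poll() during processing so the consumer keeps heartbeating, and that a ConsumerRebalanceListener registered at subscribe() time sets a flag from onPartitionsRevoked():

```java
import java.util.concurrent.atomic.AtomicBoolean;
import org.apache.kafka.clients.consumer.ConsumerRecord;
import org.apache.kafka.common.TopicPartition;

// Sketch: 'revoked' is assumed to be set to true by a
// ConsumerRebalanceListener's onPartitionsRevoked() callback.
AtomicBoolean revoked = new AtomicBoolean(false);
TopicPartition[] assigned =
        consumer.assignment().toArray(new TopicPartition[0]);
consumer.pause(assigned);      // assumed earlier step: stop fetching

for (ConsumerRecord<String, String> record : batch) {
    process(record);           // assumed application helper
    consumer.poll(0);          // heartbeat; returns nothing while paused
    if (revoked.get()) {
        break;                 // step 3: abandon the rest of the batch
    }
}
consumer.resume(assigned);     // step 4: resume before the next main poll()
```

In newer clients pause()/resume() take a `Collection<TopicPartition>` and poll() takes a `Duration`, so the exact signatures differ by version.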
Is that a viable solution to the problem, or is there a better way to do this?
> that we can figure out if more needs to be done?
> >
> > Ismael
> >
> > On Tue, Apr 12, 2016 at 2:52 PM, Ismael Juma wrote:
> >
> > > Note that this should be fixed as part of
> > > https://issues.apache.org/jira/browse/KAFKA-3306
6 14:08
>> To: users@kafka.apache.org
>> Subject: Re: KafkaProducer 0.9.0.1 continually sends metadata
>> requests
>>
>> Phil,
>> In our case this bug placed significant load on our brokers. We
>> raised a bug https://issues.apache.org/jira/browse/KAFKA-3358 to get this
>> resolved.
On Tue, Apr 12, 2016 at 5:39 AM Phil Luckhurst
wrote:
> With debug logging turned on we've sometimes seen our logs filling up
> with the Kafka producer sending metadata requests.
…worthy of a fix, but I thought I'd post it here in case
someone else hits the same problem.
Regards,
Phil Luckhurst