Does this help?
https://stackoverflow.com/questions/42546501/the-retention-period-for-offset-topic-of-kafka/44277227#44277227
On Mon, Oct 1, 2018 at 2:31 PM Kaushik Nambiar
wrote:
> Hello,
> Any updates on the issue?
>
>
> Regards,
> Kaushik Nambiar
>
> On Wed, Sep 26, 2018, 12:37 PM Kaushik Nambiar wrote:
Hello,
Any updates on the issue?
Regards,
Kaushik Nambiar
On Wed, Sep 26, 2018, 12:37 PM Kaushik Nambiar
wrote:
> Hello,
>
> I am using an SSL Kafka v 0.11.xx on a Linux operating system.
>
> I can see in the log files that the topic segments are getting deleted
> regularly.
> The concern I am having is for the system topic which is __consumer_offset; the segments are not getting deleted.
>>Offsets.retention.minutes (default is 7 days, not 24 hours).
In 0.11.x, the default value was 24 hours; it was changed to 7 days in
2.0 [1]. Kaushik mentions that they are using 0.11.xx.
1. http://kafka.apache.org/documentation/#upgrade_200_notable
On Mon, Sep 24, 2018 at 8:42 PM, Kaushik Nambiar
wrote:
Hello,
I am using an SSL Kafka v 0.11.xx on a Linux operating system.
I can see in the log files that the topic segments are getting deleted
regularly.
The concern I am having is for the system topic which is __consumer_offset
, the segments are not getting deleted.
So that's contributing to a lot of data accumulating on the disk.
Hello,
Thank you for your reply.
I have attached a text file containing the data within the
server.properties file.
Also I could see that it was the .log files within the __consumer_offset
topic that were sizing around 100 mb each.
So due to many such log files, the disk is getting maxed out.
Your views on this?
What are your settings for:
1) offsets.retention.check.interval.ms
2) offsets.retention.minutes (default is 7 days, not 24 hours).
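For reference, these two properties (plus the log cleaner switch discussed later in the thread) would appear in server.properties roughly as below; the values shown are the stock defaults, not a recommendation:

```properties
# Enable the log cleaner so compacted topics such as __consumer_offsets
# can actually be compacted (default is true since 0.9.0.1).
log.cleaner.enable=true

# How long committed offsets are retained. In 0.11.x the default is
# 1440 (24 hours); from 2.0 onward the default is 10080 (7 days).
offsets.retention.minutes=1440

# How often the broker checks for expired offsets (default 600000 ms).
offsets.retention.check.interval.ms=600000
```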
Also, did this occur even after you restarted any individual brokers?
Please share the server.properties "As is" for your case.
Regards,
On Mon, 24 Sep 2018 at 12:1
Hello,
I am using a Kafka with version 0.11.xx.
When I check the logs I can see the index segments for user defined topics
are getting deleted.
But I cannot find the indices for the __consumer_offset topic getting deleted.
That's causing around GBs of data getting accumulated in our persistent
disk.
Hello,
Please find the server.properties info.
The Kafka server is running on a Ubuntu 14.XX instance.
I have removed all the commented sections.
So the data I post below would be the only properties we are using for our
server.
All the other properties are commented out (so I guess the default values are in place).
Hello Bret,
About the properties you mentioned earlier.
I couldn't find any of these properties in my server.properties file.
So we were assuming the default values would be in place.
So I guess the default values for the above-mentioned properties are true
and 24 hours for Kafka 0.11.xx.
Regards,
K
Hello Bret,
Thank you for your reply.
For one consumer offset topic, I can see many log segments. Each log segment
is around 100 MB.
So due to many such log segments we are experiencing such data issues.
Your views on this one
Regards,
Kaushik Nambiar
On Wed, Sep 19, 2018, 10:37 AM Brett Rann wrote:
That's unusually large. Ours are around 32 KB-90 MB each. Initially curious if
you
have log.cleaner.enable=true and what offsets.retention.minutes is set to.
And yes, it can affect cluster performance. We had instances of consumer
outages
that were caused by bugged large consumer offset files, especially
Odd that the log compaction isn't working. What OS is your broker running
on and can you please post your server.properties?
On Wed, 19 Sep. 2018, 2:13 am Kaushik Nambiar,
wrote:
> >
> > Hello,
> > We have a Kafka 0.11.xx version setup.
> > So the system topic which is __consumer_offset, we are
Hello,
We have a Kafka 0.11.xx version setup.
For the system topic __consumer_offset, we are seeing many such
partitions like __consumer_offset-1, 2, 4.
One partition in particular now has log segments contributing
to 5 GB of data.
I had a look at our server.properties file b
Hi Users,
While using Apache Kafka with a Java API High Level Consumer, I get the
following error sometimes,
WARN pool-1-thread-4 Auto-commit of offsets
{my_topic-2=OffsetAndMetadata{offset=53847, metadata=''}} failed for group
my_consumer_group: Offset commit failed with a retriable exception. Yo
Hello Jakub,
Maybe you're interested in a feature that's not yet available but is being
proposed and discussed (on the dev mailing list) for the future; see
https://cwiki.apache.org/confluence/display/KAFKA/KIP-122%3A+Add+Reset+Consumer+Group+Offsets+tooling
Kind regards,
Stevo Slavic.
On Tue, Mar 21, 2017 at 4:43
What offset do you want to reset them to? The easiest way to adjust offsets
in 0.10 is to attach a consumer to the target topic-partition, seek to
the position you desire, and commit that new offset.
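A minimal sketch of that approach (topic name, group id, broker address, and the target offset of 42 are all hypothetical; requires the kafka-clients jar and a running broker):

```java
import java.util.Collections;
import java.util.Properties;
import org.apache.kafka.clients.consumer.KafkaConsumer;
import org.apache.kafka.clients.consumer.OffsetAndMetadata;
import org.apache.kafka.common.TopicPartition;

public class ResetOffset {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put("bootstrap.servers", "localhost:9092"); // hypothetical broker
        props.put("group.id", "my_consumer_group");       // group whose offset we reset
        props.put("enable.auto.commit", "false");
        props.put("key.deserializer",
                "org.apache.kafka.common.serialization.StringDeserializer");
        props.put("value.deserializer",
                "org.apache.kafka.common.serialization.StringDeserializer");

        try (KafkaConsumer<String, String> consumer = new KafkaConsumer<>(props)) {
            TopicPartition tp = new TopicPartition("my_topic", 0);
            // Manually assign the partition (no group rebalancing involved).
            consumer.assign(Collections.singletonList(tp));
            // Seek to the desired position, then commit it as the new offset.
            consumer.seek(tp, 42L); // 42 is an arbitrary example offset
            consumer.commitSync(Collections.singletonMap(tp,
                    new OffsetAndMetadata(consumer.position(tp))));
        }
    }
}
```

Make sure the consumer group is otherwise inactive when committing, or the running members will overwrite the offset on their next auto-commit.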
On Tue, Mar 21, 2017 at 9:56 AM, Jakub Stransky
wrote:
> Hello,
>
> just recently migrated to
Hello,
just recently migrated to using Kafka 0.10.1.0
I would like to reset the position for some consumers. I went through the
documentation and couldn't spot how to achieve that. All I got is
that v0.10 reduces the usage of ZooKeeper and clients have the possibility to
use different storage for maintaining offsets.
I don't control the consumer directly. I'm using Apache Storm with a Kafka spout
to read the topic.
On Thu, Feb 9, 2017 at 5:28 PM, Mahendra Kariya
wrote:
> You can use the seekToBeginning method of KafkaConsumer.
>
> https://kafka.apache.org/0100/javadoc/org/apache/kafka/clients/consumer/
> KafkaConsumer
You can use the seekToBeginning method of KafkaConsumer.
https://kafka.apache.org/0100/javadoc/org/apache/kafka/clients/consumer/KafkaConsumer.html#seekToBeginning(java.util.Collection)
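A short sketch of using it (topic, group, and broker address are hypothetical; needs the kafka-clients jar and a running broker):

```java
import java.util.Collections;
import java.util.Properties;
import org.apache.kafka.clients.consumer.KafkaConsumer;
import org.apache.kafka.common.TopicPartition;

public class ReadFromBeginning {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put("bootstrap.servers", "localhost:9092"); // hypothetical broker
        props.put("group.id", "test-group");              // hypothetical group
        props.put("key.deserializer",
                "org.apache.kafka.common.serialization.StringDeserializer");
        props.put("value.deserializer",
                "org.apache.kafka.common.serialization.StringDeserializer");

        try (KafkaConsumer<String, String> consumer = new KafkaConsumer<>(props)) {
            TopicPartition tp = new TopicPartition("my_topic", 0);
            consumer.assign(Collections.singletonList(tp));
            // Rewind to the earliest available offset; the seek is lazy and
            // only takes effect on the next poll/position call.
            consumer.seekToBeginning(Collections.singletonList(tp));
            consumer.poll(100); // fetches records starting from the beginning
        }
    }
}
```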
On Thu, Feb 9, 2017 at 7:56 PM, Igor Kuzmenko wrote:
> Hello, I'm using new consumer to read kafka topic. For
Hello, I'm using the new consumer to read a Kafka topic. For testing, I want to
read the same topic from the beginning multiple times, with the same consumer.
Before restarting the test, I want to delete the consumer offsets, so the
consumer starts reading from the beginning. Where can I find the offsets?
Offsets are now stored in Kafka rather than ZooKeeper. You can use the
--new-consumer option to check for Kafka-stored offsets.
Best Jan
On 01.02.2017 21:14, Ara Ebrahimi wrote:
Hi,
For a subset of our topics we get this error:
$KAFKA_HOME/bin/kafka-consumer-offset-checker.sh --group argyle-streams --topic
topic_name --zookeeper $ZOOKEEPERS
[2017-02-01 12:08:56,115] WARN WARNING: ConsumerOffsetChecker is deprecated and
will be dropped in releases following 0.9.0. Use ConsumerGroupCommand instead
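As the warning suggests, the replacement tool is kafka-consumer-groups.sh. A hedged sketch of the equivalent check (the --bootstrap-server flag is for recent Kafka versions; releases around 0.9-0.10 used --zookeeper or --new-consumer instead):

```shell
# Describe offsets and lag for one group
# (replacement for the deprecated ConsumerOffsetChecker)
bin/kafka-consumer-groups.sh --bootstrap-server localhost:9092 \
    --describe --group argyle-streams
```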
Producers were publishing data for the topic. And consumers were also
connected, sending heartbeat pings every 100 ms.
On Thu, 12 Jan 2017 at 17:15 Michael Freeman wrote:
> If the topic has not seen traffic for a while then Kafka will remove the
> stored offset. When your consumer reconnects K
If the topic has not seen traffic for a while then Kafka will remove the stored
offset. When your consumer reconnects Kafka no longer has the offset so it will
reprocess from earliest.
Michael
> On 12 Jan 2017, at 11:13, Mahendra Kariya wrote:
>
> Hey All,
>
> We have a Kafka cluster hosted
Hey All,
We have a Kafka cluster hosted on Google Cloud. There was some network
issue on the cloud and suddenly the offset for a particular consumer group
got reset to earliest, and all of a sudden the lag was in the millions. We
aren't able to figure out what went wrong. Has anybody faced the
same/similar issue?
Yes, offsets are unique per partition.
> I've observed that for example I had offset values equal to zero more times
> than the number of Kafka partitions.
Can you elaborate a little more what you observed?
-Zakee
> On Nov 14, 2016, at 10:06 AM, Dominik Safaric
> wrote:
>
> Hi all,
Hi all,
I've been wondering: is the offset obtained with ConsumerRecord<>().offset()
always unique for each partition? Asking because while I've been running a
consumer group, I've observed that for example I had offset values equal to
zero more times than the number of Kafka partitions.
I am confused as to why it is not connecting to some other broker if the
connection to this broker fails. Can you check if the broker is up?
The way it works is the consumer will send a ConsumerMetadata request to one
of the brokers, get the offset manager for its group, and then perform the
offset management.
Thanks,
Mayuresh
On Fri, May 8, 2015 at 9:22 AM, Meghana Narasimhan <
mnarasim...@bandwidth.com> wrote:
> Hi,
I'm using the Kafka 8.2.1 version (kafka_2.11-0.8.2.1) and the consumer
offset checker hangs indefinitely and does not return any results. I
enabled the debug for the tools and below are the debug statements as seen on
stdout. Any thoughts or inputs on this will be much appreciated.
Command used:
bin/kafka-consumer-offset-checker.sh --zookeeper localhost:2181 --group
test-consumer-group
or
./kafka-consumer-offset-checker.sh --zookeeper
broker1:2181,broker2:2181,broker3:2181 --group test-consumer-group
DEBUG Querying X.X.X.X:9092 to locate offset manager for
test-consumer-group
You can use it. Are you facing any problems using it?
On Thu, Apr 2, 2015 at 6:53 AM, Amreen Khan wrote:
> Is KafkaConsumerOffsetChecker still in development for 0.8.2?
>
> Amreen Khan
--
-Regards,
Mayuresh R. Gharat
(862) 250-7125
Is KafkaConsumerOffsetChecker still in development for 0.8.2?
Amreen Khan