hows_elk_Group generation 13 (__consumer_offsets-3) with 1 members
(kafka.coordinator.group.GroupCoordinator)
[data-plane-kafka-request-handler-3]
2021-07-22 10:14:33,985 INFO [GroupCoordinator 1]: Assignment received from
leader for group shows_elk_Group for generation 13. The group has 1
members, 0 of which are static. (kafka.coordinator.group.GroupCoordinator)
[data-plane-kafka-request-handler-2]
```
I don't know why my consumer groups are rebalancing. Please help me to
resolve this issue. And let me know if you require any further information.
Thanks and regards
Shreyas
31057 - Silea (TV) - ITALY
phone: +39 0422 1836521
l.rov...@reply.it
www.reply.it
-Original Message-
From: mangat rai
Sent: 6 May, 2021 11:51 AM
To: users@kafka.apache.org
Subject: Re: kafka-consumer-groups option
Hey Lorenzo Rovere,
Consider the case where you want to reprocess all the data. Let's say your
process had a bug. You fixed it and now you want to reprocess everything to
produce the correct output.
Similarly, there can be other use cases for resetting the consumer offsets
and reprocessing the in
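For reference, a minimal sketch of the kind of reset being described above; the group and topic names and the bootstrap address are placeholders, and the group must have no active members when the reset runs:
```
# Preview the reset without changing anything
bin/kafka-consumer-groups.sh --bootstrap-server localhost:9092 \
  --group my-group --topic my-topic \
  --reset-offsets --to-earliest --dry-run

# Apply it, so the group reprocesses the topic from the beginning
bin/kafka-consumer-groups.sh --bootstrap-server localhost:9092 \
  --group my-group --topic my-topic \
  --reset-offsets --to-earliest --execute
```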
Hi,
I'm playing with the kafka-consumer-groups.sh command.
I wanted to ask about the utility of the --to-current option, used to reset offsets of
a consumer group to current offset. The thing I don't understand is in which
scenario I would want to use this option. If I'm already at the current offset,
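For readers following the thread, the option in question is invoked roughly like this (names are placeholders); as the question notes, resetting to the current offset leaves the committed position where it already is:
```
bin/kafka-consumer-groups.sh --bootstrap-server localhost:9092 \
  --group my-group --topic my-topic \
  --reset-offsets --to-current --execute
```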
That seems to be a bug indeed, I will reply on the ticket.
Guozhang
On Thu, Sep 24, 2020 at 8:03 PM Fu, Tony wrote:
> Is anyone seeing this problem as well (
> https://issues.apache.org/jira/browse/KAFKA-10513)? I think it also
> happens when new topics are created within the subscription pattern
Is anyone seeing this problem as well
(https://issues.apache.org/jira/browse/KAFKA-10513)? I think it also happens
when new topics are created within the subscription pattern.
Tony
Hi Ryanne/Josh,
I'm working on an active-active MirrorMaker setup and am translating consumer
offsets from source cluster A to dest cluster B. Any pointers would be helpful.
Cluster A
Cluster Name--A
Topic name: testA
Consumer group name: mm-testA-consumer
Cluster -B
Cluster Name--B
Topic name: sou
:3:BAZ can be
> processed in any order?
>
> I don’t think there is a way to do that with topics.
> -Dave
>
>
> From: Andre Mermegas
> Reply-To: "users@kafka.apache.org"
> Date: Wednesday, September 2, 2020 at 4:06 PM
> To: "users@kafka.apache.org"
To: "users@kafka.apache.org"
Subject: [External] key and subkey sequential processing across competing
consumer groups?
Hi All,
New to kafka and would love some feedback on how to think about a solution for
this kind of flow:
So, sequencing must be maintaine
configured to handle only Ys of its type. All Xs must be
consumed sequentially, across distributed consumer groups, even those fanned
out with a subkey of Y.
How do I keep processing sequential (FIFO) across them? I know I can use a key
to sequence by X in a topic partition for sequential processing, but
Josh, make sure there is a consumer in cluster B subscribed to A.topic1.
Wait a few seconds for a checkpoint to appear upstream on cluster A, and
then translateOffsets() will give you the correct offsets.
By default MM2 will block consumers that look like kafka-console-consumer,
so make sure you sp
Thanks again Ryanne, I didn't realize that MM2 would handle that.
However, I'm unable to mirror the remote topic back to the source cluster
by adding it to the topic whitelist. I've also tried to update the topic
blacklist and remove ".*\.replica" (since the blacklists take precedence
over the whi
Josh, if you have two clusters with bidirectional replication, you only get
two copies of each record. MM2 won't replicate the data "upstream", cuz it
knows it's already there. In particular, MM2 knows not to create topics
like B.A.topic1 on cluster A, as this would be an unnecessary cycle.
> is
Sorry, correction -- I am realizing now it would be 3 copies of the same
topic data as A.topic1 has different data than B.topic1. However, that
would still be 3 copies as opposed to just 2 with something like topic1 and
A.topic1.
As well, if I were to explicitly replicate the remote topic back to
Thanks for the clarification Ryanne. In the context of active/active
clusters, does this mean there would be 6 copies of the same topic data?
A topics:
- topic1
- B.topic1
- B.A.topic1
B topics:
- topic1
- A.topic1
- A.B.topic1
Out of curiosity, is there a reason for MM2 not emitting checkpoint
Josh, yes it's possible to migrate the consumer group back to the source
topic, but you need to explicitly replicate the remote topic back to the
source cluster -- otherwise no checkpoints will flow "upstream":
A->B.topics=test1
B->A.topics=A.test1
After the first checkpoint is emitted upstream,
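A minimal sketch of an mm2.properties along the lines Ryanne describes above; the bootstrap servers are placeholders and the topic names are the ones from this thread:
```
cat > mm2.properties <<'EOF'
clusters = A, B
A.bootstrap.servers = a-kafka:9092
B.bootstrap.servers = b-kafka:9092
# Replicate test1 from A to B, and the remote topic A.test1 back to A,
# so checkpoints can flow "upstream"
A->B.enabled = true
A->B.topics = test1
B->A.enabled = true
B->A.topics = A.test1
EOF

bin/connect-mirror-maker.sh mm2.properties
```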
Hi there,
I'm currently exploring MM2 and having some trouble with the
RemoteClusterUtils.translateOffsets() method. I have been successful in
migrating a consumer group from the source cluster to the target cluster,
but was wondering how I could migrate this consumer group back to the
original so
Hi Liam,
Thanks for the response.
As we are using Spark Structured Streaming, the commit won't happen on the Kafka
side. For checkpointing, we are using HDFS.
We expected the kafka-consumer-groups.sh CLI to return LOG-END-OFFSET
with partition details. However, it didn't display anyth
Hi Ann,
It's common practice in many Spark Streaming apps to store offsets external
to Kafka. Especially when checkpointing is enabled.
Are you sure that the app is committing offsets to Kafka?
Kind regards,
Liam Clarke
On Thu, 9 Jul. 2020, 8:00 am Ann Pricks, wrote:
> Hi Ricardo,
>
> Thanks
Hi Ricardo,
Thanks for your kind response.
As per your suggestion, I have enabled trace and PFB the content of the log
file.
Log File Content:
[2020-07-08 18:48:08,963] INFO Registered kafka:type=kafka.Log4jController
MBean (kafka.utils.Log4jControllerRegistration$)
[2020-07-08 18:48:09,244]
users@kafka.apache.org" , Ann Pricks
Subject: Re: Consumer Groups Describe is not working
Ann,
You can try executing the CLI `kafka-consumer-groups` with TRACE enabled to dig a
little deeper into the problem. In order to do this you need to:
1. Make a copy of your `$KAFKA_HOME/etc/kafka/tools-log4j.
ay, 3 July 2020 at 4:10 PM
To: "users@kafka.apache.org"
Subject: Consumer Groups Describe is not working
Hi Team,
Today, in our production cluster, we faced an issue with Kafka (old offsets
were getting pulled from the Spark streaming application) and couldn'
Ann,
You can try executing the CLI `kafka-consumer-groups` with TRACE enabled
to dig a little deeper into the problem. In order to do this you need to:
1. Make a copy of your `$KAFKA_HOME/etc/kafka/tools-log4j.properties` file
2. Set `root.logger=TRACE,console`
3. Run `export
KAFKA_OPTS
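Putting the (truncated) steps above together, a sketch of what the full sequence could look like; the paths and the exact logger property name are assumptions that depend on the distribution:
```
# 1. Copy the tools log4j config
cp $KAFKA_HOME/etc/kafka/tools-log4j.properties /tmp/tools-log4j-trace.properties

# 2. In the copy, raise the root logger to TRACE (e.g. root.logger=TRACE,console)

# 3. Point the tool at the modified config and re-run the describe
export KAFKA_OPTS="-Dlog4j.configuration=file:/tmp/tools-log4j-trace.properties"
bin/kafka-consumer-groups.sh --bootstrap-server localhost:9092 \
  --describe --group my-group
```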
Hi Team,
Any update on this.
Regards,
Pricks
From: Ann Pricks
Date: Friday, 3 July 2020 at 4:10 PM
To: "users@kafka.apache.org"
Subject: Consumer Groups Describe is not working
Hi Team,
Today, in our production cluster, we faced an issue with Kafka (old offsets were
getting p
Hi Team,
Today, in our production cluster, we faced an issue with Kafka (old offsets were
getting pulled from the Spark streaming application) and couldn't debug the issue
using the kafka_consumer_group.sh CLI.
Whenever we execute the below command to list the consumer groups, it is
working
Hello all,
I recently had the experience of using the script
kafka-consumer-groups.sh from version 2.x of Kafka on a cluster of the
same version, which was serving consumers using version 0.8 or earlier
libraries and hence stored the groups and offsets in ZooKeeper.
On --list of the consumer groups
> > Invalid negative offset" (see stack trace below).
> >
> > We found out that the user is using Azure EventHub For Kafka and that
> > partition offsets for consumer groups are initialized with offset -1.
> I've
> > never heard of this practice before so I th
ve reported Kafka
> Lag Exporter crashing due to an internal AdminClient exception caused by
> the response from OffsetFetchRequest: "java.lang.IllegalArgumentException:
> Invalid negative offset" (see stack trace below).
>
> We found out that the user is using Azure Event
Lag Exporter crashing due to an internal AdminClient exception caused by
the response from OffsetFetchRequest: "java.lang.IllegalArgumentException:
Invalid negative offset" (see stack trace below).
We found out that the user is using Azure EventHub For Kafka and that
partition offsets fo
Hi everyone,
we use kafka 2.3.0 from the confluent-kafka-2.11 Debian package on Debian 10.
When we want to set an offset of a consumer to a datetime, we get a timeout
error even if we use the timeout switch of the kafka-consumer-groups script:
> kafka-consumer-groups --bootstrap-ser
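The (truncated) command above would look roughly like this; the group, topic, timestamp and timeout are placeholders, --to-datetime takes an ISO-8601 timestamp, and --timeout is in milliseconds:
```
bin/kafka-consumer-groups.sh --bootstrap-server localhost:9092 \
  --group my-group --topic my-topic \
  --reset-offsets --to-datetime 2019-06-01T00:00:00.000 \
  --timeout 60000 --execute
```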
01:13, M. Manna wrote:
> Hello,
>
> I have recently had some message loss for a consumer group under kafka
> 2.3.0.
>
> The client I am using is still in 2.2.0. Here is how the problem can be
> reproduced,
>
> 1) The messages were sent to 4 consumer groups, 3 of them were
Hello,
I have recently had some message loss for a consumer group under kafka
2.3.0.
The client I am using is still in 2.2.0. Here is how the problem can be
reproduced,
1) The messages were sent to 4 consumer groups, 3 of them were live and 1
was down
2) When the consumer group came back online
Hi,
I am using Kafka version 0.10.1 on HDP 2.6 with Kerberos enabled. When I
try to get the list of consumer groups with the command below, I get an
error message. Please advise.
[kafka@XXX ~]$ /usr/hdp/current/kafka-broker/bin/kafka-consumer-groups.sh
--bootstrap-server :6667,
in which there are 1000
> topics with single partition. Each topic will be consumed by multiple
> consumer groups, say 100 in parallel. Therefore totally there can be
> 1000*100 consumer groups consuming from kafka in parallel.
>
> My concern is whether this would have any per
Hi,
I have a doubt regarding the use of the Kafka consumer API in Kafka
version 0.10.0.1.
Consider I am having a three node kafka cluster in which there are 1000
topics with single partition. Each topic will be consumed by multiple
consumer groups, say 100 in parallel. Therefore totally there can be
> -Original Message-
> From: Vincent Maurin
> Sent: Friday, 29 March 2019 15:24
> To: users@kafka.apache.org
> Subject: Re: Offsets of deleted consumer groups do not get deleted
> correctly
>
> Hi,
>
> You should keep the policy compact for the topic __
: Re: Offsets of deleted consumer groups do not get deleted correctly
Hi,
You should keep the policy compact for the topic __consumer_offsets. This topic
stores, for each group/topic/partition, the last consumed offset. As only the latest
message for a group/topic/partition is relevant, the policy
delete.retention.ms delay is
expired
Best regards
On Fri, Mar 29, 2019 at 2:16 PM Claudia Wegmann wrote:
> Hey there,
>
> I've got the problem that the "__consumer_offsets" topic grows pretty big
> over time. After some digging, I found offsets for consumer groups that
>
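A small sketch, following the advice above, of how one might check that __consumer_offsets is still compacted and inspect or adjust the tombstone retention; the broker address and the 24-hour value are only examples, and the --bootstrap-server form of kafka-configs.sh assumes a reasonably recent broker:
```
# Confirm the offsets topic is still compacted
bin/kafka-topics.sh --bootstrap-server localhost:9092 \
  --describe --topic __consumer_offsets

# Adjust how long tombstones for deleted groups are kept after compaction
bin/kafka-configs.sh --bootstrap-server localhost:9092 \
  --entity-type topics --entity-name __consumer_offsets \
  --alter --add-config delete.retention.ms=86400000
```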
Hey there,
I've got the problem that the "__consumer_offsets" topic grows pretty big over
time. After some digging, I found offsets for consumer groups that were deleted
a long time ago still being present in the topic. Many of them are offsets for
console consumers, that ha
.
From: Eric Azama
Sent: Thursday, January 3, 2019 4:31 PM
To: users@kafka.apache.org
Subject: Re: Programmatic method of setting consumer groups offsets
Adding on to Ryanne's point, if subscribe() isn't giving your consumer all
of the partitions for a topic, that implies there are st
Adding on to Ryanne's point, if subscribe() isn't giving your consumer all
of the partitions for a topic, that implies there are still active
consumers running for that group. The consumer groups CLI command does not
allow you to modify offsets for consumer groups that have active co
hat is implemented that can modify stored offsets
> in Kafka? For example, I'm looking to set a consumer groups stored offsets
> in Kafka to a specific value. I know there is the `kafka-consumer-groups`
> CLI command, but I'm looking for a way to do so from an application
> (with
Hi,
Is there a guide or an existing API that can modify stored offsets in
Kafka? For example, I'm looking to set a consumer group's stored offsets in
Kafka to a specific value. I know there is the `kafka-consumer-groups` CLI
command, but I'm looking for a way to do
So can we roll segments more often? If the segments are small enough, the
probability of the messages in a single segment reaching expiry will be higher.
However, will frequent rolling of segments cause some side effects, like
increased CPU or memory usage?
On Tue, May 29, 2018 at 11:52 PM Matthias J. Sa
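For what it's worth, a sketch of how segment rolling can be made more frequent for a single topic; the topic name and the values are placeholders, and smaller segments do mean more files and more frequent roll work for the broker:
```
# Roll a new segment at least every 6 hours or every 100 MB, whichever comes first
bin/kafka-configs.sh --bootstrap-server localhost:9092 \
  --entity-type topics --entity-name my-topic \
  --alter --add-config segment.ms=21600000,segment.bytes=104857600
```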
About the docs:
Config `cleanup.policy` states:
> A string that is either "delete" or "compact".
> This string designates the retention policy to
> use on old log segments. The default policy
> ("delete") will discard old segments when their
> retention time or size limit has been reached. The
In one of my consumer applications, I saw that 3 topics with 10 partitions
each were being consumed by 5 different consumers in the same consumer
group, and this application is seeing a lot of rebalances. Hence, I was
wondering about this.
On Tue, May 29, 2018 at 1:57 PM M. Manna wrote:
> topic
A topic and a consumer group have a one-to-many relationship. Each topic partition
will have its messages guaranteed to be in order. Consumer rebalance issues
can be adjusted based on the backoff and other params. What exactly is your
concern regarding consumer groups and rebalancing?
On 29 May 2018 at 08:
Hello,
Is it wise to use a single consumer group for multiple consumers who
consume from many different topics? Can this lead to frequent rebalance
issues?
you
> have at least 1 consumer per group) for 1 partition means 1K TCP connection
> which means that they have to share the available bandwidth.
> Why do you have so many consumer groups? Do you basically want to
> multicast?
>
> Viktor
>
>
> On Tue, Nov 14, 2017 at 4:18 PM, A
Hi Jeff,
I think it's also worth considering that 1K consumers (implying that you
have at least 1 consumer per group) for 1 partition means 1K TCP connections,
which means that they have to share the available bandwidth.
Why do you have so many consumer groups? Do you basically want to mult
cture that would result in 5K-10K consumer
groups consuming from a single topic that has one partition.
What are the reasonable limits for the max number of consumer groups per
partition and per broker?
Can a single broker be the group coordinator for 1K+ consumer groups?
--
*Jeff W
We're considering an architecture that would result in 5K-10K consumer
groups consuming from a single topic that has one partition.
What are the reasonable limits for the max number of consumer groups per
partition and per broker?
Can a single broker be the group coordinator for 1K+ con
tting-started-with-the-new-apache-kafka-0-9-consumer-client/
--Vahid
From: Michael Scofield
To: users@kafka.apache.org
Date: 11/09/2017 10:43 PM
Subject:Questions about kafka-consumer-groups output
Hello all:
I’m using Kafka version 0.11.0.1, with the new Java consumer API
Hello all:
I’m using Kafka version 0.11.0.1, with the new Java consumer API (same
version), and commit offsets to Kafka.
I want to get the consumer lags, so I use the following operation command:
$ bin/kafka-consumer-groups.sh --bootstrap-server localhost:9093 --describe
--group foo.test.cons
Hi All!
I am looking for a way to get a list of all the consumer groups that have
offsets stored for a particular topic.
I was thinking of making a request to describe all consumer groups, then
filtering for the groups with offsets in the topic I care about.
filtering for the groups with offsets in the topic I care about (see the sketch
below). I noticed, however, that while
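A rough sketch of that describe-and-filter approach; the topic name and bootstrap address are placeholders, and since it describes every group it can be slow on clusters with many groups:
```
BOOTSTRAP=localhost:9092
TOPIC=my-topic

for g in $(bin/kafka-consumer-groups.sh --bootstrap-server "$BOOTSTRAP" --list); do
  # -w: match the topic name as a whole word anywhere in the describe output
  if bin/kafka-consumer-groups.sh --bootstrap-server "$BOOTSTRAP" \
       --describe --group "$g" 2>/dev/null | grep -wq "$TOPIC"; then
    echo "$g has offsets for $TOPIC"
  fi
done
```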
umers: 0-50 single thread consumers all in different consumer
groups
- Broker: 3
- CPU: 2 * E5 2690v4(2.6GHz 14C28T)
- RAM: 384GiB
- HD: 2 * RAID5(6+1, 1.2T), fio direct write throughput > 800MiB/s
- Network: 10GbE
Test case:
- replication factor: 3
- producer acks=all
- empty topic
Here are the details (single partition fan-out):
- Topic: has only 1 partition
- Producer: single thread, max.in.flight.requests.per.connection=1 (we
need in-order delivery), batch.size=40
- Consumers: 0-50 single thread consumers all in different consumer g
`through` = `to` + `stream` operation. So, the consumer-groups command is
showing the "fname-stream" topic.
Use `to` if you just want to write the output to the topic.
-- Kamal
On Mon, Aug 21, 2017 at 12:05 PM, Sachin Mittal wrote:
> Folks any thoughts on this.
> Basically I
Folks, any thoughts on this?
Basically I want to know which topics the consumer group command reports
on.
I always thought it would only be the topics the streams application consumes
from, and not the ones it writes to.
Any inputs, or any part of the code I can look at to understand this better,
would be helpful.
Th
We are also collecting consumer group metrics from Kafka - we didn't want
to add extra unnecessary dependencies (such as burrow, which is also
overkill for what we need), so we just run a script every minute on the
brokers that parses the output of kafka-consumer-groups.sh and uploads it
to an http
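A toy version of that kind of parsing (not the script from the thread): it finds the LAG column by name in the describe output and sums it for one group; shipping the number anywhere is left out, and the group name is a placeholder:
```
GROUP=my-group
bin/kafka-consumer-groups.sh --bootstrap-server localhost:9092 \
  --describe --group "$GROUP" 2>/dev/null |
awk '/LAG/ && !col { for (i = 1; i <= NF; i++) if ($i == "LAG") col = i; next }
     col && $col ~ /^[0-9]+$/ { total += $col }
     END { print "total lag:", total + 0 }'
```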
Hello,
Could you tell me if Burrow or Remora is compatible with SSL Kafka
clusters?
Gabriel.
2017-08-16 15:39 GMT+02:00 Gabriel Machado :
> Hi Jens and Ian,
>
> Very usefuls projects :).
> What's the difference between the 2 softwares ?
> Do they support kafka ssl clusters ?
>
> Thanks,
> Gab
Hi Jens and Ian,
Very useful projects :).
What's the difference between the two tools?
Do they support Kafka SSL clusters?
Thanks,
Gabriel.
2017-08-13 3:29 GMT+02:00 Ian Duffy :
> Hi Jens,
>
> We did something similar to this at Zalando.
>
> https://github.com/zalando-incubator/remora
>
>
Hi friends,
Anyone noticed that calling:
./bin/kafka-consumer-groups.sh --bootstrap-server localhost:9092 --describe
--group group1
Is way slower than calling:
./bin/kafka-consumer-groups.sh --zookeeper zk.host:2181 --describe --group
group2
In my case the first command takes about 4 min, the second
Hi Jens,
We did something similar to this at Zalando.
https://github.com/zalando-incubator/remora
It effectively exposes the kafka consumer group command as an HTTP
endpoint.
On 12 August 2017 at 16:42, Subhash Sriram wrote:
> Hi Jens,
>
> Have you looked at Burrow?
>
> https://github.
Hi Jens,
Have you looked at Burrow?
https://github.com/linkedin/Burrow/blob/master/README.md
Thanks,
Subhash
Sent from my iPhone
> On Aug 12, 2017, at 8:55 AM, Jens Rantil wrote:
>
> Hi,
>
> I am one of the maintainers of prometheus-kafka-consumer-group-exporter[1],
> which exports consumer
Hi,
I am one of the maintainers of prometheus-kafka-consumer-group-exporter[1],
which exports consumer group offsets and lag to Prometheus. The way we
currently scrape this information is by periodically executing
`kafka-consumer-groups.sh --describe` for each group and parse the output.
Recently
> 2017-07-28 18:28 GMT+02:00 Vahid S Hashemian >:
> >
> > > Hi Gabriel,
> > >
> > > I have yet to experiment with enabling SSL for Kafka.
> > > However, there are some good documents out there that seem to cover it.
> > > Examples:
> >
Hi,
I am executing the following command:
bin/kafka-consumer-groups.sh --new-consumer --bootstrap-server
localhost:9092 --describe --group new-part-advice
It gives output like:
GROUP  TOPIC  PARTITION  CURRENT-OFFSET  LOG-END-OFFSET  LAG  OWNER
new-par
;
Linkedin : paolopatierno<http://it.linkedin.com/in/paolopatierno>
Blog : DevExperience<http://paolopatierno.wordpress.com/>
From: Tom Bentley
Sent: Thursday, August 3, 2017 10:47 AM
To: users@kafka.apache.org
Subject: Re: Problems with SSL and consumer gro
ey have been consumed or not.
>
> I noticed that I cannot remove the consumer from a group because
> authorizer.class.name=kafka.security.auth.SimpleAclAuthorizer then denies
> the consumer's requests.
>
> Is there a way to view all messages, while still using consumer
have been consumed or not.
I noticed that I cannot remove the consumer from a group because
authorizer.class.name=kafka.security.auth.SimpleAclAuthorizer then denies
the consumer's requests.
Is there a way to view all messages, while still using consumer groups, an
access control list, an
that seem to cover it.
> > Examples:
> > *
> > https://www.confluent.io/blog/apache-kafka-security-
> > authorization-authentication-encryption/
> > *
> > http://coheigea.blogspot.com/2016/09/securing-apache-kafka-
> > broker-part-i.html
> >
> >
www.confluent.io/blog/apache-kafka-security-
> authorization-authentication-encryption/
> *
> http://coheigea.blogspot.com/2016/09/securing-apache-kafka-
> broker-part-i.html
>
> Is there anything specific about the SSL and consumer groups that you are
> having issues
-apache-kafka-broker-part-i.html
Is there anything specific about the SSL and consumer groups that you are
having issues with?
Thanks.
--Vahid
From: Gabriel Machado
To: users@kafka.apache.org
Date: 07/28/2017 08:40 AM
Subject:Re: kafka-consumer-groups tool with SASL_PLAINTEXT
5/31/kafka-acls-in-practice/
I think it covers what you'd like to achieve. If not, please advise.
Thanks.
--Vahid
From: Meghana Narasimhan
To: users@kafka.apache.org
Date: 07/24/2017 01:56 PM
Subject: kafka-consumer-groups tool with SASL_PLAINTEXT
Hi,
What is the correct
Thanks, Vahid! Nice documentation. All the tools were working fine except
for the kafka-consumer-groups --list which is what I was struggling to get
working. Realized I had missed the cluster permissions for the user. It
looks good now.
Thanks,
Meghana
On Mon, Jul 24, 2017 at 5:14 PM, Vahid S
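For anyone hitting the same thing, a sketch of the two pieces discussed in this thread: a client config for the SASL_PLAINTEXT listener and the cluster-level Describe permission that --list needs. The principal, password, ports and paths are placeholders, and the exact ACL set depends on the authorizer and Kafka version:
```
cat > client.properties <<'EOF'
security.protocol=SASL_PLAINTEXT
sasl.mechanism=PLAIN
sasl.jaas.config=org.apache.kafka.common.security.plain.PlainLoginModule required \
  username="myuser" password="mypassword";
EOF

# Grant the user Describe on the cluster so --list works
bin/kafka-acls.sh --authorizer-properties zookeeper.connect=localhost:2181 \
  --add --allow-principal User:myuser --operation Describe --cluster

bin/kafka-consumer-groups.sh --bootstrap-server broker:9093 \
  --command-config client.properties --list
```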
users@kafka.apache.org
Date: 07/24/2017 01:56 PM
Subject:kafka-consumer-groups tool with SASL_PLAINTEXT
Hi,
What is the correct way to use the kafka-consumer-groups tool with
SASL_PLAINTEXT security enabled ?
The tool seems to work fine with PLAINTEXT port but not with
SASL_PLAINTEXT. Can
Hi,
What is the correct way to use the kafka-consumer-groups tool with
SASL_PLAINTEXT security enabled ?
The tool seems to work fine with the PLAINTEXT port but not with
SASL_PLAINTEXT. Can it be configured to work with SASL_PLAINTEXT? If so,
what permissions have to be enabled for it?
Thanks,
Meghana
It will not interfere. And this is independent of manual partition
assignment or topic subscription. If you have different consumer
group IDs, they are independent of each other.
-Matthias
On 7/12/17 11:21 PM, venkata sastry akella wrote:
> Hi
> Can I have a one consumer group with automatic subcrip
Hi
Can I have one consumer group with automatic subscription and one group
with manual assignment of partitions? To explain the scenario more: I have
a topic1, and several consumer processes are using group1, and each of the
consumers in the group got partitions assigned automatically by Kafka.
For
which wraps
librdkafka.
[bin] $ ./kafka-consumer-groups --bootstrap-server localhost:9092 --describe
--group node-red-rdkafka-groupid
Note: This will only show information about consumers that use the Java
consumer API (non-ZooKeeper-based consumers).
TOPIC PARTITION
ding to a topic partition even
> if there is no active consumer consuming from it.
>
> I hope this helps.
> --Vahid
>
>
>
>
> From: Jerry George
> To: users@kafka.apache.org
> Date: 05/26/2017 06:55 AM
> Subject:Trouble with querying offsets when usin
rying offsets when using new consumer
groups API
Hi
I had a question about the new consumer APIs.
I am having trouble retrieving the offsets once the consumers are
*disconnected* when using new consumer v2 API. Following is what I am
trying to do,
*bin/kafka-consumer-groups.sh -new-consumer --boot
It is definitely expected behavior that the new consumer version of
kafka-consumer-groups.sh --describe only returns metadata for ‘active’ members.
It will print an error message if the consumer group you provide has no active
members.
https://github.com/confluentinc/kafka/blob/trunk/core/src/ma
Hi Abhimanyu,
No, actually waiting for someone with operational experience to reply on
the list. Thank you for bumping the question though :)
If anyone on the list has experience increasing the retention, or if this is
expected behaviour, could you kindly suggest an alternative?
Regards,
Jerry
On S
Hi Jerry,
I am also facing the same issue. Did you find a solution?
Regards,
Abhimanyu
On Fri, May 26, 2017 at 7:24 PM, Jerry George wrote:
> Hi
>
> I had question about the new consumer APIs.
>
> I am having trouble retrieving the offsets once the consumers are
> *disconnected* when using
Hi
I had a question about the new consumer APIs.
I am having trouble retrieving the offsets once the consumers are
*disconnected* when using new consumer v2 API. Following is what I am
trying to do,
*bin/kafka-consumer-groups.sh -new-consumer --bootstrap-server kafka:9092
--group group --describe*
all consumer groups. For the offset of each
consumer group, we found we can use the position method.
Here is our code:
import confluent_kafka

# conf is assumed to hold at least bootstrap.servers and group.id
consumer = confluent_kafka.Consumer(conf)
consumer.subscribe(['xxx'])
p = confluent_kafka.TopicPartition("xxx", 1)
print(consumer.position([p]))
result:
[Topi
Hi All,
We're using kafka 0.10.0.0 and just encountered a weird issue I'd be happy
to get some help with.
Seems like we can't query active consumer groups using the
kafka-consumer-groups.sh script. Even more, listing all the active consumer
groups usually results in empty resp
Is it possible that using the same group name for two topics could cause a
conflict?
I have a situation where I'm seeing vast numbers of records (more than 2x)
get duplicated in a topic. I was looking at consumer lag using
'kafka-consumer-groups ... --new-consumer' and noticed that I h
return anything?
Thanks,
Sumit
On Wed, Feb 8, 2017 at 10:38 PM, R Krishna wrote:
> You can run the same class executed in the scripts.
> On Feb 8, 2017 8:50 AM, "Sumit Maheshwari" wrote:
>
> > Hi,
> >
> > Currently in 0.10 we can get the information abou
You can run the same class executed in the scripts.
On Feb 8, 2017 8:50 AM, "Sumit Maheshwari" wrote:
> Hi,
>
> Currently in 0.10 we can get the information about the consumer groups and
> respective lag using the kafka-consumer-groups.sh.
> Is there a way to achieve th
Hi,
Currently in 0.10 we can get the information about the consumer groups and
respective lag using the kafka-consumer-groups.sh.
Is there a way to achieve the same programmatically in Java?
Thanks,
Sumit
Jeff Widman wrote:
> We hit an error in some custom monitoring code for our Kafka cluster where
> the root cause was zookeeper was storing for some partition offsets for
> consumer groups, but those partitions didn't actually exist on the brokers.
>
> Apparently in the past, so
We hit an error in some custom monitoring code for our Kafka cluster where
the root cause was that ZooKeeper was storing offsets for some partitions for
consumer groups, but those partitions didn't actually exist on the brokers.
Apparently in the past, some colleagues needed to reset a stuck cl
er 10.211.16.215 --group groupX --describe,
> I get the below error. So none of the consumer groups seem to have a
> coordinator now.
> Error while executing consumer group command This is not the correct
> coordinator for this group.
> org.apache.kafka.common.errors.NotCoordinatorForGr
The problem still persists. But note that I was running the old (ZooKeeper-based)
consumer to describe consumers. Running ./kafka-consumer-groups.sh
kafka-groups.sh --bootstrap-server 10.211.16.215 --group groupX --describe,
I get the below error. So none of the consumer groups seem to have a
coordinator now
he
> compaction is still in progress. And I get this for most consumer groups.
>
> Any clues how to fix this ?
>
> Regards,
> Sathya
>
>
>
>
>
s for most consumer groups.
Any clues how to fix this ?
Regards,
Sathya