Dear Kafka experts, could anyone who has this data share the details, please?
On Wed, Apr 3, 2024 at 3:42 PM Kafka Life wrote:
Hi Kafka users,
Does anyone have a document or PPT that showcases the capabilities of
Kafka along with any cost management capability?
I have a customer who is still using IBM MQ and RabbitMQ. I want the
client to consider Kafka for messaging and data streaming. I wanted to seek
your expert
Hi Stan,
I wanted to share some updates on the bugs you mentioned earlier.
- KAFKA-14616: I've reviewed and tested the PR from Colin and have observed that
the fix works as intended.
- KAFKA-16162: I reviewed Proven's PR and found some gaps in the proposed fix.
I've therefore raised https
Apologies, I mentioned KAFKA-16157 twice in my previous message. I intended to
mention KAFKA-16195,
with the PR at https://github.com/apache/kafka/pull/15262, as the second JIRA.
Thanks,
Gaurav
> On 26 Jan 2024, at 15:34, ka...@gnarula.com wrote:
Hi Stanislav,
Thanks for bringing these JIRAs/PRs up.
I'll be testing the open PRs for KAFKA-14616 and KAFKA-16162 this week, and I
hope to have some feedback by Friday. I gather the latter JIRA is marked as a
WIP by Proven and he's away; I'll try to build on his work in the meantime.
Dear Kafka Experts,
How can we check for a particular offset number in Apache Kafka version 3.2.3?
Could you please shed some light on this?
The kafka-console-consumer tool is throwing a class-not-found error.
./kafka-run-class.sh kafka.tools.ConsumerOffsetChecker
--topic your-topic
--group
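In Kafka 3.x the `kafka.tools.ConsumerOffsetChecker` class no longer exists (it was deprecated and later removed), hence the class-not-found error. A sketch of the replacement tooling; the broker address, topic, and group names below are placeholders:

```shell
# Describe a consumer group: shows current offset, log-end offset and lag
# per partition (replaces ConsumerOffsetChecker).
./kafka-consumer-groups.sh --bootstrap-server localhost:9092 \
  --describe --group your-group

# Earliest (-2) / latest (-1) offsets per partition of a topic
# (older releases use --broker-list instead of --bootstrap-server):
./kafka-run-class.sh kafka.tools.GetOffsetShell \
  --bootstrap-server localhost:9092 --topic your-topic --time -1
```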
Many thanks, Samuel. Will go through this.
On Tue, Apr 25, 2023 at 9:03 PM Samuel Delepiere <
samuel.delepi...@celer-tech.com> wrote:
> Hi,
>
> I use a combination of the Prometheus JMX exporter (
> https://github.com/prometheus/jmx_exporter) and the Prometheus Kafka
> exporte
Dear Kafka Experts,
Could you please suggest a good metrics exporter for consumer lag and
topic-level metrics, apart from LinkedIn's Kafka Burrow, for the Kafka broker cluster?
Hi Kafka/ZooKeeper experts,
Is it possible to upgrade a 3.4.14 ZooKeeper cluster in a rolling fashion
(one node at a time) to ZooKeeper version 3.5.7? Would the cluster work with a
mixed combination of 3.4.14 and 3.5.7 nodes? Please advise.
Any help from the experts, please?
On Sat, Apr 8, 2023 at 2:23 PM Kafka Life wrote:
Hello Kafka Experts,
I need some help. Currently the Grafana agent
triggering kafka/3.2.3/config/kafka_metrics.yml is sending over 5 thousand
metrics. Is there a way to limit the metrics that are sent and send
only what is required? Any pointers or such a customized script would be much
appreciated.
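Assuming the metrics come through a Prometheus JMX exporter configuration, the usual way to cut the volume is to allowlist only the MBeans you care about. A sketch of such a config; the object names below are examples, not a recommended set:

```yaml
# Prometheus JMX exporter config sketch: scrape only a few broker MBeans.
allowlistObjectNames:
  - "kafka.server:type=BrokerTopicMetrics,name=MessagesInPerSec"
  - "kafka.server:type=ReplicaManager,name=UnderReplicatedPartitions"
  - "kafka.controller:type=KafkaController,name=ActiveControllerCount"
rules:
  - pattern: ".*"
```

Older exporter releases use `whitelistObjectNames` for the same purpose.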
Hi experts, any pointers or guidance on this?
On Wed, Apr 5, 2023 at 8:35 PM Kafka Life wrote:
Respected Kafka experts/managers,
Does anyone have a statement of work covering activities related to Kafka
cluster management, for Apache or Confluent Kafka? Something to assess and
propose to an enterprise for Kafka cluster management. I request you to kindly
share any such documentation.
This is really great information, Paul. Thank you.
On Tue, Mar 28, 2023 at 4:01 AM Brebner, Paul
wrote:
> I have a recent 3 part blog series on Kraft (expanded version of ApacheCon
> 2022 talk):
>
> https://www.instaclustr.com/blog/apache-kafka-kraft-abandons
Many thanks, Josep, for your response.
On Mon, Mar 27, 2023 at 4:50 PM Josep Prat
wrote:
> Hello there,
>
> You can find the general policy here:
>
> https://cwiki.apache.org/confluence/display/KAFKA/Time+Based+Release+Plan#TimeBasedReleasePlan-WhatIsOurEOLPolicy
Hello Kafka experts,
Where can I see the end-of-support dates for Apache Kafka versions? I would
like to know when the 0.11 Kafka version was deprecated.
Hello Kafka experts,
Is there a way to have a Kafka cluster functional, serving
producers and consumers, without a ZooKeeper cluster managing the
instance?
Which version of Kafka supports this, and how can we achieve it, please?
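This ZooKeeper-less mode is KRaft: it shipped as early access in Kafka 2.8 and was declared production-ready in 3.3. A minimal single-node sketch using the sample properties file shipped with the distribution (paths and the combined broker+controller layout are just the quickstart defaults):

```shell
# config/kraft/server.properties sets process.roles=broker,controller,
# node.id and controller.quorum.voters for a combined node.
bin/kafka-storage.sh random-uuid                 # generate a cluster id
bin/kafka-storage.sh format -t <cluster-id> -c config/kraft/server.properties
bin/kafka-server-start.sh config/kraft/server.properties
```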
Dear Kafka Community,
We are facing an issue with kafka-console-consumer.sh /
kafka-console-producer.sh (which both use the native Java client) and the
bootstrap behavior in conjunction with HAProxy.
Behavior:
After the bootstrap process the Java client doesn't disconnect and just keeps
Thank you Sunil, Peter Raph and Richard for your kind inputs. Much
appreciated.
On Wed, Aug 17, 2022 at 6:46 AM sunil chaudhari
wrote:
> You can try this, if you know what prometheus and how its installed
> configured.
>
>
> https://www.confluent.io/blog/monitor-kafka-clusters-
Hello experts, any info or pointers on my query, please?
On Mon, Aug 15, 2022 at 11:36 PM Kafka Life wrote:
Dear Kafka Experts,
We need to monitor consumer lag, in Grafana, in Kafka clusters running the
2.5.1 and 2.8.0 versions of Kafka.
1/ What is the correct JMX metrics path to evaluate consumer lag in a
Kafka cluster?
2/ I had thought it is FetcherLag, but it looks like it is not, as per the
link below
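For what it's worth, the broker-side FetcherLag MBeans track the replica fetchers, not consumer groups. Consumer lag is exposed client-side by the Java consumer's JMX metrics, so Grafana needs to scrape the consumer JVMs (or compute lag from committed vs. log-end offsets). Assuming the Java client, the MBeans look roughly like this (client-id, topic, and partition are placeholders):

```text
kafka.consumer:type=consumer-fetch-manager-metrics,client-id=<client-id>
    attribute: records-lag-max
kafka.consumer:type=consumer-fetch-manager-metrics,client-id=<client-id>,topic=<topic>,partition=<n>
    attribute: records-lag
```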
Dear Luke, thank you for your kind and prompt response.
On Mon, Apr 4, 2022 at 1:23 PM Luke Chen wrote:
> Hi,
>
> The impact for the CVE-2022-22965? Since this is a RCE vulnerability, which
> means the whole system (including Kafka and ZK) is under the attackers'
> cont
Hi Kafka Experts,
Regarding the recent vulnerability threat in the Spring Framework:
CVE-2022-22965 is a Spring (Java) vulnerability. Could one of you suggest how
Apache Kafka and ZooKeeper are impacted,
and what the ideal fix for this should be?
Vulnerability
Thank you, Malcolm. Will go through this.
On Sat, Feb 26, 2022 at 2:22 AM Malcolm McFarland
wrote:
> Maybe this could help?
> https://github.com/dimas/kafka-reassign-tool
>
> Cheers,
> Malcolm McFarland
> Cavulus
>
>
> On Fri, Feb 25, 2022 at 9:00 AM Kafka Life
Dear Experts,
Do you have any solution for this, please?
On Tue, Feb 22, 2022 at 8:31 PM Kafka Life wrote:
Dear Kafka Experts,
Does anyone have a dynamically generated JSON file based on the
under-replicated partitions in a Kafka cluster?
Every time the URP count increases to over 500, it is a tedious job to
manually create a JSON file.
I request you to share any such script that generates the JSON dynamically.
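One way to script this: parse the output of `kafka-topics.sh --describe --under-replicated-partitions` and emit the topics-to-move JSON that `kafka-reassign-partitions.sh --generate` accepts. A minimal sketch (the function name and sample input are illustrative):

```python
import json
import re

def urp_to_topics_json(describe_output: str) -> str:
    """Collect topic names from `kafka-topics.sh --describe
    --under-replicated-partitions` output and emit the topics-to-move
    JSON accepted by `kafka-reassign-partitions.sh --generate`."""
    topics = set()
    for line in describe_output.splitlines():
        # Each describe line looks like:
        # Topic: <name>  Partition: <n>  Leader: ...  Replicas: ...  Isr: ...
        m = re.search(r"Topic:\s*(\S+)\s+Partition:\s*(\d+)", line)
        if m:
            topics.add(m.group(1))
    doc = {"version": 1, "topics": [{"topic": t} for t in sorted(topics)]}
    return json.dumps(doc, indent=2)
```

The resulting file is then fed to `kafka-reassign-partitions.sh --generate` together with the target broker list.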
Dear Kafka experts, I need your advice, please.
I am running MirrorMaker on Kafka 2.8 to replicate a topic from a Kafka
0.11 instance.
The size of each partition of the topic on 0.11 is always 5 to 6 GB, but
the replicated topic on the 2.8 instance is 40 GB for the same partition.
The topic
Dear Kafka experts,
I have a 10-broker Kafka cluster with all topics having a replication factor
of 3 and 50 partitions;
min.insync.replicas is 2.
One broker went down due to a hardware failure, but many applications
complained they were not able to produce/consume messages.
I request you to please
Thank you, Men and Ran.
On Sat, Nov 6, 2021 at 7:23 PM Men Lim wrote:
> I'm currently using Kafka-gitops.
>
> On Sat, Nov 6, 2021 at 3:35 AM Kafka Life wrote:
>
Dear Kafka experts,
Does anyone have a ready/automated script to create/delete/alter topics in
different environments,
taking configuration parameters as input?
If yes, I request you to kindly share it with me, please.
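The Kafka distribution does not ship such a script, but a thin wrapper over `kafka-topics.sh` that takes topic configuration as input is easy to sketch. The function below is my own invention (name and config shape included); it renders the CLI commands for a given environment:

```python
def topic_commands(bootstrap: str, topics: list, action: str = "create") -> list:
    """Render kafka-topics.sh invocations from per-environment topic configs.

    `topics` is a list of dicts such as
    {"name": "orders", "partitions": 50, "replication_factor": 3}.
    """
    cmds = []
    for t in topics:
        base = f"kafka-topics.sh --bootstrap-server {bootstrap}"
        if action == "create":
            cmds.append(f"{base} --create --topic {t['name']} --partitions {t['partitions']} --replication-factor {t['replication_factor']}")
        elif action == "delete":
            cmds.append(f"{base} --delete --topic {t['name']}")
        elif action == "alter":
            # kafka-topics.sh --alter can only increase the partition count.
            cmds.append(f"{base} --alter --topic {t['name']} --partitions {t['partitions']}")
        else:
            raise ValueError(f"unknown action: {action}")
    return cmds
```

The rendered strings can be executed per environment; the same idea can also be expressed directly against the AdminClient API.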
Hello Luke,
I have built a new Kafka environment with Kafka 2.8.0.
A new consumer set up against this environment is throwing the
below error; the old consumers for the same applications in the same
2.8.0 environment are working fine.
Could you please advise?
2021-11-02 12:25:24
Dear Kafka Experts,
We have set up a consumer group.id = YYY.
But when I tried to connect to the Kafka instance, I got the error message
below. I am sure this consumer group id does not exist in Kafka. We use the
PLAINTEXT protocol to connect to Kafka 2.8.0. Please suggest how to resolve
this issue.
Dear Kafka experts,
When a broker is started using the start script, could any of you please let
me know the sequence of steps that happens in the background as the node comes
up?
For example, when the script is initiated to start:
1/ does it check the indexes?
2/ does it check the ISR?
3/ is the URP being made
Thank you very much, Mr. Israel Ekpo. Really appreciate it.
We are using the 0.10 version of Kafka and are in the process of upgrading to
2.6.1; planning is in progress. And yes, these connections to the ZooKeepers
are for Kafka functionality.
There are frequent incidents where the ZooKeepers get bombarded
Dear Kafka & ZooKeeper experts,
1/ What is ZooKeeper throttling? Is it done at the ZooKeeper level? How is it
set/configured?
2/ Is it helpful?
Dear Kafka experts,
Could one of you please help explain what the below log line in a broker
instance means, and in what scenarios it would occur when no change has been
made?
INFO [GroupCoordinator 9610]: Member
webhooks-retry-app-840d3107-833f-4908-90bc-ea8c394c07c3-StreamThread-2-consumer-f87c3b85
Hello Kafka experts,
The consumer team is reporting an issue while consuming data from a
topic, a "Singularity Header" issue.
Can someone please advise on how to resolve this issue?
The error looks like:
Starting offset: 1226716
offset: 1226716 position: 0 CreateTime: 1583780622665 isvalid: true
Reddy wrote:
> It’s better to mention which version you are using.
>
>
> Hope the following link will help you.
> https://www.programmersought.com/article/5376340/
>
>
> From: Kafka Life
> Date: Wednesday, 9 June 2021 at 4:12 PM
> To: users@kafka.apache.o
Dear Experts, the ZooKeeper ensemble is creating issues in production.
The ZooKeeper logs print as below. Can anyone please suggest why this error
occurs and what the solution for it is?
- ERROR [CommitProcessor:1:NIOServerCnxn@178] - Unexpected Exception:
B8-4E2AA1F9FDF2>
On Tue, Jun 8, 2021 at 3:41 PM Luke Chen wrote:
> Hi,
> About the upgrade documentation, please read here:
> https://kafka.apache.org/documentation/#upgrade
>
> It should answer your 2 questions.
>
> Thanks.
> Luke
>
> On Tue, Jun 8, 2021 at 6:0
Dear Kafka Experts,
1- Can anyone share an upgrade plan with steps/plan/tracker or any
useful documentation, please?
2- We are upgrading Kafka from the old 0.11 version to 2.5. Any
suggestions/directions are highly appreciated.
Thanks
I upgraded confluent-kafka from 1.0.1 to 1.4.2. After upgrading, when I
execute 'pip install' I get the following error. I am using Python
2.7. Has anyone else encountered this issue?
Traceback (most recent call last):
  File "/home/sshil/virtual_envs/ma
    import confluent_kafka as kafka
I was running a test where a Kafka consumer was reading data from multiple
partitions of a topic. While the process was running, I added more
partitions. It took around 5 minutes for the consumer thread to read data from
the new partition. I have found this configuration
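The roughly five-minute delay matches the Java client's default metadata refresh interval, so that is plausibly the configuration meant here. Assuming so, a sketch of lowering it, at the cost of more frequent metadata requests:

```properties
# Java client default is 300000 ms (5 minutes); the consumer only discovers
# new partitions when it refreshes cluster metadata.
metadata.max.age.ms=30000
```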
On Wed, Dec 25, 2019 at 5:51 PM Kafka Shil wrote:
> Hi,
> I am using a docker-compose file to start schema-registry. I need to change
> the default port to 8084 instead of 8081. I made the following changes in
> the docker-compose file.
>
> schema-registry:
> image: confluenti
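For the Confluent images, changing the mapped port alone is not enough; the listener the registry binds to must change as well. A sketch of the relevant compose fragment (image tag and host port are examples):

```yaml
# Both the listener env var and the port mapping must change together.
schema-registry:
  image: confluentinc/cp-schema-registry:latest
  ports:
    - "8084:8084"
  environment:
    SCHEMA_REGISTRY_LISTENERS: http://0.0.0.0:8084
```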
fset.positionDiff(fetchOffset), fetchStatus.fetchInfo.fetchSize)
}
}
so we can ensure that our fetchOffset's segmentBaseOffset is not the same as
the endOffset's segmentBaseOffset. Then we checked our topic-partition's
segments, and we found the data in the segment had all been cleaned by the ka
Oh, please ignore my last reply.
I find that if leaderReplica.highWatermark.messageOffset >= requiredOffset,
this can ensure that all replicas' LEO in curInSyncReplicas is >= the
requiredOffset.
> On 23 Sep 2016, at 15:39, Kafka <kafka...@126.com> wrote:
>
> OK, the example before is n
}
Why not write the code as below?
if (minIsr <= curInSyncReplicas.size && minIsr <= numAcks) {
  (true, ErrorMapping.NoError)
} else {
  (true, ErrorMapping.NotEnoughReplicasAfterAppendCode)
}
It seems that having only one condition in the Kafka broker's code is not en
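To make the completion rule being discussed concrete, here is a toy Python model (not the actual broker source) of how an acks=-1 produce request resolves against the ISR and min.insync.replicas:

```python
def produce_response(min_isr: int, isr_leos: dict, required_offset: int) -> str:
    """Toy model of acks=-1 completion: the high watermark is the minimum
    log end offset (LEO) across the in-sync replicas; the request succeeds
    once it reaches required_offset, and fails if the ISR has shrunk below
    min.insync.replicas."""
    if len(isr_leos) < min_isr:
        # ISR too small: NOT_ENOUGH_REPLICAS_AFTER_APPEND
        return "NotEnoughReplicasAfterAppend"
    high_watermark = min(isr_leos.values())
    return "NoError" if high_watermark >= required_offset else "Pending"
```

In this model, checking `high_watermark >= required_offset` alone already implies every ISR member has caught up, which is the point made in the follow-up message above.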
@wangguozhang, could you give me some advice?
> On 22 Sep 2016, at 18:56, Kafka <kafka...@126.com> wrote:
Hi all,
In terms of the topic: we created a topic with 6 partitions, each with 3
replicas.
In terms of the producer: we send messages with acks=-1 using the sync
interface.
In terms of the brokers: we set min.insync.replicas to 2.
After reviewing the Kafka broker's code, we know
Thanks for your answer. I know the necessity of a key for compacted topics,
and as you know, __consumer_offsets is an internal compacted topic in Kafka,
and its key is a triple of <groupid, topic, partition>; these errors occurred
when the consumer client wanted to commit group offsets.
so wh
Hi,
The server log shows the error below on broker 0.9.0:
ERROR [Replica Manager on Broker 0]: Error processing append operation
on partition [__consumer_offsets,5] (kafka.server.ReplicaManager)
kafka.message.InvalidMessageException: Compacted topic cannot accept message
without
Hi, the leader of __consumer_offsets partition 7 and partition 27 is -1, and
the ISR is null. Can anyone tell me how to recover it? Thank you.
Topic: __consumer_offsets  Partition: 0  Leader: 3  Replicas: 3,4,5  Isr: 4,5,3
Topic: __consumer_offsets  Partition: 1  Leader: 4
Can someone please explain why the latency is so big for me? Thanks.
> On 25 Jun 2016, at 23:16, Jay Kreps <j...@confluent.io> wrote:
>
> Can you sanity check this with the end-to-end latency test that ships with
> Kafka in the tools package?
>
> https://apach
Hi all,
My Kafka cluster is composed of three brokers, each with an 8-core CPU,
8 GB of memory, and a 1 Gb network card.
With the Java async client, I sent 100 messages with a size of 1024 bytes
per message; the send gap between each send is 20 us. The consumer's config is
like
@Jun Rao,
could you give me some suggestions about this question?
> On 18 Jun 2016, at 11:26, Kafka <kafka...@126.com> wrote:
od, so it will not be the bottleneck,
so it does not have the problem of time interpretation.
> On 18 Jun 2016, at 11:26, Kafka <kafka...@126.com> wrote:
I send every message with a timestamp, and when I receive a message, I do a
subtraction between the current timestamp and the message's timestamp; that
gives me the consumer's delay.
Hello, I have done a series of tests on Kafka 0.9.0, and one of the results
confused me.
Test environment:
Kafka cluster: 3 brokers, 8-core CPU / 8 GB mem / 1 Gb network card
Client: 4-core CPU / 4 GB mem
Topic: 6 partitions, 2 replicas
Total messages: 1
Single message size: 1024 bytes
From top we can see 280% CPU utilization. Then I used jstack, and I
found there are 4 threads that use the CPU most, shown below:
"kafka-network-thread-9092-0" prio=10 tid=0x7f46c8709000 nid=0x35dd
runnable [0x7f46b73f2000]
java.lang.Thread.State: RUNNABLE
"kafka-network-thread-
If we have a Kafka producer connecting to a Kafka cluster behind a hardware
load balancer (VIP), will the producer be able to send a message to the right
partition?
Can one of the brokers in the cluster do broker discovery to forward the
message to?
I guess my question is whether it makes sense
or an existing broker is removed, whether we need to keep the list of brokers
updated?
Thanks
On Mon, Mar 31, 2014 at 9:22 PM, Jun Rao jun...@gmail.com wrote:
In Kafka, the broker list you provide to the producer is used only for
fetching metadata, which can be served on any broker. The producer
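In other words, the bootstrap list only needs to reach some broker for the initial metadata fetch; after that, the client talks to partition leaders directly using the advertised addresses returned in that metadata. A sketch of the relevant producer setting (the host name is an example):

```properties
# Used only for the initial metadata fetch; the VIP works here as long as
# the brokers' advertised.listeners are addresses the producer can reach
# directly, since produce requests go straight to partition leaders.
bootstrap.servers=kafka-vip.example.com:9092
```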