Wednesday, February 15, 2023 9:31 PM
To: users@kafka.apache.org
Subject: Question regarding Kafka compaction and tombstone
Hi,
I am using Kafka 3.2 (on Windows) and for one topic I need to send tombstone records.
Everything was OK, but I always see the last value for the key (I even see null
records present after the delete.retention.ms period)
Example
Key1 value1
Key2 value2
Key1 - null record - tombstone record
and so on
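For what it's worth, the expected compaction behaviour in the example above can be simulated: the cleaner keeps only the last value per key, and the tombstone itself should disappear once delete.retention.ms has elapsed after compaction. A toy pure-Python sketch (the timings and record list are made up):

```python
DELETE_RETENTION_MS = 100  # stand-in for the topic's delete.retention.ms

def compact(log, now_ms):
    """Return the compacted view of a log of (key, value, timestamp) records.

    Only the latest record per key survives; a tombstone (value None)
    is dropped entirely once it is older than the retention window.
    """
    latest = {}
    for key, value, ts_ms in log:
        latest[key] = (value, ts_ms)
    view = {}
    for key, (value, ts_ms) in latest.items():
        if value is None and now_ms - ts_ms > DELETE_RETENTION_MS:
            continue  # expired tombstone: the key is fully deleted
        view[key] = value
    return view

log = [("Key1", "value1", 0), ("Key2", "value2", 0), ("Key1", None, 0)]
# Right after the tombstone is written it is still visible:
assert compact(log, now_ms=50) == {"Key1": None, "Key2": "value2"}
# After delete.retention.ms the tombstone itself is gone:
assert compact(log, now_ms=500) == {"Key2": "value2"}
```

If tombstones never disappear in practice, note that the cleaner only compacts closed segments, so records in the active segment are never cleaned regardless of delete.retention.ms.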
1. Is Kafka 2.3.0 going end of life? If yes, what is the expected date?
-> Kafka supports the last 3 releases.
REF:
https://cwiki.apache.org/confluence/display/KAFKA/Time+Based+Release+Plan#TimeBasedReleasePlan-WhatIsOurEOLPolicy
2. Is Kafka 3.1.0 backward compatible with 2.3.0?
-> Since 2.3 to
+1
Me too. We are considering upgrading Kafka from 2.X to 3.X, but don't know
about the compatibility.
thx
Ankit Saran wrote on Tue, Aug 23, 2022 at 22:21:
Hi Team,
We are planning to upgrade our Kafka version from 2.3.0 to 3.1.0. We have
the below queries regarding the same:
1. Is Kafka 2.3.0 going end of life? If yes, what is the expected date?
2. Is Kafka 3.1.0 backward compatible with 2.3.0?
Please help us with the above queries. Thanks in advance.
Thanks for your reply.
Replication factor is 2 and min.insync.replicas is default (1)
I will get back to you on the producer's ack settings.
Regards,
Dhiraj
On Sun, Jun 5, 2022 at 1:20 PM Liam Clarke-Hutchinson
wrote:
Off the top of my head, it looks like it lost network connectivity to some
extent.
Question - what settings were used for topics like efGamePlay? What are
min.insync.replicas and the replication factor, and what acks setting is the
producer using?
Cheers,
Liam
On Fri, 3 Jun 2022 at 22:55, dhiraj
Hi all,
Recently we faced an issue with one of our production kafka clusters:
- It is a 3 node cluster
- kafka server version is 1.0
*Issue*:
One of the brokers had some problem resulting in the following:
1. The broker lost leadership of all of the topic-partitions
2. However the kafka server
> -Original Message-
> From: Bruno Cadonna
> Sent: Tuesday, May 19, 2020 11:42
> To: Users
> Subject: Re: Question regarding Kafka Streams Global State Store
>
> Hi Georg,
>
Hi Georg,
local state stores in Kafka Streams are backed by a Kafka topic by default. So,
if the instance crashes the local state store is restored from the local state
directory. If the local state directory is empty or does not exist the local
state
>
> Georg Schmidt-Dumont
> BCI/ESW17
> Bosch Connected Industry
>
> Tel. +49 711 811-49893
>
> ► Take a look: https://bgn.bosch.com/alias/bci
>
>
>
> -Original Message-
> From: Bruno Cadonna
> Sent: Tuesday, May 19, 2020 10:52
> To: Users
Hi Georg,
From your description, I do not see why you need to use a global state
instead of a local one. Are there any specific reasons for that? With
a local state store you would have the previous record immediately
available.
Best,
Bruno
On Tue, May 19, 2020 at 10:23 AM Schmidt-Dumont Georg
Good morning,
I have set up a Kafka Streams application with the following logic. The incoming
messages are validated and transformed. The transformed messages are then
published to a global state store via topic A as well as to an additional topic
A for consumption by other applications
Dear sir,
I am new to Apache Kafka. I am developing an application in which Apache
Kafka is working as the broker. I am producing random data using a Spring Boot
application and inserting it into PostgreSQL's TimescaleDB using the
Telegraf plugin and the Kafka broker.
While I am running my application and
Hi all,
We have a query regarding memory consumption on Kafka scale-out. It would
be very helpful if you could give a suggestion/solution for the below query.
We are running Kafka as a Docker container on Kubernetes.
A memory limit of 4 GiB is configured for the Kafka broker pod. With some large load
Hi,
Previously I was using Kafka version 1.1.1 and currently we are planning on
migrating to version 2.0.0. But I am facing issues, as there are a lot of
classes which have been removed:
kafka.api.TopicMetadata;
kafka.client.ClientUtils;
kafka.consumer.ConsumerConfig;
Hi,
I am trying to create "n" partitions for a single broker, and those
should be keyed partitions. I am able to push a message into my first
keyed partition by taking ((partition-size) - 1) through a custom partitioner
class, so in this case the first keyed partition will be the last
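The arithmetic described above, routing a key to partition ((partition-size) - 1), can be sketched as partitioner logic. This is a hypothetical pure-Python sketch of the idea, not Kafka's actual Java Partitioner interface:

```python
import zlib

def pick_partition(key: str, num_partitions: int, pin_last: bool = False) -> int:
    """Return a partition index for a record key.

    pin_last=True routes the key to the last partition,
    ((partition count) - 1), as in the question above; otherwise a
    stable hash of the key picks the partition, so the same key
    always lands on the same partition.
    """
    if pin_last:
        return num_partitions - 1
    # zlib.crc32 is deterministic across runs, unlike Python's salted hash().
    return zlib.crc32(key.encode("utf-8")) % num_partitions

# Pinning sends the record to the last partition (index n-1):
assert pick_partition("any-key", 3, pin_last=True) == 2
# Same key always maps to the same partition:
assert pick_partition("user-4456", 3) == pick_partition("user-4456", 3)
```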
Hello,
I'm unable to start ZooKeeper as well as Kafka. I tried downloading
ZooKeeper separately and now I'm able to start ZooKeeper.
Coming to the Kafka server, it throws an error message, "wmic is not recognized as
an internal or external command". I resolved it by adding the
For #1, there is the record-size-avg metric.
Not sure about #2.
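For #1, record-size-avg is a producer-side metric exposed over JMX; failing that, both numbers asked about can be estimated from a sample of produced records. A hypothetical sketch (topic names and sizes are made up):

```python
from collections import defaultdict

# Hypothetical sample: (topic, size_in_bytes) for records produced in one day.
records = [
    ("orders", 1024), ("orders", 2048), ("clicks", 512),
    ("orders", 1024), ("clicks", 768),
]

sizes = defaultdict(list)
for topic, size in records:
    sizes[topic].append(size)

for topic, s in sorted(sizes.items()):
    avg_kb = sum(s) / len(s) / 1024   # 1) average record size in KB
    events_per_day = len(s)           # 2) events written per day
    print(f"{topic}: avg {avg_kb:.2f} KB, {events_per_day} events/day")
```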
On Thu, Feb 8, 2018 at 10:28 AM, Pawan K wrote:
> Hi,
> I am currently trying to research answers for the following questions. Can
> you please let me know where/how could i find these in the configuration.
>
>
Hi,
I am currently trying to research answers for the following questions. Can
you please let me know where/how I could find these in the configuration?
1) Average record size (KB) for data written to each kafka topic
2) Average number of events written to each kafka topic per day
Thanks,
Pavan
Thanks, Faraz.
I am using its Java API. It does not seem to provide such a method on the
consumer.
On Wed, Nov 22, 2017 at 2:45 PM, Faraz Mateen wrote:
> Not sure which client you are using.
> In kafka-python, consumer.config returns a dictionary with all consumer
> properties.
Hello team,
I wanted to know if there is some way I can retrieve consumer properties
from the Kafka consumer. For example, at runtime I may want to know the
group id of a particular consumer, in case multiple consumers are running
in my application.
Thanks & Regards,
Simarpreet
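One workaround when a client API does not expose its configuration (as noted above for the Java consumer) is to keep the properties you constructed the consumer with. A hypothetical, client-agnostic sketch; `make_consumer` stands in for whatever constructor your client library provides:

```python
class ConfiguredConsumer:
    """Thin wrapper that remembers the properties a consumer was built with.

    The wrapper only adds config retrieval, which the Java client lacks;
    the wrapped consumer object is otherwise used as normal.
    """
    def __init__(self, make_consumer=None, **config):
        self._config = dict(config)
        self.consumer = make_consumer(**config) if make_consumer else None

    def property(self, name):
        """Return one of the properties the consumer was created with."""
        return self._config.get(name)

c = ConfiguredConsumer(group_id="analytics", bootstrap_servers="localhost:9092")
assert c.property("group_id") == "analytics"
```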
Images didn't come through.
Consider using a third-party image hosting website.
On Tue, Oct 17, 2017 at 9:36 PM, Pavan Patani
wrote:
> Hello,
>
> Previously I was using old version of Kafka-manager and it was showing
> "Producer Message/Sec and Summed Recent Offsets" parameters in
Hello,
Previously I was using old version of Kafka-manager and it was showing
"Producer Message/Sec and Summed Recent Offsets" parameters in topics as
below.
[image: Inline image 1]
Currently I have installed kafka-manager-1.3.3.14 and now I cannot see
these two "Producer Message/Sec and
Hi,
I have a 3 node Kafka Broker cluster.
I have created a topic, and the leader for the topic is broker 1 (1001). And
that broker died.
But when I see the information in ZooKeeper for the topic, the leader
is still set to broker 1 (1001) and the ISR is set to 1001. Is this a bug in
Kafka, as
The lag numbers are never going to be exactly the same as what the CLI tool
returns, as the broker is queried on an interval for the offset at the end
of each partition. As far as crashing goes, I’d be interested to hear about
specifics as we run it (obviously) and don’t have that problem. It
Hey Abhimanyu,
Not directly answering your questions but in the past we used burrow at my
current company and we had a horrible time with it. It would crash daily
and its lag metrics were very different from what was returned when you would
run the kafka-consumer-groups describe command as you
Hi ,
I am using Burrow to monitor Kafka lags and I have the following queries:
1. On hitting the API /v2/kafka/local/consumer/group1/lag I am not able to
view all the topics' details present in that group, and I am getting complete:
false in the above JSON. What does this mean? Below mentioned is the
Hi,
I am a product engineer at Go-Jek and we are using Kafka in our Data
Engineering team. I came across this slide share presentation on
https://www.slideshare.net/JonBringhurst/kafka-audit-kafka-meetup-january-27th-2015
I would like to know if this Kafka Audit tool is open sourced as well /
Hello Sir:
My name is Chieh-Chun Chang and we have a problem with our Kafka prod
cluster.
Kafka client version
kafka-clients-0.9.0-kafka-2.0.0.jar
Kafka version
kafka_2.10-0.9.0-kafka-2.0.0.jar
Our Kafka broker cluster is experiencing under-replicated partitions
problems and I found out
Hi,
I am using Kafka Server kafka_2.10-0.10.1.0.
I am trying to integrate it with Kerberos server.
I followed steps mentioned at
http://kafka.apache.org/documentation.html#security_sasl
However I am not able to start the server and am getting the following error.
[2016-11-16 09:47:46,578] INFO
Date: 10/9/16 3:58 AM (GMT-08:00) To: users@kafka.apache.org Subject:
> Re: Regarding Kafka
> I did that but i am getting confusing results
>
> e.g
>
> I have created 4 Kafka Consumer threads for doing data analytic, these
> threads just wait for Kafka messages to get consu
in librdkafka, so
others may be able to help you better on that.
//h...@confluent.io
Original message From: Abhit Kalsotra <abhit...@gmail.com>
Date: 10/9/16 3:58 AM (GMT-08:00) To: users@kafka.apache.org Subject: Re:
Regarding Kafka
I did that but i am getting con
same key
> will be guaranteed to go to the same partition and therefore be in order
> for whichever consumer gets that partition.
>
>
> //h...@confluent.io
> Original message From: Abhit Kalsotra <abhit...@gmail.com>
> Date: 10/9/16 12:39 AM (GMT-08:00) To: user
Date: 10/9/16 12:39 AM (GMT-08:00) To: users@kafka.apache.org Subject: Re:
Regarding Kafka
What about the order of messages getting received, if I don't mention the
partition?
Let's say I have user ID 4456 and I have to do some analytics at the
Kafka consumer end, and at my consumer end
PM (GMT-08:00) To: users@kafka.apache.org Subject:
> Re: Regarding Kafka
> Hans
>
> Thanks for the response, yeah you can say yeah I am treating topics like
> partitions, because my
>
> current logic of producing to a respective topic goes something like this
>
> RdKafka:
that they are automatically
distributed out over the available partitions.
//h...@confluent.io
Original message From: Abhit Kalsotra <abhit...@gmail.com>
Date: 10/8/16 11:19 PM (GMT-08:00) To: users@kafka.apache.org Subject: Re:
Regarding Kafka
Hans
Thanks for the response, yeah y
Hans
Thanks for the response. Yes, you could say I am treating topics like
partitions, because my
current logic of producing to a respective topic goes something like this:
RdKafka::ErrorCode resp = m_kafkaProducer->produce(m_kafkaTopic[whichTopic],
Why do you have 10 topics? It seems like you are treating topics like
partitions and it's unclear why you don't just have 1 topic with 10, 20, or
even 30 partitions. Ordering is only guaranteed at a partition level.
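The per-partition ordering guarantee mentioned above can be sketched with a toy in-memory model: a key always hashes to one partition, and each partition is an append-only ordered log, so one key's records are consumed in send order. Assumptions: the key and event names are made up, and CRC32 stands in for Kafka's actual murmur2 key hashing.

```python
import zlib

def partition_for(key: str, num_partitions: int) -> int:
    # Deterministic key -> partition mapping (stand-in for murmur2).
    return zlib.crc32(key.encode("utf-8")) % num_partitions

num_partitions = 30
partitions = [[] for _ in range(num_partitions)]  # one ordered log each

# Produce five keyed events for the same user, in order:
for i in range(5):
    partitions[partition_for("4456", num_partitions)].append(f"event-{i}")

p = partition_for("4456", num_partitions)
# All of the key's events landed in one partition, in send order:
assert partitions[p] == ["event-0", "event-1", "event-2", "event-3", "event-4"]
```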
In general if you want to capacity plan for partitions you benchmark a single
Guys any views ?
Abhi
On Sat, Oct 8, 2016 at 4:28 PM, Abhit Kalsotra wrote:
> Hello
>
> I am using librdkafka c++ library for my application .
>
> *My Kafka Cluster Set up*
> 2 Kafka Zookeper running on 2 different instances
> 7 Kafka Brokers , 4 Running on 1 machine and 3
Hello
I am using the librdkafka C++ library for my application.
*My Kafka Cluster Setup*
2 Kafka ZooKeepers running on 2 different instances
7 Kafka brokers, 4 running on 1 machine and 3 running on the other machine
In total 10 topics, with a partition count of 3 and a replication factor of 3.
Now in my case
-Original Message-
From: Amit K [mailto:amitk@gmail.com]
Sent: Monday, July 18, 2016 8:55 PM
To: users@kafka.apache.org
Subject: Regarding kafka partition and replication
Hi,
I have 3 nodes, each with 3 brokers, Kafka
Hi,
I have 3 nodes, each with 3 brokers, in a Kafka cluster along with a 3-node
ZooKeeper cluster. So in total 9 brokers spread across 3 different machines. I am
tied to Kafka 0.9.
In order to optimally use the infrastructure for 2 topics (which, as of now, are
not expected to grow drastically in the near future), I am
hi!
please have a look at this article. it helped me to use the log compaction
feature mechanism.
i hope that it helps.
regards,
florin
http://www.shayne.me/blog/2015/2015-06-25-everything-about-kafka-part-2/
On Thursday, May 5, 2016, Behera, Himansu (Contractor) <
Hi,
-- I am new to Kafka and ZooKeeper. I have implemented my test environment
with one ZooKeeper node and 3 Kafka nodes. Now I want to increase my 1
ZooKeeper node to a 3-node ensemble.
-- I am continuously producing messages to one of the topics with a Python
script, and while the message producing is in progress
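For reference, growing to a 3-node ensemble means each ZooKeeper server carries the full server list in its zoo.cfg, plus a matching myid file in dataDir. A sketch; the hostnames, ports, and paths below are placeholders:

```properties
# zoo.cfg (the same server list goes on all three nodes)
tickTime=2000
initLimit=10
syncLimit=5
dataDir=/var/lib/zookeeper
clientPort=2181
server.1=zk1.example.com:2888:3888
server.2=zk2.example.com:2888:3888
server.3=zk3.example.com:2888:3888
```

Each node's dataDir also needs a myid file containing just its own id (1, 2, or 3) so it knows which server.N entry it is.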
You can fetch messages by offset.
https://cwiki.apache.org/confluence/display/KAFKA/A+Guide+To+The+Kafka+Protocol#AGuideToTheKafkaProtocol-FetchRequest
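The fetch-by-offset idea above can be sketched with a toy in-memory partition; this is only a model of the FetchRequest semantics, not the wire protocol:

```python
# A partition is an append-only list; an offset is simply the index.
log = []

def append(record) -> int:
    """Append a record and return the offset it was assigned."""
    log.append(record)
    return len(log) - 1

def fetch(offset: int, max_records: int = 10):
    """FetchRequest semantics: return records starting at 'offset'."""
    return log[offset:offset + max_records]

for trap in ["trap-a", "trap-b", "trap-c"]:
    append(trap)

assert fetch(1) == ["trap-b", "trap-c"]
assert fetch(0, max_records=1) == ["trap-a"]
```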
On Fri, Feb 26, 2016 at 7:23 AM rahul shukla
wrote:
> Hello,
> I am working SNMP trap parsing project in my acadmic.
Hello,
I am working on an SNMP trap parsing project in my academics. I am using the
Kafka messaging system in my project. Actually, I want to store a trap object,
which is obtained from an SNMP agent, in Kafka, and retrieve that object on
the other side for further processing.
So my query is: is there any way to store a
What do you mean by "Broker is down"? Has the process shut down and exited, or
is the broker not reachable?
Thanks,
Mayuresh
On Wed, Aug 5, 2015 at 12:12 PM, Chinmay Soman chinmay.cere...@gmail.com
wrote:
Hey guys,
We're using Kafka version 0.8.2.0 and using the Java producer
(KafkaProducer)
After digging in the 0.8.2 code, it seems like the callback is not getting
invoked since 'handleDisconnections' is not adding a disconnected
ClientResponse to the list of responses. I do see a 'Node 0 disconnected'
message. However, I don't see a 'Cancelled request due to node being
disconnected'
Your producer might get stuck if the Kafka broker becomes unreachable, since
there is no socket timeout on the new producer. We are adding that in KAFKA-2120.
Thanks,
Mayuresh
On Wed, Aug 5, 2015 at 3:47 PM, Chinmay Soman chinmay.cere...@gmail.com
wrote:
After digging in the 0.8.2 code, it seems
Hi,
just out of curiosity, and because of Eugene's email, I browsed
KAFKA-1477 and it talks about SSL a lot. So I thought I might throw in
this http://tools.ietf.org/html/rfc7568 RFC. It basically says move away
from SSL now and only do TLS. The title of the ticket still mentions TLS
but
shouldn't the new consumer api be removed from the 0.8.2 code base then?
On Fri, Jan 23, 2015 at 10:30 AM, Joe Stein joe.st...@stealth.ly wrote:
The new consumer is scheduled for 0.9.0.
Currently Kafka release candidate 2 for 0.8.2.0 is being voted on.
There is an in progress patch to the
The new consumer api is actually excluded from the javadoc that we generate.
Thanks,
Jun
On Mon, Jan 26, 2015 at 11:54 AM, Jason Rosenberg j...@squareup.com wrote:
shouldn't the new consumer api be removed from the 0.8.2 code base then?
On Fri, Jan 23, 2015 at 10:30 AM, Joe Stein
Maybe we should add "experimental" to the documentation so folks that don't
know understand.
/***
Joe Stein
Founder, Principal Consultant
Big Data Open Source Security LLC
http://www.stealth.ly
Twitter: @allthingshadoop
Hi Team,
I was playing around with your recent release, 0.8.2-beta.
The producer worked fine whereas the new consumer did not.
org.apache.kafka.clients.consumer.KafkaConsumer
After digging into the code I realized that the implementation for the same is
not available. Only the API is present.
Could you please
Hi ,
I have a general query:
As per the code in the Kafka producer, serialization happens before
partitioning. Is my understanding correct? If yes, what is the reason for it?
Regards,
Liju John
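For context, partitioning on the serialized key bytes keeps the key-to-partition mapping consistent across client languages, since every client sees the same bytes. A hedged toy sketch of that order of operations (JSON and CRC32 are stand-ins for whatever serializer and hash the real client uses):

```python
import json
import zlib

def send(key, value, num_partitions: int):
    """Toy producer path: serialize first, then partition on the key *bytes*.

    Hashing the serialized bytes (not the in-memory object) means any
    client that serializes the key the same way picks the same partition.
    """
    key_bytes = json.dumps(key).encode("utf-8")         # 1) serialize
    value_bytes = json.dumps(value).encode("utf-8")
    partition = zlib.crc32(key_bytes) % num_partitions  # 2) partition
    return partition, key_bytes, value_bytes

p1, _, _ = send({"user": 4456}, {"event": "click"}, 6)
p2, _, _ = send({"user": 4456}, {"event": "view"}, 6)
assert p1 == p2  # same serialized key -> same partition
```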
hi,
I have the following doubts regarding some Kafka config parameters:
For example, if I have a Throughput topic with replication factor 1 and a
single partition 0, then I will see the following files under
/tmp/kafka-logs/Throughput_0:
.index
.log
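For context, the .index file is a sparse mapping from offset to byte position in the .log file, so a fetch can seek close to an offset and then scan forward. A simplified sketch of the lookup (the index entries here are made up):

```python
import bisect

# Sparse index: (relative_offset, byte_position) entries, sorted by offset.
index = [(0, 0), (100, 4096), (200, 8192)]

def locate(target_offset: int) -> int:
    """Return the byte position to start scanning the .log file from:
    the position of the greatest indexed offset <= target_offset."""
    offsets = [o for o, _ in index]
    i = bisect.bisect_right(offsets, target_offset) - 1
    return index[i][1]

assert locate(150) == 4096   # scan forward from offset 100's position
assert locate(0) == 0
```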
Hi,
Currently we are trying to configure Kafka in our system for pulling messages
from queues.
We have multiple consumers (we might want to add consumers if the load on one
consumer increases) which need to receive and process messages from a Kafka
queue. Based on my understanding, under a single
Hi Madhavi,
Dynamically re-balancing partitions based on processing efficiency and load
is a bit tricky to do in the current consumer, since rebalances will only be
triggered by a consumer membership change or a topic/partition change. For your
case you would probably stop the slow consumer so that a
Hi All,
I am new to the Kafka broker and realized that the Kafka broker does not enforce
client authentication at the connection or message level.
To avoid DoS attacks, we are planning to implement security certificates at
the client connection level, not at the message level, so that
we can authenticate clients