Hi.
I also guess the main reason for using Future was JDK 1.7 support, which
is no longer necessary in the current Kafka version.
Actually, there's a KIP about this:
https://cwiki.apache.org/confluence/pages/viewpage.action?pageId=100829459
but it seems to be inactive now.
> I wonder if it is
Dear Apache Kafka Developers,
I'm a software engineer with 4 years of experience in South Korea.
I have some questions that came up while reading the Kafka Producer API.
*Why Use "Future" and Not "CompletableFuture"?*
In the case of "Future", blocking occurs when calling "*get()*", so I
thought "CompletableFuture" would be better when doing more
Hello Xiaochi ,
I am not sure if I have understood the problem correctly, but be aware
that only old log segments, and not the current (active) log segment, are
taken into account for deletions. So if you want the data to be deleted
in a timely manner, you also need to configure a tighter interval
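For example (a sketch only, values purely illustrative), rolling segments
more often leaves smaller closed segments for retention to delete:

    # force a new segment at least hourly
    log.roll.ms=3600000
    # or once a segment reaches ~100 MB
    log.segment.bytes=104857600
    # how often the retention checker runs
    log.retention.check.interval.ms=300000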
Hello,
I am currently using kafka 3.1.0 with java 1.8. I have set kafka log
retention policy in the server.properties like this:
log.retention.hours=6
log.retention.bytes=5368709120
log.segment.bytes=1073741824
log.retention.check.interval.ms=30
log.cleanup.policy=delete
However, it see
On Wed, Jun 9, 2021 at 9:42 PM Calvin Chen wrote:
> Hi all
>
> I have a question about kafka acl, is it possible to limit user
Hi all
I have a question about Kafka ACLs: is it possible to restrict users' access
to a topic?
For example, if person-a created kafka-user-a and granted kafka-user-a access
to topic-a, and person-b knows there is topic-a, but he doesn't know the
credentials to access topic-a via kafka-user-a,
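For reference, a minimal sketch with the stock ACL tool (broker address and
names are illustrative; this only has an effect once an authorizer is
enabled on the brokers):

    bin/kafka-acls.sh --bootstrap-server localhost:9092 --add \
      --allow-principal User:kafka-user-a \
      --operation Read --operation Write \
      --topic topic-a

With such an ACL in place, a client that cannot authenticate as
kafka-user-a would be denied access to topic-a.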
Hello Gareth,
There is a checkpoint file that records the corresponding offset of the
changelog for the state store data co-located within the state directory;
after the partition is migrated to new owners, this checkpoint file along
with the state store would not be deleted immediately but follow
Hi Guozheng,
Thanks very much again for the answers!
One follow-up on the first question. Just so I understand it, how would it
know where to continue from?
I would assume that once we repartition, the new node will own the position
in the consumer group for the relevant partition(s)
so Kafka/Zoo
Hello Gareth,
1) For this scenario, its state should be reusable and we do not need to
rebuild it from scratch by re-reading from Kafka.
2) "Warmup replicas" is just a special standby replica that is temporary,
note that if there's no partition migration needed at the moment, the
num.warmup.replicas is ac
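For reference, a sketch of the related Streams settings (values
illustrative; in recent versions the warmup cap is spelled
max.warmup.replicas):

    import java.util.Properties;
    import org.apache.kafka.streams.StreamsConfig;

    Properties props = new Properties();
    // keep a hot copy of each store on another instance
    props.put(StreamsConfig.NUM_STANDBY_REPLICAS_CONFIG, 1);
    // cap how many temporary warmup replicas may exist during migration
    props.put(StreamsConfig.MAX_WARMUP_REPLICAS_CONFIG, 2);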
Hi,
Thanks very much for answers to my previous questions here.
I had a couple more questions about repartitioning and I just want to
confirm my understanding.
(1) Given the following scenario:
(a) I have a cluster of Kafka stream nodes with partitions assigned to each.
(b) One node goes down.
he.org
Subject: Question about Kafka TLS
Hi,
I have a question about TLS configuration on the Kafka client side. Based on
the official document, if clients want to enable TLS, they must put
ssl.truststore.location in the client config, pointing to a JKS file that
holds the trust store. My question is t
Hi Team,
I was trying to leverage some enhancements in Kafka Connect in the 2.0.0
release as specified by this KIP
https://cwiki.apache.org/confluence/display/KAFKA/KIP-298%3A+Error+Handling+in+Connect
and I came across this good blog post by Robin
https://www.confluent.io/blog/kafka-connect-deep-dive-
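For anyone following along, the KIP-298 knobs look roughly like this in a
connector config (my own sketch; names per the KIP, check the docs for your
version):

    # tolerate bad records instead of failing the task
    errors.tolerance=all
    # log failures for debugging
    errors.log.enable=true
    errors.log.include.messages=true
    # route failed records to a dead letter queue (sink connectors only)
    errors.deadletterqueue.topic.name=my-connector-dlq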
Hi Thomas,
We recently fixed a bug,
https://issues.apache.org/jira/browse/KAFKA-8191, which allows users to
configure their own KeyManager and TrustManager. One can implement these
KeyManagers and pass them as configs, and these KeyManagers can make a call
to a service to fetch a certif
Hi,
I have a question about TLS configuration on the Kafka client side. Based on
the official document, if clients want to enable TLS, they must put
ssl.truststore.location in the client config, pointing to a JKS file that
holds the trust store. My question is: is this config mandatory? Is there
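For context, the usual client-side settings look like this (a sketch; paths
and passwords are placeholders). As far as I know, if no truststore is
configured the JVM's default trust store is used, so the setting is not
strictly mandatory when the broker's certificate chains to a well-known CA:

    security.protocol=SSL
    ssl.truststore.location=/path/to/client.truststore.jks
    ssl.truststore.password=changeit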
PM
> To: users@kafka.apache.org
> Subject: Re: question about kafka topic
>
> Writes must always go to the partition leader, i.e., if an error occurs, the
> message cannot simply be delivered to a different broker.
>
> However, if a broker is in bad shape, it could get leadership r
Sorry I did not mention one point. I am seeing this error on the consumer side.
-Original Message-
From: Matthias J. Sax [mailto:matth...@confluent.io]
Sent: Sunday, March 10, 2019 11:01 PM
To: users@kafka.apache.org
Subject: Re: question about kafka topic
Writes must always go to the
s, it won't make sense to retry because the same error would be
returned by the broker on retries, too.
Hope this helps.
-Matthias
On 3/7/19 11:27 PM, Calvin Chen wrote:
> Hi,
> I have a question about kafka topic, recently we face kafka client sending
> message to kafka top
Hi,
I have a question about a kafka topic. Recently we faced an issue with a kafka
client sending messages to a kafka topic: we got an error about offsets, and
the client could not send messages to the kafka cluster.
My question is: since we configure the kafka servers as one cluster, when the
cluster gets a message, will it try its best
Hi,
We are using a dockerized version of Kafka:
https://github.com/wurstmeister/kafka-docker
We deploy using ECS on AWS machines.
We have "tests" in our system, and for each "test" we have new Kafka and
Zookeeper brokers (for security reasons, it's a must).
We run on EC2, and using Elastic IP (i
The count is stored in RocksDB, which is persisted to disk. It is not
in-memory unless you specifically use an InMemoryStore.
On Wed, 1 Aug 2018 at 12:53 Kyle.Hu wrote:
> Hi, bosses:
> I have read the word count demo of Kafka Stream API, it is cool that
> the Kafka Stream keeps the status,
Hi, bosses:
I have read the word count demo of the Kafka Streams API; it is cool that
Kafka Streams keeps the state. I have a question about it: whether it would
cause memory problems when the keys accumulate a lot?
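To make the distinction concrete, a sketch (store names are illustrative)
of choosing the store type explicitly in the count step:

    import org.apache.kafka.streams.kstream.KTable;
    import org.apache.kafka.streams.kstream.Materialized;
    import org.apache.kafka.streams.state.Stores;

    // persistent RocksDB store (the default): keys spill to disk, not heap
    KTable<String, Long> counts = groupedWords.count(
        Materialized.as(Stores.persistentKeyValueStore("counts-store")));

    // opt-in purely in-memory store: all keys live on the heap
    KTable<String, Long> countsInMem = groupedWords.count(
        Materialized.as(Stores.inMemoryKeyValueStore("counts-mem")));

(Here groupedWords is assumed to be the KGroupedStream<String, String> from
the word-count demo.)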
Yes that is correlated, thanks for the reminder.
I've updated the JIRA to reflect your observations as well.
Guozhang
On Wed, Mar 28, 2018 at 12:41 AM, Mihaela Stoycheva <
mihaela.stoych...@gmail.com> wrote:
> Hello Guozhang,
>
> Thank you for the answer, that could explain what is happening.
Hello Guozhang,
Thank you for the answer, that could explain what is happening. Is it
possible that this is related in some way to
https://issues.apache.org/jira/browse/KAFKA-6538?
Mihaela
On Wed, Mar 28, 2018 at 2:21 AM, Guozhang Wang wrote:
> Hello Mihaela,
>
> It is possible that when you h
Hello Mihaela,
It is possible that when you have caching enabled, the value of the record
has already been serialized before being sent to the changelogger while the
key was not. Admittedly it is not very friendly for troubleshooting via the
related log4j entries.
Guozhang
On Tue, Mar 27, 2018 at 5:25
Hello,
I have a Kafka Streams application that is consuming from two topics and
internally aggregating, transforming and joining data. One of the
aggregation steps is adding an id to an ArrayList of ids. Naturally, since
there was a lot of data, the changelog message became too big and was not
sent
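If larger changelog records are genuinely needed, the internal producer's
limit can be raised through the Streams config prefix. A sketch (value
illustrative; the broker- and topic-side max.message.bytes must allow it
too):

    import java.util.Properties;
    import org.apache.kafka.clients.producer.ProducerConfig;
    import org.apache.kafka.streams.StreamsConfig;

    Properties props = new Properties();
    // raise the client-side cap on a single request (here ~5 MB)
    props.put(StreamsConfig.producerPrefix(ProducerConfig.MAX_REQUEST_SIZE_CONFIG),
              5 * 1024 * 1024);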
: Wednesday, September 20, 2017 9:28:17 AM
Subject: Re: Question about Kafka
Producer configuration?
On Wed, Sep 20, 2017, 2:50 PM MAHA ALSAYASNEH <
maha.alsayas...@univ-grenoble-alpes.fr> wrote:
> Hello,
>
> Any suggestion regarding this msg:
> " org.apache.kafka.com
ms has passed since batch creation plus linger time "
>
> Thanks in advance
> Maha
>
>
> From: "MAHA ALSAYASNEH"
> To: "users"
> Sent: Tuesday, September 19, 2017 6:18:25 PM
> Subject: Re: Question about Kafka
>
> Well I
t: Tuesday, September 19, 2017 6:18:25 PM
Subject: Re: Question about Kafka
Well I kept the default:
log.retention.hours=168
Here are my broker configurations:
# Server Basics #
# The id of the broker. This must be set to a
;
Sent: Tuesday, September 19, 2017 6:11:05 PM
Subject: Re: Question about Kafka
What is the retention time on the topic you are publishing to?
From: MAHA ALSAYASNEH
Sent: Tuesday, September 19, 2017 10:25:15 AM
To: users@kafka.apache.org
Subject: Qu
What is the retention time on the topic you are publishing to?
From: MAHA ALSAYASNEH
Sent: Tuesday, September 19, 2017 10:25:15 AM
To: users@kafka.apache.org
Subject: Question about Kafka
Hello,
I'm using Kafka 0.10.1.1
I set up my cluster Kafka + zookeep
Hello,
I'm using Kafka 0.10.1.1
I set up my Kafka + zookeeper cluster on three nodes (three brokers, one
topic, 6 partitions, 3 replicas).
When I send messages using the Kafka producer (an independent node),
sometimes I get this error and I couldn't figure out how to solve it.
" org.apache.kafka.c
output, but it should get fresh
> offsets (with `--describe` for example), since the old offsets were
> removed once it became inactive.
>
> --Vahid
>
>
>
>
> From: Subhash Sriram
> To: users@kafka.apache.org
> Date: 05/05/2017 02:38 PM
> Subject: Re
:Re: Kafka 0.10.1.0 - Question about
kafka-consumer-groups.sh
Hi Vahid,
Thank you very much for your reply! I appreciate the clarification.
Unfortunately, I didn't really try the command until today. That being
said, I created a couple of new groups and consumed from a test topic
today, and
h Sriram
> To: users@kafka.apache.org
> Date: 05/05/2017 01:43 PM
> Subject:Kafka 0.10.1.0 - Question about kafka-consumer-groups.sh
>
>
>
> Hey everyone,
>
> I am a little bit confused about how the kafka-consumer-groups.sh/
> ConsumerGroupCommand wor
Kafka 0.10.1.0 - Question about kafka-consumer-groups.sh
Hey everyone,
I am a little bit confused about how the kafka-consumer-groups.sh/
ConsumerGroupCommand works, and was hoping someone could shed some light on
this for me.
We are running Kafka 0.10.1.0, and using the new Consumer AP
Hey everyone,
I am a little bit confused about how the kafka-consumer-groups.sh/
ConsumerGroupCommand works, and was hoping someone could shed some light on
this for me.
We are running Kafka 0.10.1.0, and using the new Consumer API with the
Confluent.Kafka C# library (v0.9.5) that uses librdkafka
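For reference, a sketch of the commands in question (host and group name
illustrative; on 0.10.x you may also need the --new-consumer flag):

    # list the consumer groups known to the brokers
    bin/kafka-consumer-groups.sh --bootstrap-server localhost:9092 --list

    # show per-partition current offset, log-end offset, and lag for a group
    bin/kafka-consumer-groups.sh --bootstrap-server localhost:9092 \
      --describe --group my-group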
Hi Karthik,
I think in the current trunk we already effectively load balance across
processes (they are called "clients" in the partition assignor).
More specifically:
1. Consumer clients embed a "client UUID" in their subscription so that
the leader can group them into a single client, who
Hi,
I guess it's currently not possible to load balance between different
machines. It might be a nice optimization to add to Streams though.
Right now, you should reduce the number of threads. Load balancing is
based on threads, and thus, if Streams places tasks on all threads of one
machine,
Hey,
I have a typical scenario for a kafka-streams application in a production
environment.
We have a kafka cluster with multiple topics. Messages from one topic are
being consumed by the kafka-streams application. The topic currently has 9
partitions. We have configured the consumer thread coun
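For reference, the thread count discussed above is a single Streams
setting. A sketch (one thread per instance shown, value illustrative):

    import java.util.Properties;
    import org.apache.kafka.streams.StreamsConfig;

    Properties props = new Properties();
    // fewer threads per instance lets tasks spread across machines
    props.put(StreamsConfig.NUM_STREAM_THREADS_CONFIG, 1);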
Hi Vishnu,
Assuming that the POS application will generate POS events, we could
accumulate these events in Kafka and use them as a data feed for live
dashboards. We could do some common event stream processing within
Kafka itself using the Streams API. Alternatively, the POS events stream
could
I read about Kafka, and I am still thinking about some scenarios: what are
the possibilities for using Kafka in a cloud POS application? Does it make
sense to do a live dashboard?
Thanks
Amtest
Hi, I have a question. When using connect-distributed, I start some
connectors that push data from Kafka to ES. But there are many errors
in the log. Why do these errors occur, and how can I solve the problem?
Thanks!
[2017-01-09 16:24:30,645] INFO Sink task WorkerSinkTask{id=es4kafa1112-0}
apache.org; Radoslaw Gruchalski
Subject: RE: A question about kafka
Hi,
Thanks for your reply!
OK, I got it. And there is a parameter named compression.type in
config/producer.properties, which has the same usage as "--compression-codec",
I think. I modified compression.type in config/produce
I have tested my kafka program using kafka-console-producer.sh and
kafka-console-consumer.sh.
step 1:
[oracle@bd02 bin]$ ./kafka-console-producer.sh --broker-list
133.224.217.175:9092 --topic oggtopic
step 2:
[oracle@bd02 bin]$ ./kafka-console-consumer.sh --zookeeper
133.224.217.
oducer.config config/producer.properties
--broker-list localhost:9092 --topic test
Best Regards
Johnny
-Original Message-
From: Hans Jespersen [mailto:h...@confluent.io]
Sent: October 17, 2016 14:29
To: users@kafka.apache.org; Radoslaw Gruchalski
Subject: RE: A question about kafka
Because t
Because the producer-property option is used to set properties other than
the compression type.
//h...@confluent.io
Original message From: ZHU Hua B
Date: 10/16/16 11:20 PM (GMT-08:00) To:
Radoslaw Gruchalski , users@kafka.apache.org Subject: RE:
A question about kafka
:03
To: users@kafka.apache.org
Subject: Re: A question about kafka
This is a known issue in some of the command-line tools.
JIRA is here :
https://issues.apache.org/jira/browse/KAFKA-2619
On Mon, Oct 17, 2016 at 11:16 AM, ZHU Hua B
wrote:
> Hi,
>
>
> Anybody could help to answer t
custom configuration for a user-defined message reader.
Best Regards
Johnny
From: Radoslaw Gruchalski [mailto:ra...@gruchalski.com]
Sent: October 17, 2016 14:02
To: ZHU Hua B; users@kafka.apache.org
Subject: RE: A question about kafka
Hi,
I believe the answ
ards
>
> Johnny
>
> -Original Message-
> From: ZHU Hua B
> Sent: October 14, 2016 16:41
> To: users@kafka.apache.org
> Subject: [COMMERCIAL] A question about kafka
>
> Hi,
>
>
> I have a question about kafka, could you please help to have a look?
>
> I want
] A question about kafka
Hi,
I have a question about kafka, could you please help to have a look?
I want to send a message from producer with snappy compression codec. So I
run the command "bin/kafka-console-producer.sh --compression-codec snappy
--broker-list localhost:9092 --topic test"
Hi,
Could anybody help to answer this question? Thanks!
Best Regards
Johnny
-Original Message-
From: ZHU Hua B
Sent: October 14, 2016 16:41
To: users@kafka.apache.org
Subject: [COMMERCIAL] A question about kafka
Hi,
I have a question about kafka, could you please help to have a
Hi,
I have a question about kafka, could you please help to have a look?
I want to send a message from producer with snappy compression codec. So I run
the command "bin/kafka-console-producer.sh --compression-codec snappy
--broker-list localhost:9092 --topic test", after that I c
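For reference, both working invocations side by side (host, topic, and
paths illustrative):

    # 1) the dedicated flag
    bin/kafka-console-producer.sh --compression-codec snappy \
      --broker-list localhost:9092 --topic test

    # 2) a producer config file containing compression.type=snappy
    bin/kafka-console-producer.sh --producer.config config/producer.properties \
      --broker-list localhost:9092 --topic test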
The most common use case for Kafka is within a data center, but you can
absolutely produce data across the WAN. You may need to adjust some
settings (e.g. timeouts, max in flight requests per connection if you want
high throughput) to account for operating over the WAN, but you can
definitely do it
Hello Guys.
We are going to install Apache Kafka in our local data center, and different
producers distributed across different locations will be connected to this
server.
Our producers will use an Internet connection and will send 10 MB data
packages every 30 seconds.
I was won
og to Kafka's producer?
>
>Thanks
>Liang
>
>-Original Message-
>From: Jiangjie Qin [mailto:j...@linkedin.com.INVALID]
>Sent: Monday, April 06, 2015 11:46 AM
>To: users@kafka.apache.org
>Subject: Re: question about Kafka
>
>Hey Liang,
>
>
users@kafka.apache.org
Subject: Re: question about Kafka
Hey Liang,
Have you looked at the quick start here:
https://kafka.apache.org/documentation.html#quickstart
In Kafka, on the producer side, there is no concept of "commit". If you are
producing using KafkaProducer, you can do a send.get(), this
Also, if you are using Kafka from the latest trunk, KafkaProducer has a
flush() interface that you may call. This will ensure all the messages
previously sent from the send() methods are sent to the Kafka server.
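In code, a minimal sketch of both approaches (topic and message values
illustrative; send().get() throws checked exceptions that a real caller
must handle):

    import java.util.Properties;
    import org.apache.kafka.clients.producer.KafkaProducer;
    import org.apache.kafka.clients.producer.ProducerRecord;

    Properties props = new Properties();
    props.put("bootstrap.servers", "localhost:9092");
    props.put("key.serializer",
              "org.apache.kafka.common.serialization.StringSerializer");
    props.put("value.serializer",
              "org.apache.kafka.common.serialization.StringSerializer");
    KafkaProducer<String, String> producer = new KafkaProducer<>(props);

    // synchronous: block until this record is acked (or an exception is thrown)
    producer.send(new ProducerRecord<>("access-log", "one line")).get();

    // asynchronous: queue many sends, then flush() to wait for all of them
    producer.send(new ProducerRecord<>("access-log", "another line"));
    producer.flush();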
On 4/3/15, 3:38 PM, "Sun, Joey" wrote:
>Hello, group
>
>I am a newbie to Kafka. I am research
Hey Liang,
Have you looked at the quick start here:
https://kafka.apache.org/documentation.html#quickstart
In Kafka, on the producer side, there is no concept of "commit". If you
are producing using KafkaProducer, you can do a send.get(); this is a
synchronous send, so if no exception was thrown,
Hello, group
I am a newbie to Kafka. I am researching how to commit newly appended log
messages (e.g. Apache access logs) to Kafka. Could you please share some
ideas/solutions?
Thanks
Liang
Hi all,
I have a question about kafka.
The Kafka producer speed is different from what I found on the internet, and
the capacity is different across versions.
I tested this through kafka-producer-perf-test.sh.
1. Could you let me know the general send speed with threads: 1,
batch: 1, and the
Hi, I'm having trouble understanding the results from running
kafka-consumer-perf-test. For a low number of messages, I see very low
throughput in terms of messages/second. Here is a table of results:
fetch.size  data.consumed.in.MB  MB.sec  data.consumed.in.nMsg  nMsg.sec
1048576     0.0003               0       1                      0.014
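With a single message, startup and fetch latency presumably dominate, so
nMsg.sec is tiny. Throughput numbers only become meaningful once enough
messages are consumed to amortize that cost, e.g. (a sketch; flags vary by
version, older releases use --zookeeper instead of --broker-list):

    bin/kafka-consumer-perf-test.sh --broker-list localhost:9092 \
      --topic test --messages 1000000 --fetch-size 1048576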
> My question is, does the number of kafka consumers mean the number of
> kafka streams?
>
Yes. To know the total number of consumers/streams in a group, you need to
add up the number of streams on every consumer instance
> For example, I have one broker with one partition. What if I create
> th
Hi
I know that if the number of kafka consumers is greater than the number of
partitions in the kafka broker cluster, several kafka consumers will
be idle.
My question is: does the number of kafka consumers mean the number of
kafka streams?
For example, I have one broker with one partition. What if I