Hello Xiaochi,
I am not sure if I have understood the problem correctly, but be aware that
only old (closed) log segments, and not the current active segment, are
taken into account for deletion. So if you want the data to be deleted
in a timely manner, you also need to configure a tighter interval for
rolling segments and re-checking retention.
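For reference, this might look roughly like the following broker settings — a sketch only, with illustrative values rather than recommendations:

```properties
# Only closed (old) segments are eligible for deletion, so roll them sooner:
log.roll.ms=3600000                      # force a new segment at least every hour
log.segment.bytes=104857600              # ...or once a segment reaches 100 MB
log.retention.check.interval.ms=300000   # re-check retention every 5 minutes
```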
On Wed, Feb 2, 2022 at 3:36 PM karan alang wrote:
> Hello All,
>
> I'm trying to run a Structured Streaming program on GCP Dataproc, which
> accesses the data from Kafka and prints it.
>
> Access to Kafka is using SSL, and the
Hi All,
does Kafka have any metrics for the number of producers and consumers
connected to a Kafka cluster at any given time?
Thanks,
Dhirendra.
Hello All,
I'm trying to run a Structured Streaming program on GCP Dataproc, which
accesses the data from Kafka and prints it.
Access to Kafka is using SSL, and the truststore and keystore files are
stored in buckets. I'm using Google Storage API to access the bucket, and
store the file in the
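For anyone hitting a similar setup, the SSL side of such a job might look roughly like the options below — a minimal sketch, assuming the truststore/keystore have already been copied from the bucket to the local filesystem; all hosts, paths, passwords, and the topic name are placeholders:

```python
# Hypothetical Kafka SSL options one might pass to
# spark.readStream.format("kafka").options(**kafka_options).load()
# after downloading the JKS files from the GCS bucket to local disk
# (e.g. via the Google Cloud Storage client library).
kafka_options = {
    "kafka.bootstrap.servers": "broker-1:9093",
    "subscribe": "my-topic",
    "kafka.security.protocol": "SSL",
    "kafka.ssl.truststore.location": "/tmp/kafka.truststore.jks",
    "kafka.ssl.truststore.password": "changeit",
    "kafka.ssl.keystore.location": "/tmp/kafka.keystore.jks",
    "kafka.ssl.keystore.password": "changeit",
}
```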
Yes, thank you Israel. rmoff also clarified things for me over on Reddit.
On Wed, Jan 26, 2022 at 6:41 PM Israel Ekpo wrote:
> Kafka 3.x is production ready if you are running it with ZooKeeper
>
> If you are running it in KRaft mode without ZooKeeper, that setup is not
> yet recommended for
Hello,
I am currently using kafka 3.1.0 with java 1.8. I have set kafka log
retention policy in the server.properties like this:
log.retention.hours=6
log.retention.bytes=5368709120
log.segment.bytes=1073741824
log.retention.check.interval.ms=30
log.cleanup.policy=delete
However, it
Hi All,
We are facing an issue with our Kafka Streams application due to uneven
task allocation. There are 100 partitions in the input topic, with 100
stream threads processing the data. Everything works well when each task
is assigned one partition. But when more than one partition is
You are confusing the log4j API with the log4j jars.
Yes, to your question: you can take the log4j API and log4j core 2.17.1, and
applications will still believe they are using log4j 1...
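For anyone following along, the bridge setup being described is likely something like the following Maven dependencies — the versions are assumptions based on the 2.17.1 mentioned above:

```xml
<!-- log4j 1.x API bridge: lets code written against log4j 1 run on log4j 2 -->
<dependency>
  <groupId>org.apache.logging.log4j</groupId>
  <artifactId>log4j-1.2-api</artifactId>
  <version>2.17.1</version>
</dependency>
<dependency>
  <groupId>org.apache.logging.log4j</groupId>
  <artifactId>log4j-core</artifactId>
  <version>2.17.1</version>
</dependency>
```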
On Tuesday, February 1, 2022, Steve Souza wrote:
> Where I work we use both Tableau and Microstrategy which in turn
Hi Bruno,
Tried there as well
I’ve also looked at using Kafka Streams directly, but can’t find any good
examples.
Mvh,
Robin
From: Bruno Cadonna
Date: Wednesday, 2 February 2022 at 11:31
To: users@kafka.apache.org
Subject: Re: Reducing issue
Hi Robin,
since this seems to be a ksqlDB question, you will more likely get an
answer here:
https://forum.confluent.io/c/ksqldb
Best,
Bruno
On 01.02.22 10:03, Robin Helgelin wrote:
Hi,
I'm working on a small MVP and keep running into a dead end when it comes
to reducing data.
Began using