Hi Can,
acks is a producer setting only; setting it to all on the broker will have
no effect. The default acks for a producer is 1, which means that as long as
the partition leader acknowledges the write, it's successful. You have three
replicas; two downed brokers leave 1 replica (which becomes the le
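For completeness, a sketch of the producer-side setting being described (the min.insync.replicas pairing is a common companion setting, mentioned here as an assumption about the desired durability, not something stated in this thread):

```properties
# Producer configuration -- acks only has effect here, not in server.properties
acks=all
# Broker/topic side companion: pairing acks=all with min.insync.replicas
# makes writes fail fast once too few replicas remain, e.g.:
# min.insync.replicas=2
```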
Hello,
I'm doing some tests about behaviors of Kafka under faulty circumstances.
Here are my configs (one of 3 total, comments removed):
broker.id=0
listeners=PLAINTEXT://:9092
advertised.listeners=PLAINTEXT://10.0.8.233:9092
num.network.threads=3
num.io.threads=8
socket.send.buffer.bytes=102400
Hi Matthias,
I have configured the GlobalKTable to stream from a topic and the
application is working fine; however, during automated build test cases I
sometimes get an exception. I believe this could be because of a race
between actual topic creation and the service startup (since topic creation may n
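One way to sidestep that kind of race is to block at startup until the topic exists before building the topology. A minimal, hedged sketch of the waiting logic; the actual existence check (e.g. via AdminClient.listTopics()) is assumed and passed in as a supplier, not shown:

```java
import java.util.function.BooleanSupplier;

// Blocks until `condition` returns true or `timeoutMs` elapses, polling every
// `pollMs`. In practice `condition` would be a topic-existence check; the helper
// itself is deliberately generic so it can be unit-tested without a broker.
final class StartupGate {
    static boolean await(BooleanSupplier condition, long timeoutMs, long pollMs)
            throws InterruptedException {
        long deadline = System.currentTimeMillis() + timeoutMs;
        while (System.currentTimeMillis() < deadline) {
            if (condition.getAsBoolean()) {
                return true;
            }
            Thread.sleep(pollMs);
        }
        // One final check at the deadline before giving up.
        return condition.getAsBoolean();
    }
}
```

In a build test, calling this before starting the streams application should remove the dependence on topic-creation timing.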
Hi, I had read the docs and tried to Google it but could not find relevant
questions/answers. I am curious whether SinkContext is thread-safe. Say I want to
use the SinkContext to pause partitions from different threads; do I need to lock
on the object?
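Until the thread-safety contract is confirmed, the conservative approach is to serialize all access through one shared lock. A sketch under that assumption; PartitionContext below is a hypothetical stand-in for the real SinkContext, reduced to the one call that matters here:

```java
import java.util.List;
import java.util.concurrent.locks.ReentrantLock;

// Hypothetical stand-in for the real SinkContext; only pause() is modelled.
interface PartitionContext {
    void pause(List<String> partitions);
}

// Wrapper that serializes every call to the underlying context, so multiple
// threads can share it even if the context itself is not thread-safe.
class LockedContext implements PartitionContext {
    private final PartitionContext delegate;
    private final ReentrantLock lock = new ReentrantLock();

    LockedContext(PartitionContext delegate) {
        this.delegate = delegate;
    }

    @Override
    public void pause(List<String> partitions) {
        lock.lock();
        try {
            delegate.pause(partitions);
        } finally {
            lock.unlock();
        }
    }
}
```

If the context turns out to be documented as thread-safe, the wrapper can simply be dropped.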
Hi,
I saw that Apache Avro 1.7.3 introduced a notion of import, and also that
the error I am getting says that it cannot find
"some.package.SourceMetadata", either as a defined name, or as an explicit
definition inline.
However, I do want to make it a defined name, define it in a file, and
use it
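One layout that tools supporting imports (e.g. the avro-maven-plugin's imports configuration) generally accept is to define the named record once in its own .avsc file and refer to it elsewhere by its fullname. A sketch; the file names and fields are assumptions for illustration, only the fullname some.package.SourceMetadata comes from the error message:

```
--- SourceMetadata.avsc (hypothetical file; defines the named record once) ---
{
  "type": "record",
  "name": "SourceMetadata",
  "namespace": "some.package",
  "fields": [
    { "name": "source", "type": "string" }
  ]
}

--- Event.avsc (hypothetical file; must be parsed after the file above) ---
{
  "type": "record",
  "name": "Event",
  "namespace": "some.package",
  "fields": [
    { "name": "metadata", "type": "some.package.SourceMetadata" }
  ]
}
```

The "cannot find ... as a defined name" error typically means the referencing schema was parsed before the file that defines the name, so parse order (or the import configuration) is the thing to check.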
Hi Ricardo,
IIUC, your response assumes I'm talking about user events (such as ad
clicks) that get published on a topic by some external process. That
wasn't what I meant, though. What I want to track is actual Kafka broker
events, such as a consumer group subscribing to a topic.
Basically,
Joris,
I think the best strategy here depends on how fast you want to get
access to the user events. If latency matters, then just read the data
from the topic along with the other applications. Kafka follows the
/write-once-read-many-times/ pattern, which encourages developers to reuse
the da
Hello,
For auditing and tracking purposes, I'd like to be able to monitor user
consumer events, like topic subscriptions etc. The idea is to have the
individual events, not some events-per-second aggregation.
We are using the confluent-docker kafka image, for 5.2.2 (with some bespoke
auth injected), w