Hi Kirti,
Kafka Sink supports sending messages with headers.
You can implement a HeaderProvider to extract headers from each input element:
KafkaSink&lt;String&gt; sink = KafkaSink.&lt;String&gt;builder()
        .setBootstrapServers(brokers)
        .setRecordSerializer(KafkaRecordSerializationSchema.&lt;String&gt;builder()
                .setTopic(topic)
                .setValueSerializationSchema(new SimpleStringSchema())
                .setHeaderProvider(myHeaderProvider) // your HeaderProvider implementation
                .build())
        .build();
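If it helps, here is a rough Flink-free sketch of what a HeaderProvider implementation boils down to: a function from an input element to a set of header key/value pairs. The ElementHeaderProvider interface and TenantEvent type below are made up for illustration; the real interface ships with the Kafka connector.

```java
import java.nio.charset.StandardCharsets;
import java.util.Map;

public class HeaderSketch {
    // Stand-in for the connector's HeaderProvider: maps an element to header pairs.
    interface ElementHeaderProvider<IN> {
        Map<String, byte[]> headersFor(IN element);
    }

    // Hypothetical event type carrying a tenant id we want to expose as a header.
    record TenantEvent(String tenantId, String payload) {}

    static final ElementHeaderProvider<TenantEvent> PROVIDER =
            e -> Map.of("tenant-id", e.tenantId().getBytes(StandardCharsets.UTF_8));

    public static void main(String[] args) {
        Map<String, byte[]> headers = PROVIDER.headersFor(new TenantEvent("acme", "hello"));
        // Prints the extracted header value for the element.
        System.out.println(new String(headers.get("tenant-id"), StandardCharsets.UTF_8));
    }
}
```

The real provider is wired in on the serialization schema builder instead of being called by hand, but the extraction logic is the same shape.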
Hi Mates,
I have the below queries regarding the Flink Kafka Sink:
1. Does Kafka Sink support schema registry? If yes, is there any
documentation on how to configure it?
2. Does Kafka Sink support sending messages (ProducerRecord) with headers?
Regards,
Kirti Dhar
Hi,
I have a Flink job that consumes from Kafka and sinks to an API. I need
to ensure that my Flink job stays within the rate limit of 200 TPS. We are
planning to increase the parallelism, but I do not know the right number to
set. Does a parallelism of 1 equal 1 consumer? So if 200, should we s
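One common pattern (a sketch, not something from this thread) is to divide the global TPS budget evenly across the sink subtasks and rate-limit each subtask locally. The class and method names below are illustrative:

```java
public class RateBudget {
    // Split a global TPS budget evenly across parallel sink subtasks.
    // With a 200 TPS budget and parallelism 4, each subtask may emit 50 TPS.
    static double perSubtaskTps(double globalTps, int parallelism) {
        if (parallelism <= 0) {
            throw new IllegalArgumentException("parallelism must be > 0");
        }
        return globalTps / parallelism;
    }

    public static void main(String[] args) {
        System.out.println(perSubtaskTps(200, 4)); // prints 50.0
    }
}
```

Inside the job you could then enforce the per-subtask rate with something like Guava's RateLimiter created in the sink's open() method, so the total across all subtasks stays at 200 TPS regardless of the parallelism you pick.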
Hi,
Why am I receiving already-acknowledged messages from PulsarSource?
When I start up the program below (*1), I receive the same messages from
PulsarSource every time.
On the other hand, programs using PulsarClient never receive acknowledged
messages. (*2)
How can I receive only unacknowledged m
When it comes to decoupling the state store from Flink, I suggest taking a
look at FlinkNDB, which is an experimental state backend for Flink that
puts the state into an external distributed database. There's a Flink
Forward talk [1] and a master's thesis [2] available.
[1] https://www.youtube.com
Hello Flink Community,
In our Flink SQL job we are experiencing undesirable behavior
related to event reordering (more below in the background section).
I have a few questions about the sink upsert materializer; the answers
should help me understand its capabilities:
1. Does the materi
Thanks Zakelly and Junrui.
I was actually exploring RocksDB as a state backend and thought maybe Redis
could offer more features as a state backend, e.g. state sharing
between operators, geo-redundancy of state, partitioning, etc. I understand these are
not native use cases for Flink, but m