Hello everyone,
I have a use case where a Flink application needs to produce to a variable 
number of Kafka topics (specified through configuration), potentially in 
different clusters, without being redeployed. Let's assume I maintain the set 
of destination clusters/topics in config files, and that my Flink app has code 
to detect and reload any changes to these files at runtime.
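
For the detect-and-reload part, I was picturing something like the JDK's 
java.nio.file.WatchService, running on a background thread started from an 
operator's open(). This is only a rough sketch; the file path and the onChange 
callback are placeholders I made up:

import java.nio.file.FileSystems;
import java.nio.file.Path;
import java.nio.file.StandardWatchEventKinds;
import java.nio.file.WatchEvent;
import java.nio.file.WatchKey;
import java.nio.file.WatchService;

public class ConfigWatcher implements Runnable {

    private final Path configFile;   // e.g. the destinations config file (placeholder)
    private final Runnable onChange; // callback that re-reads the file (placeholder)

    public ConfigWatcher(Path configFile, Runnable onChange) {
        this.configFile = configFile;
        this.onChange = onChange;
    }

    @Override
    public void run() {
        try (WatchService watcher = FileSystems.getDefault().newWatchService()) {
            // WatchService watches directories, not files, so register the parent dir.
            configFile.getParent().register(watcher, StandardWatchEventKinds.ENTRY_MODIFY);
            while (!Thread.currentThread().isInterrupted()) {
                WatchKey key = watcher.take(); // blocks until an event arrives
                for (WatchEvent<?> event : key.pollEvents()) {
                    // Only react to changes to the config file itself.
                    if (configFile.getFileName().equals(event.context())) {
                        onChange.run(); // reload destination clusters/topics
                    }
                }
                key.reset();
            }
        } catch (Exception e) {
            Thread.currentThread().interrupt();
        }
    }
}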
I have two questions:

   - Is this a sound/reasonable thing to do, or is it likely to be riddled 
with issues?

   - To implement it, should I write a custom SinkFunction that maintains a 
set of Kafka producers, or a custom SinkFunction that delegates the work to a 
collection of FlinkKafkaProducer instances? Is there a better approach? (I've 
included a rough sketch of the first option after this list.)
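
To make the first option concrete, here is a rough, untested sketch of what I 
have in mind. Everything here is an assumption on my part: the RoutedRecord 
envelope (with bootstrapServers/topic/payload fields) is a placeholder type, 
and it uses raw KafkaProducer instances rather than FlinkKafkaProducer, so it 
bypasses Flink's checkpointing and would not give exactly-once guarantees:

import java.util.HashMap;
import java.util.Map;
import java.util.Properties;

import org.apache.flink.configuration.Configuration;
import org.apache.flink.streaming.api.functions.sink.RichSinkFunction;
import org.apache.kafka.clients.producer.KafkaProducer;
import org.apache.kafka.clients.producer.ProducerRecord;
import org.apache.kafka.common.serialization.ByteArraySerializer;

public class RoutingKafkaSink extends RichSinkFunction<RoutingKafkaSink.RoutedRecord> {

    // One producer per destination cluster, created lazily on first use.
    private transient Map<String, KafkaProducer<byte[], byte[]>> producers;

    @Override
    public void open(Configuration parameters) {
        producers = new HashMap<>();
    }

    @Override
    public void invoke(RoutedRecord record, Context context) {
        KafkaProducer<byte[], byte[]> producer =
                producers.computeIfAbsent(record.bootstrapServers, servers -> {
                    Properties props = new Properties();
                    props.put("bootstrap.servers", servers);
                    props.put("key.serializer", ByteArraySerializer.class.getName());
                    props.put("value.serializer", ByteArraySerializer.class.getName());
                    return new KafkaProducer<>(props);
                });
        producer.send(new ProducerRecord<>(record.topic, record.payload));
    }

    @Override
    public void close() {
        if (producers != null) {
            producers.values().forEach(KafkaProducer::close);
        }
    }

    // Placeholder envelope type: an upstream operator would resolve the
    // current config and attach the destination to each element.
    public static class RoutedRecord {
        public String bootstrapServers;
        public String topic;
        public byte[] payload;
    }
}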
Thanks in advance.
Truly,
Ahmed
