Can the Flink Kafka producer dynamically discover new partitions after expansion?
Hi, I have a source Kafka and a sink Kafka. When the amount of data to process grows, I need to increase the Kafka topic's partition count, but I don't want to restart the job for the change to take effect. For the source Kafka, I set flink.partition-discovery.interval-millis, and the consumer does pick up and consume the new partitions after I expand the topic. But the sink Kafka does not behave this way: the Flink Kafka producer fetches the topic's partition list once and caches it in topicPartitionsMap, as shown in the class FlinkKafkaProducer.
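For reference, this is roughly how I enable partition discovery on the consumer side. The property key flink.partition-discovery.interval-millis is the one mentioned above; the 30-second interval is just an illustrative value, and the sketch only shows the Properties setup rather than a full job:

```java
import java.util.Properties;

public class DiscoveryConfig {
    public static Properties consumerProperties() {
        Properties props = new Properties();
        // Standard Kafka consumer settings (placeholder broker address).
        props.setProperty("bootstrap.servers", "localhost:9092");
        props.setProperty("group.id", "my-consumer-group");
        // Ask the FlinkKafkaConsumer to re-check the topic's partition
        // list every 30 seconds, so partitions added after job start
        // are picked up without a restart.
        props.setProperty("flink.partition-discovery.interval-millis", "30000");
        return props;
    }

    public static void main(String[] args) {
        Properties props = consumerProperties();
        System.out.println(props.getProperty("flink.partition-discovery.interval-millis"));
    }
}
```

These properties would then be passed to the FlinkKafkaConsumer constructor; the producer side ignores this key, which is exactly the asymmetry the question is about.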