GitHub user huishougongming edited a comment on the discussion: Distributed 
cluster support

Thank you for the clarification.
Here is the situation: we currently have a Kafka cluster with 20 nodes, and 
each topic has 10 or more partitions. There are three issues:
1. How can multiple host names be configured for the adapter and sink?
2. When the StreamPipes service restarts, it always resumes consuming from 
the latest offset (this also applies to the MQTT adapter), so any data 
produced during the downtime is lost. How can this be resolved?
3. Data received through the adapter enters a topic that is automatically 
created in the local Kafka, and is then output through the sink. What is the 
logic for creating this topic in local Kafka? For example, how can the number 
of partitions be set?
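For context on what is being asked: in plain Kafka client and broker configuration (the exact StreamPipes adapter/sink fields may differ), multiple brokers are normally given as a comma-separated `bootstrap.servers` list, resuming after a restart is governed by the consumer's committed offsets under a stable `group.id` (with `auto.offset.reset` as the fallback), and the partition count of auto-created topics comes from the broker's `num.partitions` default. A sketch with hypothetical host names and values:

```properties
# Client side: several brokers, comma-separated (hypothetical hosts)
bootstrap.servers=kafka-1:9092,kafka-2:9092,kafka-3:9092

# Consumer side: with a fixed group.id, committed offsets are resumed after a
# restart; auto.offset.reset only applies when no committed offset exists
group.id=streampipes-consumer
auto.offset.reset=earliest

# Broker side (server.properties): defaults applied to auto-created topics
auto.create.topics.enable=true
num.partitions=10
default.replication.factor=2
```

For the MQTT adapter, avoiding loss across restarts would analogously require a persistent session (clean session disabled) and QoS 1 or 2 on both client and broker, assuming the broker retains queued messages for the session.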

GitHub link: 
https://github.com/apache/streampipes/discussions/2912#discussioncomment-9814114

----
This is an automatically sent email for [email protected].
To unsubscribe, please send an email to: [email protected]
