GitHub user MichaelKoch11 added a comment to the discussion: Define Kafka 
Broker for storage of temporary data between multiple data processors

Thank you for your answer. So it is not yet possible to run a pipeline 
completely on an edge node.
Have I understood correctly that data reading, processing, and the sink service 
all run inside the respective extension service? 
The functions would therefore have to be named differently to tell them apart 
in the UI, or the pipeline configuration could be derived from the data source 
used in the respective extension service.

Only with the next release will it be possible to specify, for adapters, the 
extension service in which the microservice for reading is started. For 
functions (processors) and sink services, this does not yet seem to be 
supported.

Maybe I'll have to take a closer look at it myself and first customize it for 
my use case with just a pub-sub store.
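
To make concrete what I mean by a pub-sub store between stages, here is a 
minimal sketch of my own (not StreamPipes code): it uses the plain Kafka 
client to hand an event from one processing stage to the next via a local 
broker. The broker address, topic name, and group id are placeholders I chose 
for illustration.

```java
import java.time.Duration;
import java.util.List;
import java.util.Properties;

import org.apache.kafka.clients.consumer.ConsumerRecord;
import org.apache.kafka.clients.consumer.ConsumerRecords;
import org.apache.kafka.clients.consumer.KafkaConsumer;
import org.apache.kafka.clients.producer.KafkaProducer;
import org.apache.kafka.clients.producer.ProducerRecord;
import org.apache.kafka.common.serialization.StringDeserializer;
import org.apache.kafka.common.serialization.StringSerializer;

public class EdgePubSubSketch {

    // Hypothetical local broker on the edge node and intermediate topic.
    private static final String BROKER = "localhost:9092";
    private static final String TOPIC = "intermediate-events";

    public static void main(String[] args) {
        // Stage 1: a "processor" publishes its output to the intermediate topic.
        Properties producerProps = new Properties();
        producerProps.put("bootstrap.servers", BROKER);
        producerProps.put("key.serializer", StringSerializer.class.getName());
        producerProps.put("value.serializer", StringSerializer.class.getName());

        try (KafkaProducer<String, String> producer = new KafkaProducer<>(producerProps)) {
            producer.send(new ProducerRecord<>(TOPIC, "sensor-1", "{\"temperature\": 21.5}"));
        }

        // Stage 2: the next "processor" (or a sink) consumes from the same topic.
        Properties consumerProps = new Properties();
        consumerProps.put("bootstrap.servers", BROKER);
        consumerProps.put("group.id", "edge-stage-2");
        consumerProps.put("auto.offset.reset", "earliest");
        consumerProps.put("key.deserializer", StringDeserializer.class.getName());
        consumerProps.put("value.deserializer", StringDeserializer.class.getName());

        try (KafkaConsumer<String, String> consumer = new KafkaConsumer<>(consumerProps)) {
            consumer.subscribe(List.of(TOPIC));
            // A single poll may come back empty while the group rebalances;
            // a real stage would poll in a loop.
            ConsumerRecords<String, String> records = consumer.poll(Duration.ofSeconds(5));
            for (ConsumerRecord<String, String> record : records) {
                System.out.printf("received %s -> %s%n", record.key(), record.value());
            }
        }
    }
}
```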


GitHub link: 
https://github.com/apache/streampipes/discussions/2960#discussioncomment-9933948

