Hi, What you described sounds like an Event Processing architecture, which involves a never-ending stream of input data, a limited time window, analysis of the data within that window, and taking an action if necessary.
Ignite supports the Event Processing architecture with the following components:

- Data Streamers <https://apacheignite.readme.io/docs/data-streamers> - to stream endless data into Ignite. One of them is the Kafka streamer that you are already familiar with.
- Expiry Policies <https://apacheignite.readme.io/docs/expiry-policies> - to define a limited time window on the endless stream of data. Thus, answering your question "4": you do not need to remove data manually. Define the time window using an expiry policy and Ignite will take care of removing the data automatically.
- Continuous Queries <https://apacheignite.readme.io/docs/continuous-queries>:
  - Remote Filter <https://apacheignite.readme.io/docs/continuous-queries#section-remote-filter>: analyze events on the server side to decide whether you want to act on them.
  - Local Listener <https://apacheignite.readme.io/docs/continuous-queries#section-local-listener>: implement the action to be called if an event passes the remote filter.

Answering your question "3": you have to collocate the data. There are two data collocation APIs - using AffinityKey <https://www.gridgain.com/sdk/pe/latest/javadoc/org/apache/ignite/cache/affinity/AffinityKey.html> as the key type, or using @AffinityKeyMapped <https://www.gridgain.com/sdk/pe/latest/javadoc/org/apache/ignite/cache/affinity/AffinityKeyMapped.html> if you prefer the annotation (declarative) style. You want "GPS and Acceleration points that share the same measurementId and deviceId located on the same node". Thus, you could create a type MeasurementKey { deviceId, measurementId } and use it as the affinity key for both GPS and AccelerationPoint. See examples here <https://apacheignite.readme.io/docs/affinity-collocation#section-collocate-data-with-data>.
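To make the first part concrete, here is a minimal sketch that wires the expiry window and the continuous query together. The cache name, value type, 5-minute window, and the `> 100.0` threshold in the remote filter are all made-up placeholders for your own measurements and alert condition:

```java
import java.util.concurrent.TimeUnit;
import javax.cache.event.CacheEntryEvent;
import javax.cache.expiry.CreatedExpiryPolicy;
import javax.cache.expiry.Duration;
import org.apache.ignite.Ignite;
import org.apache.ignite.IgniteCache;
import org.apache.ignite.Ignition;
import org.apache.ignite.cache.CacheEntryEventSerializableFilter;
import org.apache.ignite.cache.query.ContinuousQuery;
import org.apache.ignite.configuration.CacheConfiguration;

public class EventProcessingSketch {
    public static void main(String[] args) {
        Ignite ignite = Ignition.start();

        // The expiry policy is the "time window": entries are dropped
        // automatically 5 minutes after creation (placeholder duration).
        CacheConfiguration<Long, Double> cfg =
            new CacheConfiguration<>("measurements");
        cfg.setExpiryPolicyFactory(
            CreatedExpiryPolicy.factoryOf(new Duration(TimeUnit.MINUTES, 5)));

        IgniteCache<Long, Double> cache = ignite.getOrCreateCache(cfg);

        ContinuousQuery<Long, Double> qry = new ContinuousQuery<>();

        // Remote filter: evaluated on the server nodes; only events it
        // accepts are sent over the network to the local listener.
        qry.setRemoteFilter(
            (CacheEntryEventSerializableFilter<Long, Double>) evt ->
                evt.getValue() > 100.0);

        // Local listener: your action for events that passed the filter.
        qry.setLocalListener(events -> {
            for (CacheEntryEvent<? extends Long, ? extends Double> evt : events)
                System.out.println("Alert: " + evt.getKey() + " -> " + evt.getValue());
        });

        cache.query(qry); // start listening; streamed data triggers the query
    }
}
```

The data streamer (e.g. the Kafka streamer) would then write into the "measurements" cache, and each insert flows through the remote filter automatically.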
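And a sketch of the collocation part with the annotation style. The class and field names below (GpsPointKey, pointId, etc.) are hypothetical; the point is that both key types carry the same MeasurementKey in the @AffinityKeyMapped field, so entries sharing a (deviceId, measurementId) pair land on the same node:

```java
import java.io.Serializable;
import java.util.Objects;
import org.apache.ignite.cache.affinity.AffinityKeyMapped;

// Shared affinity component: equals()/hashCode() over both fields are
// required, since the partition is derived from this object's hash.
class MeasurementKey implements Serializable {
    private final String deviceId;
    private final long measurementId;

    MeasurementKey(String deviceId, long measurementId) {
        this.deviceId = deviceId;
        this.measurementId = measurementId;
    }

    @Override public boolean equals(Object o) {
        if (this == o) return true;
        if (!(o instanceof MeasurementKey)) return false;
        MeasurementKey k = (MeasurementKey) o;
        return measurementId == k.measurementId
            && Objects.equals(deviceId, k.deviceId);
    }

    @Override public int hashCode() {
        return Objects.hash(deviceId, measurementId);
    }
}

// Hypothetical cache key for a GPS point. @AffinityKeyMapped tells Ignite
// to compute the partition from the measurement field only, not the whole key.
class GpsPointKey implements Serializable {
    private final long pointId;

    @AffinityKeyMapped
    private final MeasurementKey measurement;

    GpsPointKey(long pointId, MeasurementKey measurement) {
        this.pointId = pointId;
        this.measurement = measurement;
    }
}

// AccelerationPointKey would annotate the same MeasurementKey field,
// which is what collocates the two data sets on one node.
```

With this in place, a compute job or continuous-query remote filter running on a node sees both the GPS and acceleration points for a given device/measurement locally, with no network hops.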