Hi,
I have a potential use case involving a collection of blocks of binary data across which I would like to perform parameterised compute operations. The data is not stored in an RDBMS, but in a flat-file oriented fashion. The blocks of data are spatial in origin (in that they are located within the world, but are essentially just labelled in a special way), so they are good candidates for a grid-computing type approach. I have compute logic that understands the information in the blocks and can operate on it to produce results.

However, the data set is too large to fit in the cache in its entirety, which means I can't do a one-time pre-population of memory and then start asking it questions. In practice this is fine, as the working set is typically much smaller than the overall data set.

To achieve my aim in Ignite I think I need to do three things:

1. Apply arbitrary parameterised compute logic (written in C#) against data held in the Ignite cache on each node in the cluster, using simple spatial node-affinity logic, and request that Ignite perform cluster-wide execution of that logic.
2. Have that compute logic detect when a block of data it needs to operate on is not in the cache, and load it at the time the compute operation is proceeding.
3. Protect data in the cache from eviction for the period the compute logic is operating on it.

The examples I have been seeing tend to lean towards an SQL style of interacting with the data in Ignite. Are there examples that fall into the use case I have outlined above?

Thanks,
Raymond.
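P.S. To make points 1 and 2 concrete, here is the kind of shape I have in mind, as a rough and untested Ignite.NET sketch. The names `BlockStore`, `ProcessBlockFunc`, `Analyse`, the `"blocks"` cache name, and the file-path layout are all hypothetical placeholders for my own logic; the Ignite.NET calls (`AffinityCall`, `CacheStoreAdapter`, `ReadThrough`, `[InstanceResource]`) are my best understanding of the API, so please correct me if I've misused them. I haven't found an obvious API for point 3 (pinning an entry against eviction while a job reads it), which is part of what I'm asking about.

```csharp
using System;
using Apache.Ignite.Core;
using Apache.Ignite.Core.Cache;
using Apache.Ignite.Core.Cache.Configuration;
using Apache.Ignite.Core.Cache.Store;
using Apache.Ignite.Core.Common;
using Apache.Ignite.Core.Compute;
using Apache.Ignite.Core.Resource;

// Point 2: a read-through store that loads a missing block from the
// flat-file storage when the compute logic asks the cache for it.
public class BlockStore : CacheStoreAdapter<Guid, byte[]>
{
    public override byte[] Load(Guid key)
    {
        // Hypothetical path layout: one file per block, named by its key.
        return System.IO.File.ReadAllBytes($"/data/blocks/{key}.bin");
    }

    public override void Write(Guid key, byte[] val) { /* read-only use */ }
    public override void Delete(Guid key) { /* read-only use */ }
}

public class BlockStoreFactory : IFactory<ICacheStore>
{
    public ICacheStore CreateInstance() => new BlockStore();
}

// Point 1: a parameterised closure executed on the node owning the block.
[Serializable]
public class ProcessBlockFunc : IComputeFunc<double>
{
    [InstanceResource] private IIgnite _ignite;  // injected by Ignite

    private readonly Guid _blockKey;
    private readonly double _parameter;

    public ProcessBlockFunc(Guid blockKey, double parameter)
    {
        _blockKey = blockKey;
        _parameter = parameter;
    }

    public double Invoke()
    {
        var cache = _ignite.GetCache<Guid, byte[]>("blocks");
        // Get() triggers the read-through store if the block was evicted.
        byte[] block = cache.Get(_blockKey);
        return Analyse(block, _parameter);
    }

    // Placeholder for my actual domain logic.
    private static double Analyse(byte[] block, double p) => 0.0;
}

public static class Program
{
    public static void Main()
    {
        IIgnite ignite = Ignition.Start();

        ignite.GetOrCreateCache<Guid, byte[]>(new CacheConfiguration("blocks")
        {
            ReadThrough = true,
            CacheStoreFactory = new BlockStoreFactory()
        });

        var blockKey = Guid.NewGuid();
        // AffinityCall routes the closure to the node that owns blockKey,
        // so the compute runs where the data (if cached) already lives.
        double result = ignite.GetCompute().AffinityCall("blocks", blockKey,
            new ProcessBlockFunc(blockKey, parameter: 0.5));
    }
}
```

Is this broadly the right pattern, or is there a more idiomatic way to combine affinity compute with on-demand loading?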