This use case is about using Ignite for real-time sales analytics. The data
size is under 1M records, but the data state changes frequently; at the same
time, the query processor runs parallel financial aggregations and publishes
the results to a dashboard.
Months ago we tried a partitioned cache (v1.7, 4 nodes, with an affinity
key). It showed very good performance for data processing, but when
concurrent queries ran at the same time, the whole cluster became slow and
unstable.
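For reference, the partitioned setup described above would look roughly like this (a sketch against the v1.7-era Java API; the cache name `salesCache`, the key/value types, and the backup count are illustrative assumptions, not your actual config):

```java
import org.apache.ignite.cache.CacheMode;
import org.apache.ignite.configuration.CacheConfiguration;

// Hypothetical partitioned cache for the sales data.
CacheConfiguration<String, Object> cfg = new CacheConfiguration<>("salesCache");
cfg.setCacheMode(CacheMode.PARTITIONED);
cfg.setBackups(1); // one backup copy per partition for fault tolerance
// Collocation is done on the key side, e.g. by annotating a field of the
// key class with @AffinityKeyMapped so related entries land on one node.
```

With this mode, writes scale well across the 4 nodes, but aggregating queries have to fan out to every node, which matches the slowdown you saw under concurrent query load.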

To address this performance issue, we switched to a replicated cache and
implemented the dynamic queries through the REST API. This gave us much
better query performance, but data processing became the new bottleneck:
writes and their responses slowed down.
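The replicated variant, for comparison, differs only in the cache mode (again a sketch; `salesCache` and the `Trade` table in the query are assumed names):

```java
import org.apache.ignite.cache.CacheMode;
import org.apache.ignite.cache.query.SqlFieldsQuery;
import org.apache.ignite.configuration.CacheConfiguration;

// Hypothetical replicated cache: every node holds a full copy,
// so aggregations read locally and never fan out.
CacheConfiguration<String, Object> cfg = new CacheConfiguration<>("salesCache");
cfg.setCacheMode(CacheMode.REPLICATED);
// A dynamic aggregation like the one issued over REST would be, in the
// Java API: cache.query(new SqlFieldsQuery(
//     "select sum(amount) from Trade")).getAll();
```

The flip side is that every write must be propagated to all nodes, which explains why data processing became the bottleneck in this mode.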

So my question is: how do we balance reads against writes?

Should we separate them physically and chain them via a messaging solution
such as an Ignite streamer or Kafka?
What is the best practice for this scenario?
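The separation you describe is essentially a CQRS-style split: writers push updates into a queue (an Ignite data streamer or a Kafka topic), and a separate consumer applies them to the read-side cache, so queries never contend with ingest. A minimal plain-JDK sketch of the pattern (no Ignite or Kafka involved; all names are illustrative):

```java
import java.util.AbstractMap;
import java.util.Map;
import java.util.concurrent.BlockingQueue;
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.LinkedBlockingQueue;

// Write side enqueues; a single consumer applies updates to the read model,
// so readers never block on writers.
public class ReadWriteSplit {
    private final BlockingQueue<Map.Entry<String, Double>> updates =
        new LinkedBlockingQueue<>();
    private final ConcurrentHashMap<String, Double> readModel =
        new ConcurrentHashMap<>();

    // Write side: a cheap, non-blocking enqueue (stand-in for streamer.addData).
    public void write(String key, double value) {
        updates.add(new AbstractMap.SimpleEntry<>(key, value));
    }

    // Consumer side: drain pending updates into the read model. In a real
    // deployment this would run continuously on the ingest path.
    public void drain() {
        Map.Entry<String, Double> e;
        while ((e = updates.poll()) != null)
            readModel.put(e.getKey(), e.getValue());
    }

    // Read side: queries hit only the read model (stand-in for the
    // replicated cache your dashboard queries).
    public Double read(String key) {
        return readModel.get(key);
    }

    public static void main(String[] args) {
        ReadWriteSplit split = new ReadWriteSplit();
        split.write("AAPL", 101.5);
        split.write("AAPL", 102.0); // later write wins
        split.drain();
        System.out.println(split.read("AAPL")); // prints 102.0
    }
}
```

In Ignite terms, the write side could be an `IgniteDataStreamer` feeding a partitioned ingest cache, with a continuous query or Kafka consumer materializing aggregates into the replicated cache the dashboard reads; the cost is that reads become eventually consistent by the queue's lag.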




--
Sent from: http://apache-ignite-users.70518.x6.nabble.com/
