Neither getAll nor putAll is really designed for many thousands of 
reads/writes in a single operation. The whole operation (rather than individual 
rows) is atomic.

For writes, you can use the Affinity API to work out which node each record will 
be stored on and bulk push updates to the specific node (as the blog I posted 
previously suggests).
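
A rough sketch of what that per-node batching could look like (the cache name 
and key/value types here are made up for illustration):

    import java.util.Collection;
    import java.util.Map;
    import java.util.TreeMap;

    import org.apache.ignite.Ignite;
    import org.apache.ignite.IgniteCache;
    import org.apache.ignite.cache.affinity.Affinity;
    import org.apache.ignite.cluster.ClusterNode;

    public class AffinityBatchedWrites {
        // Group a large batch by primary node, then write each
        // node-local group separately instead of one giant putAll.
        public static void writeBatched(Ignite ignite, String cacheName,
                                        Map<Integer, String> batch) {
            IgniteCache<Integer, String> cache = ignite.cache(cacheName);
            Affinity<Integer> aff = ignite.affinity(cacheName);

            // Map each key to the node that will store it.
            Map<ClusterNode, Collection<Integer>> byNode =
                aff.mapKeysToNodes(batch.keySet());

            for (Collection<Integer> keys : byNode.values()) {
                // Each putAll now only touches a single primary node;
                // TreeMap keeps keys in a consistent order.
                Map<Integer, String> chunk = new TreeMap<>();
                for (Integer k : keys)
                    chunk.put(k, batch.get(k));

                cache.putAll(chunk);
            }
        }
    }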

For reads, I'd expect to see scan and SQL queries, or, better, colocated 
compute to avoid copying the data over the network.
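
For example, a colocated read could look something like this (cache name and 
types are again just placeholders): the job runs on the key's primary node and 
only the small result crosses the network.

    import org.apache.ignite.Ignite;
    import org.apache.ignite.IgniteCache;
    import org.apache.ignite.Ignition;
    import org.apache.ignite.lang.IgniteCallable;

    public class ColocatedRead {
        public static String readColocated(Ignite ignite, int key) {
            // Send the job to the primary node for 'key' rather than
            // pulling the value over the network with get/getAll.
            return ignite.compute().affinityCall("myCache", key,
                (IgniteCallable<String>) () -> {
                    // Inside the job, use the node-local Ignite instance;
                    // the entry for 'key' is stored on this node.
                    IgniteCache<Integer, String> cache =
                        Ignition.localIgnite().cache("myCache");
                    return cache.get(key);
                });
        }
    }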

Ignite does scale horizontally, but you have to use the right approach to get 
the best performance.

> On 29 Apr 2021, at 08:55, barttanghe <btan...@omp.com> wrote:
> 
> Stephen,
> 
> I was under the impression that if the cache is atomic (which is one of the
> cases jjimeno tried), there are no transactions involved and the putAll is in
> fact working on a row-by-row basis (some rows can fail).
> 
> So I don't understand what you mean by 'you're effectively creating a huge
> transaction'.
> Is this something internal to Ignite (so not user space)?
> Could you help me understand this?
> 
> Apart from that, what you explain is about putAll (the write case), but
> getAll (the read case) also seems to have already reached its limits at only
> 16 nodes. Any ideas on that one?
> 
> Thanks!
> 
> Bart
> 
> 
> 
> --
> Sent from: http://apache-ignite-users.70518.x6.nabble.com/

