I am using a user-centric modeling approach where most of the computation
happens on a per-user basis (joins) before aggregation. The idea is to keep
the data (across different tables/caches) for the same user in the same
partition/server. That's the reason why I chose user-id as the affinity key.
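For what it's worth, a minimal sketch of that kind of key class (class and field
names are mine, not from your schema) would mark the user id with
@AffinityKeyMapped so all of a user's rows land on the same partition:

import java.util.Objects;

import org.apache.ignite.cache.affinity.AffinityKeyMapped;

public class UserEventKey {
    /** Unique id of the row itself. */
    private final long eventId;

    /** All keys with the same userId map to the same partition/node. */
    @AffinityKeyMapped
    private final long userId;

    public UserEventKey(long eventId, long userId) {
        this.eventId = eventId;
        this.userId = userId;
    }

    @Override public boolean equals(Object o) {
        if (this == o) return true;
        if (!(o instanceof UserEventKey)) return false;
        UserEventKey other = (UserEventKey)o;
        return eventId == other.eventId && userId == other.userId;
    }

    @Override public int hashCode() {
        return Objects.hash(eventId, userId);
    }
}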
Neither getAll nor putAll is really designed for many thousands of
reads/writes in a single operation. The whole operation (rather than individual
rows) is atomic.
For writes, you can use CacheAffinity to work out which node each record will
be stored on and bulk-push updates to the specific nodes.
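A rough sketch of that (cache name and value types are placeholders): group the
pending updates by the primary node that owns each key, then issue one putAll
per node rather than one huge batch.

import java.util.HashMap;
import java.util.Map;

import org.apache.ignite.Ignite;
import org.apache.ignite.IgniteCache;
import org.apache.ignite.cache.affinity.Affinity;
import org.apache.ignite.cluster.ClusterNode;

public class PerNodeBatchWriter {
    /** Groups updates by primary node and issues one putAll per node. */
    public static <K, V> void writeBatched(Ignite ignite, String cacheName, Map<K, V> updates) {
        IgniteCache<K, V> cache = ignite.cache(cacheName);
        Affinity<K> affinity = ignite.affinity(cacheName);

        // Work out which primary node owns each key.
        Map<ClusterNode, Map<K, V>> byNode = new HashMap<>();
        for (Map.Entry<K, V> e : updates.entrySet()) {
            ClusterNode primary = affinity.mapKeyToNode(e.getKey());
            byNode.computeIfAbsent(primary, n -> new HashMap<>()).put(e.getKey(), e.getValue());
        }

        // One moderately sized batch per node instead of a single huge putAll.
        for (Map<K, V> nodeBatch : byNode.values())
            cache.putAll(nodeBatch);
    }
}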
Stephen,
I was under the impression that if the cache is atomic (which is one of the
cases jjimeno tried), there are no transactions involved and that
putAll in fact works on a row-by-row basis (some rows can fail).
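If I remember right, an atomic putAll reports the rows that failed via
CachePartialUpdateException, so the failed keys can be collected and retried;
a rough sketch, with the cache name and key/value types invented for
illustration:

import java.util.HashMap;
import java.util.Map;

import org.apache.ignite.Ignite;
import org.apache.ignite.IgniteCache;
import org.apache.ignite.cache.CachePartialUpdateException;

public class RetryingPut {
    public static void putAllWithRetry(Ignite ignite, Map<Long, String> batch) {
        IgniteCache<Long, String> cache = ignite.cache("userCache");
        try {
            cache.putAll(batch);
        }
        catch (CachePartialUpdateException e) {
            // Only some rows failed; rebuild a map of just those keys and retry once.
            Map<Long, String> failed = new HashMap<>();
            for (Object k : e.failedKeys())
                failed.put((Long)k, batch.get(k));
            cache.putAll(failed);
        }
    }
}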
So I don't understand what you mean by 'you're effectively creating a
Hi,
Unfortunately, I don't fully understand the root of the problem.
It looks like the performance depends linearly on the row count:
10k ~ 0.1s
65k ~ 0.65s
650k ~ 7s
I see a linear dependency on the row count...
> Is Ignite doing the join and filtering at each data node and then sending
> the 650K to
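One way to check where the join and filtering actually run is to ask Ignite for
the query plan with EXPLAIN on the same statement; a sketch, with the table,
column, and cache names invented for illustration:

import java.util.List;

import org.apache.ignite.Ignite;
import org.apache.ignite.IgniteCache;
import org.apache.ignite.cache.query.SqlFieldsQuery;

public class ExplainJoin {
    public static void printPlan(Ignite ignite) {
        IgniteCache<?, ?> cache = ignite.cache("userCache");

        SqlFieldsQuery qry = new SqlFieldsQuery(
            "EXPLAIN SELECT u.id, count(e.id) " +
            "FROM Users u JOIN Events e ON u.id = e.userId " +
            "GROUP BY u.id");

        // With the user id as the affinity key the join should stay collocated
        // on each data node; with distributed joins enabled, rows get shipped.
        qry.setDistributedJoins(false);

        // The plan comes back as rows of text: one for the map phase per node,
        // one for the reduce phase.
        for (List<?> row : cache.query(qry).getAll())
            System.out.println(row.get(0));
    }
}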