Normally when storing data in Ignite using the default
RendezvousAffinityFunction, data is distributed reasonably evenly over the
available nodes. When increasing the cluster size (up to 24 nodes in my
case), I'm finding that the data is not so well distributed. Some nodes have
more than twice as much data as others.

The solution I found is to quadruple the number of cache partitions, to 8192.
This reduces the spread significantly, to about 25%.
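
In case it helps, this is roughly how I'm setting the partition count (Java
config; the cache name, key/value types and the rest of the configuration are
just placeholders):

import org.apache.ignite.cache.affinity.rendezvous.RendezvousAffinityFunction;
import org.apache.ignite.configuration.CacheConfiguration;
import org.apache.ignite.configuration.IgniteConfiguration;

public class PartitionConfigSketch {
    public static IgniteConfiguration config() {
        // Placeholder cache; only the affinity setting matters here.
        CacheConfiguration<Long, byte[]> cacheCfg =
            new CacheConfiguration<>("myCache");

        // RendezvousAffinityFunction(excludeNeighbors, partitions):
        // neighbour exclusion stays at its default (false) and the
        // partition count is set to 8192.
        cacheCfg.setAffinity(new RendezvousAffinityFunction(false, 8192));

        return new IgniteConfiguration().setCacheConfiguration(cacheCfg);
    }
}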

Is there any down-side to configuring a large number of partitions for a
cache? Specifically:
* Am I right in observing that it slows down partition map exchange? Is this
likely to cause any problems?
* When using cross-cache data affinity, do all participating caches need to
have the same number of partitions configured for this to work? (There's a
sketch of what I mean just after this list.)
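
To make the cross-cache question more concrete, this is the kind of
collocation I have in mind; the cache names, key class and fields are just
placeholders:

import org.apache.ignite.cache.affinity.AffinityKeyMapped;
import org.apache.ignite.cache.affinity.rendezvous.RendezvousAffinityFunction;
import org.apache.ignite.configuration.CacheConfiguration;

public class CollocationSketch {
    // Key type for the "orders" cache: each order should land on the same
    // node as its customer, so the partition is derived from customerId.
    public static class OrderKey {
        long orderId;

        @AffinityKeyMapped
        long customerId;
    }

    public static CacheConfiguration<?, ?>[] configs() {
        CacheConfiguration<Long, Object> customers =
            new CacheConfiguration<>("customers");
        customers.setAffinity(new RendezvousAffinityFunction(false, 8192));

        CacheConfiguration<OrderKey, Object> orders =
            new CacheConfiguration<>("orders");
        // The question: does this have to match the "customers" partition
        // count (8192) for collocation to work?
        orders.setAffinity(new RendezvousAffinityFunction(false, 8192));

        return new CacheConfiguration<?, ?>[] { customers, orders };
    }
}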

Is there any rule of thumb for choosing the best configuration? 1000 times
the number of nodes, perhaps?



