I think this is an important feature to have soon.

My understanding of it:

By default the feature is off, and newly discovered nodes are
added/removed as usual. With a JMX-operable switch, one can disable
automatic rehashing:

If a remote node joins the JGroups view while rehash is off, it
will be added to a to-be-installed view, but that view won't be
installed until rehash is enabled again. This gives time to accumulate
more membership changes before starting the rehash, and would help a
lot when starting larger clusters.
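To make the idea concrete, here is a minimal sketch of the buffering behaviour, assuming a hypothetical PendingViewManager (the class and method names are illustrative, not existing Infinispan API): joiners are queued while rehash is disabled, and re-enabling the switch installs them all in a single view change, so only one rehash runs.

```java
import java.util.ArrayList;
import java.util.List;

// Sketch: buffer joining nodes while rehash is disabled; install the
// merged view (and thus trigger a single rehash) once it is re-enabled.
class PendingViewManager {
    private final List<String> installedView = new ArrayList<>();
    private final List<String> pendingJoiners = new ArrayList<>();
    private boolean rehashEnabled = true;

    public synchronized void nodeJoined(String address) {
        if (rehashEnabled) {
            installedView.add(address);   // normal path: install immediately
        } else {
            pendingJoiners.add(address);  // defer until the switch is flipped
        }
    }

    // The JMX-style switch: enabling rehash installs all buffered joiners
    // as one view change instead of one rehash per joiner.
    public synchronized void setRehashEnabled(boolean enabled) {
        this.rehashEnabled = enabled;
        if (enabled && !pendingJoiners.isEmpty()) {
            installedView.addAll(pendingJoiners);
            pendingJoiners.clear();
        }
    }

    public synchronized List<String> installedView() {
        return new ArrayList<>(installedView);
    }
}
```

So ten nodes started with the switch off would cost one rehash instead of ten, which is the whole point for large cluster startup.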

If the [self] node is booting and joining a cluster with manual rehash
off, the start process and any getCache() invocation should block and
wait for rehash to be enabled. This would, of course, need to override
the usually low timeouts.
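The blocking-start behaviour could look roughly like the sketch below (again hypothetical names, not real Infinispan API): getCache() waits on a latch that the JMX switch releases, with a deliberately long timeout replacing the usual short state-transfer one, since an operator may take minutes to flip the switch.

```java
import java.util.concurrent.CountDownLatch;
import java.util.concurrent.TimeUnit;

// Sketch: a booting node blocks until manual rehash is enabled,
// using a much longer timeout than the usual state-transfer timeout.
class BlockingCacheStarter {
    private final CountDownLatch rehashEnabled = new CountDownLatch(1);
    // Deliberately generous override of the usual low timeout.
    private static final long MANUAL_REHASH_TIMEOUT_MINUTES = 60;

    // Invoked via the JMX switch.
    public void enableRehash() {
        rehashEnabled.countDown();
    }

    // Called from the start process / getCache(): blocks until rehash
    // is enabled, or fails after the long timeout expires.
    public void awaitStart() throws InterruptedException {
        if (!rehashEnabled.await(MANUAL_REHASH_TIMEOUT_MINUTES, TimeUnit.MINUTES)) {
            throw new IllegalStateException(
                "Timed out waiting for manual rehash to be enabled");
        }
    }
}
```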

When a node is suspected it's a bit of a different story, as we need
to make sure no data is lost. The principle is the same, but maybe we
should have two flags: one which is a "soft request" to avoid rehashes
when fewer than N members are affected (and refuse N >= numOwners?),
and one which simply disables rehashing regardless: the data might be
in a cache store, or the data might not be important. Which reminds
me, we should also consider a JMX command to flush the data container
to the CacheLoader.
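A rough sketch of how the two flags might interact, under my reading of the proposal (all names hypothetical, and I'm assuming "less than N members" means fewer than N members lost since the last rehash): the soft threshold defers small leave-rehashes but refuses values at or above numOwners, since losing numOwners nodes without rehashing could leave keys with no live owner; the hard flag skips the rehash unconditionally.

```java
// Sketch of the two proposed leave-handling flags.
class LeaveRehashPolicy {
    private final int numOwners;
    private int softThreshold = 1;    // rehash once this many members are lost
    private boolean hardDisabled = false;

    public LeaveRehashPolicy(int numOwners) {
        this.numOwners = numOwners;
    }

    // Refuse thresholds >= numOwners: waiting that long could mean
    // every owner of some key is gone before we ever rehash.
    public void setSoftThreshold(int n) {
        if (n >= numOwners) {
            throw new IllegalArgumentException(
                "soft threshold must stay below numOwners=" + numOwners);
        }
        softThreshold = n;
    }

    public void setHardDisabled(boolean disabled) {
        hardDisabled = disabled;
    }

    public boolean shouldRehashOnLeave(int membersLostSinceLastRehash) {
        if (hardDisabled) {
            return false; // operator doesn't care: cache store, or disposable data
        }
        return membersLostSinceLastRehash >= softThreshold;
    }
}
```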

--Sanne
_______________________________________________
infinispan-dev mailing list
infinispan-dev@lists.jboss.org
https://lists.jboss.org/mailman/listinfo/infinispan-dev