Hello!

It was disabled not just because of potential data loss, but because the
cache was resurrected on such a start and could break the cluster.

Creating a cache per operation and destroying it afterwards is an
anti-pattern; it can cause all sorts of issues and is better avoided.
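
A safer pattern is to create one long-lived cache up front and clear its
contents between operations, so the cache itself (and its persistence
directories) never go away. A minimal sketch, assuming Java and a
hypothetical cache name "tempBCK":

import org.apache.ignite.Ignite;
import org.apache.ignite.IgniteCache;
import org.apache.ignite.Ignition;

public class ReusableScratchCache {
    public static void main(String[] args) {
        try (Ignite ignite = Ignition.start()) {
            // Create (or reuse) the scratch cache once, at startup,
            // instead of creating and destroying a cache per operation.
            IgniteCache<String, byte[]> scratch =
                ignite.getOrCreateCache("tempBCK");

            // Per-operation data goes through the same cache instance,
            // e.g. keyed by an operation-specific prefix (hypothetical).
            scratch.put("op-1/part-0", new byte[] {1, 2, 3});

            // Remove the data afterwards; the cache and its on-disk
            // directories stay in place, so restarting nodes rejoin
            // cleanly.
            scratch.clear();
        }
    }
}

Note that clear() removes all entries cluster-wide; if several operations
share the cache, remove each operation's own keys with removeAll(keys)
instead.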

Regards,
-- 
Ilya Kasnacheev


Thu, 4 Jun 2020 at 01:26, xero <mpro...@gmail.com>:

> Hi,
> I tried your suggestion of using a NodeFilter, but it is not solving this
> issue. Using a NodeFilter by consistent id to create the cache on only
> one node still creates persistence information on every node:
>
> In the node for which the filter is true (directory size 75MB):
>
> work/db/node01-0da087c4-c11a-47ce-ad53-0380f0d2c51a/cache-tempBCK0-cd982aa5-c27f-4582-8a3b-b34c5c60a49c/
>
> In the node for which the filter is false (directory size 8k):
>
> work/db/node01-0da087c4-c11a-47ce-ad53-0380f0d2c51a/cache-tempBCK0-cd982aa5-c27f-4582-8a3b-b34c5c60a49c/
>
> If the cache is destroyed while *any* of these nodes is down, that node
> won't join the cluster again, throwing the exception:
>
> Caused by: class org.apache.ignite.spi.IgniteSpiException: Joining node
> has caches with data which are not presented on cluster, it could mean
> that they were already destroyed, to add the node to cluster - remove
> directories with the
> caches[tempBCK0-cd982aa5-c27f-4582-8a3b-b34c5c60a49c]
>