As a follow-up to this, we have produced tooling which allows us to detect
and correct the problem. We are not entirely comfortable running control.sh
on production nodes (because, well, it's production :) ).
We have observed dozens of cases of this kind of corruption on two separate
Ignite grids.
Hello, Raymond.
Usually, "experimental" means a feature that may change in the future.
This statement usually relates to the public API of the feature.
> Does this imply risk if run against a production environment grid?
It depends.
As for Read Repair, CHECK_ONLY is a read-only mode and can’t harm your data.
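For illustration, a hedged sketch of how CHECK_ONLY can be applied through the Java API (the thread implies this is not yet exposed in the C# client). The cache name and key below are hypothetical, and the sketch assumes ignite-core 2.14+ on the classpath and a running cluster:

```java
import org.apache.ignite.Ignite;
import org.apache.ignite.IgniteCache;
import org.apache.ignite.Ignition;
import org.apache.ignite.cache.ReadRepairStrategy;

public class ConsistencyCheck {
    public static void main(String[] args) {
        // Join an existing cluster (configuration omitted for brevity).
        try (Ignite ignite = Ignition.start()) {
            IgniteCache<Integer, String> cache = ignite.cache("myReplicatedCache");

            // CHECK_ONLY compares the value across all owner nodes and
            // reports inconsistencies without writing anything back.
            String value = cache.withReadRepair(ReadRepairStrategy.CHECK_ONLY)
                                .get(42);
        }
    }
}
```

Other strategies (e.g. LWW) will actively repair the divergent copies, so CHECK_ONLY is the safe choice for a first pass on production data.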
Thanks for the pointer to the Read Repair facility added in Ignite 2.14.
Unfortunately the .WithReadRepair() extension does not seem to be present
in the Ignite C# client.
This means we either need to use the experimental control.sh support, or
improve our tooling to effectively do the same.
Hello.
I don’t know the cause of your issue.
But we have a feature to overcome it [1].
Consistency repair can be run from control.sh.
```
./bin/control.sh --enable-experimental
...
[EXPERIMENTAL]
Check/Repair cache consistency using Read Repair approach:
    control.(sh|bat) --consistency
```
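As a sketch only: the flag names below are assumptions and may differ between Ignite releases (check the `--consistency` help output for your version), and the cache name and partition list are hypothetical:

```shell
# Enable experimental commands, then run a consistency check in
# CHECK_ONLY mode so nothing is written back to the cluster.
./bin/control.sh --enable-experimental \
    --consistency repair \
    --cache my-replicated-cache \
    --partitions 0,1,2 \
    --strategy CHECK_ONLY
```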
[Replying on the correct thread]
As a follow-up to this email, we are starting to collect evidence that
replicated caches within our Ignite grid are failing to replicate values in
a small number of cases.
In the cases we have observed so far, with a cluster of 4 nodes participating in
a replicated
Hi,
I have a query regarding the data safety of replicated caches in the case of
a hard failure of the compute resource, where the storage resource is still
available when the node returns.
We are using Ignite 2.15 with the C# client.
We have a number of these caches that have four nodes participating