>Hi,
Hi, this really looks very strange.
First of all, you need to check the consistency of your data: [1]
> Some time ago an element (E) was added to this cache (among many others)
And at that time everything was OK there? Are you sure that this element was
properly stored?
What kind of cache are you talking about? How was the data populated there?
What API is used to "load element E" on each node?
Since you are talking about restarts, I assume that you are dealing with a
persistent store, aren't you? Is it native Ignite persistence or a 3rd-party
DB?
Thanks!
[1]
https://ignite.apache.org/docs/latest/tools/control-script#verifying-partition-checksums
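
As a concrete starting point, the partition-checksum check from the link above
can be run with the control script. This is a sketch, assuming a standard
Ignite installation where IGNITE_HOME points at the install directory on one
of the server nodes:

```shell
# Sketch, assuming $IGNITE_HOME is set on one of the server nodes.
# idle_verify compares partition checksums across all nodes owning a
# partition; for a REPLICATED cache every node owns every partition, so a
# diverged copy of element E on P1/P2/P3 should surface as a conflict.
# Run it while the cluster is idle, otherwise the results are unreliable.
$IGNITE_HOME/bin/control.sh --cache idle_verify
```

If idle_verify reports conflicting partitions, that would confirm the copies
on P1/P2/P3 really diverged from P0 rather than a query-routing problem.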
>
>We have been triaging an odd issue we encountered in a system using Ignite
>v2.15 and the C# client.
>
>We have a replicated cache across four nodes, let's call them P0, P1, P2 & P3.
>Because the cache is replicated, every item added to the cache is present in
>each of P0, P1, P2 and P3.
>
>Some time ago an element (E) was added to this cache (among many others). A
>number of system restarts have occurred since that time.
>
>We started observing an issue where a query running across P0/P1/P2/P3 as a
>cluster compute operation needed to load element E on each of the nodes to
>perform that query. Node P0 succeeded, while nodes P1, P2 & P3 all reported
>that element E did not exist.
>
>This situation persisted until the cluster was restarted, after which the same
>query that had been failing now succeeded as all four 'P' nodes were able to
>read element E.
>
>There were no Ignite errors reported in the context of these failing queries
>to indicate unhappiness in the Ignite nodes.
>
>This seems like very strange behaviour. Are there any suggestions as to what
>could be causing this failure to read the replicated value on the three
>failing nodes, especially as the element 'came back' after a cluster restart?
>
>Thanks,
>Raymond.
>
>
>
> --
>
>Raymond Wilson
>Trimble Distinguished Engineer, Civil Construction Software (CCS)
>11 Birmingham Drive | Christchurch, New Zealand
>[email protected]
>
>