This sounds strange; there should definitely be a cause for such behaviour.
Rebalancing happens only after a topology change (node join/leave,
deactivation/activation).
Could you please share logs from the node with the exception you mentioned in
your message, node with id
Thanks, Pavel, for the prompt response.
I can confirm that node "5423e6b5-c9be-4eb8-8f68-e643357ec2b3" (and no
other node in the cluster) went down, so I am not sure how stale data
cropped up on a few nodes. And this type of exception is coming from every
server node in the cluster:
What happens if
R
> [sys-#22846%a738c793-6e94-48cc-b6cf-d53ccab5f0fe%] {}
>
> org.apache.ignite.internal.processors.cache.distributed.dht.preloader.GridDhtPartitionSupplier
> - Failed to send partition supply message to node:
> 5423e6b5-c9be-4eb8-8f68-e643357ec2b3 class
> org.apache.ignite.IgniteCheckedException: Could not find start pointer for
> partition [part=9, partCntrSince=484857]
> at
> org.apache.ignite.internal.processors.cache.persistence.GridCacheOffheapManager.historicalIterator(GridCacheOffheapManager.java:792
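For context: "Could not find start pointer for partition" from historicalIterator usually means the supplier attempted a historical (WAL-based) rebalance, but the WAL segments covering update counter 484857 of partition 9 had already been rotated out of the retained history. A minimal sketch of retaining more checkpoint history, assuming the Ignite 2.x DataStorageConfiguration bean (the exact property and a suitable value depend on your version and workload, so treat this as an illustration, not a recommendation):

```xml
<!-- Spring XML fragment (sketch, not a full config): keep WAL history for
     more checkpoints so historical rebalance can find a start pointer. -->
<bean class="org.apache.ignite.configuration.DataStorageConfiguration">
    <!-- Number of checkpoints whose WAL history is retained for delta
         (historical) rebalancing; the default in Ignite 2.x is 20. -->
    <property name="walHistorySize" value="100"/>
</bean>
```

When the required history is unavailable, the partition has to be transferred via a full rebalance instead of a WAL delta.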