Thanks, Pavel, for the prompt response. I can confirm that node "5423e6b5-c9be-4eb8-8f68-e643357ec2b3" did not go down (and neither did any other node in the cluster), so I am not sure how stale data cropped up on a few nodes. This type of exception is coming from every server node in the cluster.
What happens if rebalancing does not complete properly because of this exception? Could it lead to data loss? Can this exception corrupt the data in the part*.bin files (in the persistent store) of the Ignite cache? Thanks,