*Issue:* CountDownLatch gets reinitialized to its original value (4) when one
or more (but not all) nodes go down. *(Partition loss happened.)*

We are using Ignite's distributed CountDownLatch to make sure that cache
loading has completed on all server nodes. We do this so that our Kafka
consumers start only after cache loading is complete on every server node.
This is the basic criterion that must be met before actual processing starts.


We have 4 server nodes, and the CountDownLatch is initialized to 4. We use
the cache.loadCache method to start cache loading. When each server node
completes cache loading, it reduces the count by 1 using the countDown
method, so when all nodes have finished, the count reaches zero. Once the
count reaches zero, we start Kafka consumers on all server nodes.
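For context, the pattern on each server node looks roughly like the sketch
below (names such as "cacheLoadLatch", "myCache", and startKafkaConsumers are
placeholders for illustration, not our actual code):

    import org.apache.ignite.Ignite;
    import org.apache.ignite.IgniteCache;
    import org.apache.ignite.IgniteCountDownLatch;
    import org.apache.ignite.Ignition;

    public class CacheLoadCoordinator {
        public static void main(String[] args) {
            Ignite ignite = Ignition.start();

            // Cluster-wide latch; initial count = number of server nodes (4).
            // autoDelete = false, create = true.
            IgniteCountDownLatch latch =
                ignite.countDownLatch("cacheLoadLatch", 4, false, true);

            // Load the cache on this node (blocks until local loading completes).
            IgniteCache<Long, Object> cache = ignite.cache("myCache");
            cache.loadCache(null);

            // Signal that this node has finished loading.
            latch.countDown();

            // Wait until all 4 nodes have counted down, then start consumers.
            latch.await();
            startKafkaConsumers();
        }

        private static void startKafkaConsumers() {
            // Placeholder: start the Kafka consumers on this node.
        }
    }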

But we saw weird behavior in our prod environment. Three of the server nodes
were shut down at the same time while one node stayed alive. When this
happened, the latch count was reinitialized to its original value, i.e. 4. I
am not able to reproduce this in our dev environment.

Is this a bug: when one or more (but not all) nodes go down, does the count
reinitialize back to its original value?

Thanks,
Akash
