[ https://issues.apache.org/jira/browse/CURATOR-644?focusedWorklogId=804387&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-804387 ]
ASF GitHub Bot logged work on CURATOR-644:
------------------------------------------

                Author: ASF GitHub Bot
            Created on: 29/Aug/22 13:52
            Start Date: 29/Aug/22 13:52
    Worklog Time Spent: 10m
      Work Description: XComp commented on PR #430:
URL: https://github.com/apache/curator/pull/430#issuecomment-1230339944

   > they're logically resolved simultaneously. That is, if you resolve CURATOR-644, you resolve CURATOR-645 - they're the same sort.
   >
   > In another word, you can check out the diff and tell me how to split it up into two PRs.

   I was thinking about it once more. CURATOR-645 could be covered separately, in my opinion. CURATOR-645 was identified in FLINK-27078, where we run almost no logic before revoking the leadership by calling `LeaderLatch#close`. That caused the current leader's `LeaderLatch` instance to trigger the deletion of its child node while other `LeaderLatch` instances were still in the middle of setting up the watcher on their child node's predecessor. Hence, I see CURATOR-645 as not that tightly related to the reconnect issue covered in CURATOR-644; CURATOR-645 just needs to be resolved before CURATOR-644 can be resolved. Anyway, the changes are, in the end, not so big that we couldn't resolve both in the same PR. ¯\_(ツ)_/¯

Issue Time Tracking
-------------------

    Worklog Id: (was: 804387)
    Time Spent: 1h 40m (was: 1.5h)

> CLONE - Race conditions in LeaderLatch after reconnecting to ensemble
> ---------------------------------------------------------------------
>
>                 Key: CURATOR-644
>                 URL: https://issues.apache.org/jira/browse/CURATOR-644
>             Project: Apache Curator
>          Issue Type: Bug
>    Affects Versions: 4.2.0
>            Reporter: Ken Huang
>            Assignee: Jordan Zimmerman
>            Priority: Minor
>          Time Spent: 1h 40m
>  Remaining Estimate: 0h
>
> Clone from CURATOR-504.
> We use LeaderLatch in a lot of places in our system, and when the ZooKeeper
> ensemble is unstable and clients are reconnecting, the logs are full of
> messages like the following:
> [2017-08-31 19:18:34,562][ERROR][org.apache.curator.framework.recipes.leader.LeaderLatch] Can't find our node. Resetting. Index: -1
> According to the
> [implementation|https://github.com/apache/curator/blob/4251fe328908e5fca37af034fabc190aa452c73f/curator-recipes/src/main/java/org/apache/curator/framework/recipes/leader/LeaderLatch.java#L529-L536],
> this can happen in two cases:
> * When the internal state `ourPath` is null
> * When the list of latches does not contain the expected one.
> I believe we hit the first condition because of races that occur after the client
> reconnects to ZooKeeper:
> * The client reconnects to ZooKeeper, and LeaderLatch gets the event and calls
> the reset method, which sets the internal state (`ourPath`) to null, removes the old
> latch, and creates a new one. This happens in the thread
> "Curator-ConnectionStateManager-0".
> * Almost simultaneously, LeaderLatch gets another event, NodeDeleted
> ([here|https://github.com/apache/curator/blob/4251fe328908e5fca37af034fabc190aa452c73f/curator-recipes/src/main/java/org/apache/curator/framework/recipes/leader/LeaderLatch.java#L543-L554]),
> and tries to re-read the list of latches and check leadership. This happens
> in the thread "main-EventThread".
> Therefore, there is sometimes a situation where the `checkLeadership` method is
> called while `ourPath` is null.
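
To make the interleaving concrete, here is a minimal sketch of the two code paths involved. The class name and the `reset`/`checkLeadership`/`ourPath` identifiers mirror the LeaderLatch internals linked above, but the bodies are simplified for illustration and are not the actual Curator code.

```java
import java.util.List;
import java.util.concurrent.atomic.AtomicReference;

/**
 * Simplified sketch of the race described in CURATOR-644; not the real
 * LeaderLatch implementation.
 */
public class LeaderLatchRaceSketch {

    // Path of the latch znode this instance created for the current election round.
    private final AtomicReference<String> ourPath = new AtomicReference<>();

    // Runs on "Curator-ConnectionStateManager-0" after a reconnect: the old latch
    // node is dropped and a new one is created, leaving a window where ourPath is null.
    void reset() {
        ourPath.set(null);                    // <-- window in which ourPath is null
        // deleteOldNode(); createNewNode();  // omitted in this sketch
        ourPath.set("/leader/_c_example-latch-0000000042");
    }

    // Runs on "main-EventThread" when a NodeDeleted event for the predecessor fires.
    // If it observes the null window above, it logs the "Can't find our node" error.
    void checkLeadership(List<String> children) {
        String localOurPath = ourPath.get();
        int ourIndex = (localOurPath != null) ? children.indexOf(nodeName(localOurPath)) : -1;
        if (ourIndex < 0) {
            System.err.println("Can't find our node. Resetting. Index: " + ourIndex);
            reset();
        }
        // else: index 0 -> become leader, otherwise watch the predecessor at ourIndex - 1
    }

    private static String nodeName(String path) {
        return path.substring(path.lastIndexOf('/') + 1);
    }
}
```

If `checkLeadership` happens to run inside the null window left by `reset`, it takes the `ourIndex < 0` branch and emits exactly the error shown in the issue description.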