[
https://issues.apache.org/jira/browse/CURATOR-644?focusedWorklogId=790235&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-790235
]
ASF GitHub Bot logged work on CURATOR-644:
------------------------------------------
Author: ASF GitHub Bot
Created on: 13/Jul/22 02:37
Start Date: 13/Jul/22 02:37
Worklog Time Spent: 10m
Work Description: tisonkun commented on code in PR #430:
URL: https://github.com/apache/curator/pull/430#discussion_r919596776
##########
curator-recipes/src/main/java/org/apache/curator/framework/recipes/leader/LeaderLatch.java:
##########
@@ -667,9 +667,9 @@ protected void handleStateChange(ConnectionState newState)
{
try
{
-                    if ( client.getConnectionStateErrorPolicy().isErrorState(ConnectionState.SUSPENDED) || !hasLeadership.get() )
Review Comment:
`getChildren` does nothing if we still hold leadership. And actually, even if
`hasLeadership` is true, we can say nothing if we are coming back from a
connection error.
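For reference, here is a minimal sketch of one possible reading of that comment: on RECONNECTED, always re-verify leadership via `getChildren` instead of trusting the cached `hasLeadership` flag, and only drop leadership on SUSPENDED when the error policy says so. This is not the actual LeaderLatch code; the class, the `latchPath` field, and the `checkLeadership` stub here are illustrative stand-ins.

```java
import java.util.List;
import java.util.concurrent.atomic.AtomicBoolean;
import java.util.concurrent.atomic.AtomicReference;

import org.apache.curator.framework.CuratorFramework;
import org.apache.curator.framework.state.ConnectionState;
import org.apache.curator.utils.ZKPaths;

class ReconnectHandlingSketch
{
    private final CuratorFramework client;
    private final String latchPath;
    private final AtomicBoolean hasLeadership = new AtomicBoolean(false);
    // Set when the latch creates its ephemeral node (not shown in this sketch).
    private final AtomicReference<String> ourPath = new AtomicReference<>();

    ReconnectHandlingSketch(CuratorFramework client, String latchPath)
    {
        this.client = client;
        this.latchPath = latchPath;
    }

    void handleStateChange(ConnectionState newState)
    {
        switch ( newState )
        {
            case SUSPENDED:
                // Drop leadership eagerly only if the configured error policy
                // treats SUSPENDED as an error state.
                if ( client.getConnectionStateErrorPolicy().isErrorState(ConnectionState.SUSPENDED) )
                {
                    hasLeadership.set(false);
                }
                break;

            case RECONNECTED:
                // Re-read the latch children unconditionally: a locally cached
                // hasLeadership == true proves nothing after a connection error,
                // because our ephemeral node may have been removed on the server.
                try
                {
                    client.getChildren()
                          .inBackground((c, event) -> checkLeadership(event.getChildren()))
                          .forPath(ZKPaths.makePath(latchPath, null));
                }
                catch ( Exception e )
                {
                    hasLeadership.set(false);
                }
                break;

            default:
                break;
        }
    }

    private void checkLeadership(List<String> latchNodes)
    {
        // Stand-in for the real recipe's sorted-children / index lookup.
        String localOurPath = ourPath.get();
        hasLeadership.set(localOurPath != null && latchNodes.contains(ZKPaths.getNodeFromPath(localOurPath)));
    }
}
```

The unconditional re-check on RECONNECTED trades one extra `getChildren` round trip for not trusting a cached flag across a connection error.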
Issue Time Tracking
-------------------
Worklog Id: (was: 790235)
Time Spent: 0.5h (was: 20m)
> CLONE - Race conditions in LeaderLatch after reconnecting to ensemble
> ---------------------------------------------------------------------
>
> Key: CURATOR-644
> URL: https://issues.apache.org/jira/browse/CURATOR-644
> Project: Apache Curator
> Issue Type: Bug
> Affects Versions: 4.2.0
> Reporter: Ken Huang
> Assignee: Jordan Zimmerman
> Priority: Minor
> Time Spent: 0.5h
> Remaining Estimate: 0h
>
> Clone from CURATOR-504.
> We use LeaderLatch in a lot of places in our system, and when the ZooKeeper
> ensemble is unstable and clients are reconnecting, the logs are full of
> messages like the following:
> {{[2017-08-31 19:18:34,562][ERROR][org.apache.curator.framework.recipes.leader.LeaderLatch] Can't find our node. Resetting. Index: -1}}
> According to the
> [implementation|https://github.com/apache/curator/blob/4251fe328908e5fca37af034fabc190aa452c73f/curator-recipes/src/main/java/org/apache/curator/framework/recipes/leader/LeaderLatch.java#L529-L536],
> this can happen in two cases (sketched just below):
> * When the internal state `ourPath` is null
> * When the list of latches does not contain the expected node.
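> As a simplified sketch of the linked check (helper and field names are abbreviated here; see the link above for the actual code), the index lookup behaves roughly like this:
> {code:java}
> import java.util.List;
>
> import org.apache.curator.utils.ZKPaths;
>
> final class LatchIndexSketch
> {
>     // Returns the index of our latch node among the sorted children:
>     // -1 when ourPath is null (case 1) or when the node is missing from
>     // the list (case 2); either way the recipe logs
>     // "Can't find our node. Resetting." and calls reset().
>     static int ourIndex(String ourPath, List<String> sortedChildren)
>     {
>         return (ourPath != null)
>                 ? sortedChildren.indexOf(ZKPaths.getNodeFromPath(ourPath))
>                 : -1;
>     }
> }
> {code}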
> I believe we hit the first condition because of races that occur after the
> client reconnects to ZooKeeper.
> * The client reconnects to ZooKeeper, LeaderLatch gets the event and calls the
> reset method, which sets the internal state (`ourPath`) to null, removes the
> old latch and creates a new one. This happens in the thread
> "Curator-ConnectionStateManager-0".
> * Almost simultaneously, LeaderLatch gets another event, NodeDeleted
> ([here|https://github.com/apache/curator/blob/4251fe328908e5fca37af034fabc190aa452c73f/curator-recipes/src/main/java/org/apache/curator/framework/recipes/leader/LeaderLatch.java#L543-L554])
> and tries to re-read the list of latches and check leadership. This happens
> in the thread "main-EventThread".
> Therefore, there are situations in which the method `checkLeadership` is
> called while `ourPath` is null.
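> Purely as an illustration (no Curator involved), the interleaving can be sketched with two plain threads, one modelling the reset on reconnect and the other the NodeDeleted callback:
> {code:java}
> import java.util.Collections;
> import java.util.concurrent.atomic.AtomicReference;
>
> public class LatchRaceSketch
> {
>     // Models LeaderLatch's ourPath; starts out pointing at our latch node.
>     static final AtomicReference<String> ourPath = new AtomicReference<>("_c_000000001");
>
>     public static void main(String[] args) throws InterruptedException
>     {
>         // Models "Curator-ConnectionStateManager-0": reset() nulls ourPath.
>         Thread resetThread = new Thread(() -> ourPath.set(null));
>
>         // Models "main-EventThread": checkLeadership() reads ourPath concurrently.
>         Thread checkThread = new Thread(() -> {
>             String localOurPath = ourPath.get();   // may already be null
>             int ourIndex = (localOurPath != null)
>                     ? Collections.singletonList("_c_000000001").indexOf(localOurPath)
>                     : -1;
>             if ( ourIndex < 0 )
>             {
>                 System.out.println("Can't find our node. Resetting. Index: " + ourIndex);
>             }
>         });
>
>         resetThread.start();
>         checkThread.start();
>         resetThread.join();
>         checkThread.join();
>     }
> }
> {code}
> Depending on which thread wins, the program either stays silent or prints the same error that floods the logs above.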
--
This message was sent by Atlassian Jira
(v8.20.10#820010)