[ 
https://issues.apache.org/jira/browse/CURATOR-644?focusedWorklogId=807738&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-807738
 ]

ASF GitHub Bot logged work on CURATOR-644:
------------------------------------------

                Author: ASF GitHub Bot
            Created on: 11/Sep/22 16:17
            Start Date: 11/Sep/22 16:17
    Worklog Time Spent: 10m 
      Work Description: tisonkun commented on code in PR #430:
URL: https://github.com/apache/curator/pull/430#discussion_r967853023


##########
curator-recipes/src/main/java/org/apache/curator/framework/recipes/leader/LeaderLatch.java:
##########
@@ -667,9 +670,9 @@ protected void handleStateChange(ConnectionState newState)
             {
                 try
                 {
-                    if ( client.getConnectionStateErrorPolicy().isErrorState(ConnectionState.SUSPENDED) || !hasLeadership.get() )

Review Comment:
   If you take a look at FLINK-10052, the final solution is to use a 
SessionErrorPolicy that skips this `if` block, since a ConnectionLoss may 
only mean the network is unstable, not that the node has lost its leadership 
(the ephemeral node). Before this patch, `reset` would be called and the latch 
would actively give up leadership, which causes a re-election, increases ZK 
workload, and can lead to further inconsistency.
   
   The thorough solution would be something like what I proposed in 
https://github.com/apache/flink/pull/9878, but I failed to contribute it 
upstream (FLINK-10052 took more than 2 years to be merged. It's not a good 
experience to me, lol). We have been running with this solution at Tencent for 
years and it works well :)
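
   For context, a minimal sketch of how a session-based error policy can be 
configured on the client; the connect string and retry policy below are 
placeholders and are not part of this PR:

```java
import org.apache.curator.framework.CuratorFramework;
import org.apache.curator.framework.CuratorFrameworkFactory;
import org.apache.curator.framework.state.SessionConnectionStateErrorPolicy;
import org.apache.curator.retry.ExponentialBackoffRetry;

public class SessionErrorPolicyExample
{
    public static void main(String[] args)
    {
        // With SessionConnectionStateErrorPolicy, only LOST (session expiry) counts as
        // an error state, so a transient SUSPENDED does not make the latch give up
        // leadership and trigger a re-election.
        CuratorFramework client = CuratorFrameworkFactory.builder()
                .connectString("zk1:2181,zk2:2181,zk3:2181")   // placeholder ensemble
                .retryPolicy(new ExponentialBackoffRetry(1000, 3))
                .connectionStateErrorPolicy(new SessionConnectionStateErrorPolicy())
                .build();
        client.start();
    }
}
```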





Issue Time Tracking
-------------------

    Worklog Id:     (was: 807738)
    Time Spent: 3.5h  (was: 3h 20m)

> CLONE - Race conditions in LeaderLatch after reconnecting to ensemble
> ---------------------------------------------------------------------
>
>                 Key: CURATOR-644
>                 URL: https://issues.apache.org/jira/browse/CURATOR-644
>             Project: Apache Curator
>          Issue Type: Bug
>    Affects Versions: 4.2.0
>            Reporter: Ken Huang
>            Assignee: Jordan Zimmerman
>            Priority: Minor
>          Time Spent: 3.5h
>  Remaining Estimate: 0h
>
> Clone from CURATOR-504.
> We use LeaderLatch in a lot of places in our system, and when the ZooKeeper 
> ensemble is unstable and clients are reconnecting, the logs are full of 
> messages like the following:
> {{[2017-08-31 
> 19:18:34,562][ERROR][org.apache.curator.framework.recipes.leader.LeaderLatch] 
> Can't find our node. Resetting. Index: -1}}
> According to the 
> [implementation|https://github.com/apache/curator/blob/4251fe328908e5fca37af034fabc190aa452c73f/curator-recipes/src/main/java/org/apache/curator/framework/recipes/leader/LeaderLatch.java#L529-L536],
>  this can happen in two cases:
>  * When the internal state `ourPath` is null.
>  * When the list of latches does not have the expected one.
> I believe we hit the first condition because of races that occur after the 
> client reconnects to ZooKeeper.
>  * The client reconnects to ZooKeeper, and LeaderLatch gets the event and 
> calls the reset method, which sets the internal state (`ourPath`) to null, 
> removes the old latch, and creates a new one. This happens in the thread 
> "Curator-ConnectionStateManager-0".
>  * Almost simultaneously, LeaderLatch gets another event, NodeDeleted 
> ([here|https://github.com/apache/curator/blob/4251fe328908e5fca37af034fabc190aa452c73f/curator-recipes/src/main/java/org/apache/curator/framework/recipes/leader/LeaderLatch.java#L543-L554]),
>  and tries to re-read the list of latches and check leadership. This happens 
> in the thread "main-EventThread".
> Therefore, sometimes the method `checkLeadership` is called while `ourPath` 
> is null.
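
A simplified, illustrative model of the interleaving described above (this is 
not the actual LeaderLatch code; the class below is made up for demonstration, 
only the field and method names mirror the report):

```java
import java.util.concurrent.atomic.AtomicReference;

public class LeaderLatchRaceSketch
{
    // In the report, ourPath is the latch's internal state that reset() clears.
    private final AtomicReference<String> ourPath = new AtomicReference<>("/leader/_c_0000000001");

    // Runs on Curator-ConnectionStateManager-0 after the client reconnects.
    void reset()
    {
        ourPath.set(null);   // internal state cleared before the new latch node exists
        // deleting the old latch node and creating a new one is omitted here
    }

    // Runs on main-EventThread after a NodeDeleted event.
    void checkLeadership()
    {
        String path = ourPath.get();
        if ( path == null )
        {
            // The window the report describes:
            // "Can't find our node. Resetting. Index: -1"
            System.err.println("Can't find our node. Resetting.");
        }
    }

    public static void main(String[] args) throws InterruptedException
    {
        LeaderLatchRaceSketch latch = new LeaderLatchRaceSketch();
        Thread connectionThread = new Thread(latch::reset, "Curator-ConnectionStateManager-0");
        Thread eventThread = new Thread(latch::checkLeadership, "main-EventThread");
        connectionThread.start();
        eventThread.start();
        connectionThread.join();
        eventThread.join();
    }
}
```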



--
This message was sent by Atlassian Jira
(v8.20.10#820010)
