[ 
https://issues.apache.org/jira/browse/FLINK-34007?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17808872#comment-17808872
 ] 

Matthias Pohl commented on FLINK-34007:
---------------------------------------

Increasing the thread pool size was necessary because we executed the run command in 
[KubernetesLeaderElector#run|https://github.com/apache/flink/blob/c5808b04fdce9ca0b705b6cc7a64666ab6426875/flink-kubernetes/src/main/java/org/apache/flink/kubernetes/kubeclient/resources/KubernetesLeaderElector.java#L103]
 on the same thread pool that the fabric8io LeaderElector uses for running its 
CompletableFutures in the loop:

The {{LeaderElector#run}} call waits for the CompletableFuture returned by 
{{LeaderElector#acquire}} to complete (see 
[LeaderElector:70|https://github.com/fabric8io/kubernetes-client/blob/0f6c696509935a6a86fdb4620caea023d8e680f1/kubernetes-client-api/src/main/java/io/fabric8/kubernetes/client/extended/leaderelection/LeaderElector.java#L70]).
 {{LeaderElector#acquire}} triggers an asynchronous call on the 
{{executorService}}, but that task can never be picked up because the pool's 
single thread is blocked waiting for the acquire call to complete. This 
deadlock is reproduced by Flink's {{KubernetesLeaderElectionITCase}}. I would 
imagine that this is the timeout [~gyfora] was experiencing when upgrading to 
v6.6.2.
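
To illustrate, here is a minimal, self-contained sketch of the same deadlock 
pattern in plain Java (the class and variable names are mine, not the fabric8io 
code):

{code:java}
import java.util.concurrent.CompletableFuture;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;

public final class SingleThreadDeadlockDemo {
    public static void main(String[] args) {
        ExecutorService executorService = Executors.newSingleThreadExecutor();

        // Analogous to KubernetesLeaderElector#run occupying the pool's only thread:
        CompletableFuture<Void> run = CompletableFuture.runAsync(() -> {
            // Analogous to LeaderElector#acquire scheduling follow-up work
            // on the very same pool:
            CompletableFuture<Void> acquire = CompletableFuture.runAsync(
                    () -> System.out.println("leadership acquired"),
                    executorService);
            // Deadlock: the pool's single thread is busy right here, so the
            // inner task is never picked up and join() never returns.
            acquire.join();
        }, executorService);

        run.join(); // hangs with a pool of 1; completes with a pool of >= 2
        executorService.shutdown();
    }
}
{code}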

The 3 threads introduced by FLINK-31997 shouldn't have caused any issues, 
because the LeaderElector only writes to the ConfigMap in 
[LeaderElector#tryAcquireOrRenew|https://github.com/fabric8io/kubernetes-client/blob/0f6c696509935a6a86fdb4620caea023d8e680f1/kubernetes-client-api/src/main/java/io/fabric8/kubernetes/client/extended/leaderelection/LeaderElector.java#L210]
 and 
[LeaderElector#release|https://github.com/fabric8io/kubernetes-client/blob/0f6c696509935a6a86fdb4620caea023d8e680f1/kubernetes-client-api/src/main/java/io/fabric8/kubernetes/client/extended/leaderelection/LeaderElector.java#L147].
 Both methods are synchronized, so as far as I can see they cannot race with 
each other in any way.
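
To spell the reasoning out: both are synchronized instance methods on the same 
object, so they lock the same monitor and can never interleave, no matter how 
many pool threads call into them. Schematically (illustrative names, not the 
fabric8io code):

{code:java}
// Two synchronized instance methods share one monitor (the instance itself),
// so at most one of them runs at a time, regardless of the pool size.
final class ElectorLike {
    private String leaderRecord;

    synchronized boolean tryAcquireOrRenew() {
        leaderRecord = "me"; // guarded by the monitor of `this`
        return true;
    }

    synchronized void release() {
        leaderRecord = null; // same monitor, so it can't interleave with the above
    }
}
{code}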

I created a [draft PR with a fix|https://github.com/apache/flink/pull/24132] 
that recreates the fabric8io {{LeaderElector}} instead of reusing it. The fix 
also reverts the thread pool size from 3 back to 1, with 
{{KubernetesLeaderElectionITCase}} passing again. I still have to think of a 
way to test Flink's KubernetesLeaderElector against the scenario that caused 
the failure reported in this Jira. The only approach I could think of is doing 
something similar to what's done in the [fabric8io 
codebase|https://github.com/fabric8io/kubernetes-client/blob/0f6c696509935a6a86fdb4620caea023d8e680f1/kubernetes-client-api/src/test/java/io/fabric8/kubernetes/client/extended/leaderelection/LeaderElectorTest.java#L63]
 with Mockito, but I would like to avoid Mockito. Any other ideas are 
appreciated.
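
For reference, the general shape of the change is roughly the following (a 
hand-written sketch against fabric8io's builder API as I understand it, not the 
actual PR code; the class and method names here are made up):

{code:java}
import io.fabric8.kubernetes.client.KubernetesClient;
import io.fabric8.kubernetes.client.extended.leaderelection.LeaderElectionConfig;
import io.fabric8.kubernetes.client.extended.leaderelection.LeaderElector;

final class RecreatingLeaderElectorDriver {
    private final KubernetesClient kubernetesClient;
    private final LeaderElectionConfig leaderElectionConfig;

    RecreatingLeaderElectorDriver(
            KubernetesClient kubernetesClient,
            LeaderElectionConfig leaderElectionConfig) {
        this.kubernetesClient = kubernetesClient;
        this.leaderElectionConfig = leaderElectionConfig;
    }

    /**
     * Builds a fresh fabric8io LeaderElector for each election attempt
     * instead of reusing a single instance across leadership losses.
     */
    void runSingleElectionCycle() {
        LeaderElector leaderElector =
                kubernetesClient.leaderElector().withConfig(leaderElectionConfig).build();
        leaderElector.run(); // blocks until leadership is lost or released
    }
}
{code}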

> Flink Job stuck in suspend state after losing leadership in HA Mode
> -------------------------------------------------------------------
>
>                 Key: FLINK-34007
>                 URL: https://issues.apache.org/jira/browse/FLINK-34007
>             Project: Flink
>          Issue Type: Bug
>          Components: Runtime / Coordination
>    Affects Versions: 1.19.0, 1.18.1, 1.18.2
>            Reporter: Zhenqiu Huang
>            Priority: Blocker
>              Labels: pull-request-available
>         Attachments: Debug.log, LeaderElector-Debug.json, job-manager.log
>
>
> The observation is that the JobManager goes into suspended state, with a failed 
> container unable to register itself with the ResourceManager after the timeout.
> JM Log, see attached
>  



--
This message was sent by Atlassian Jira
(v8.20.10#820010)
