[ https://issues.apache.org/jira/browse/KAFKA-10793?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17241987#comment-17241987 ]

A. Sophie Blee-Goldman commented on KAFKA-10793:
------------------------------------------------

So that's our best guess. We should discuss the solution on the PR, but I'll 
lay out the two possibilities I see here in case anyone has any better ideas.

1) synchronize joinGroupIfNeeded()
2) clear the _findCoordinatorFuture_ when handling the result, rather than in 
the listener callbacks; for example, in the main loop of ensureCoordinatorReady()

Personally I think option 2 is better, since it makes a lot more sense to me to 
begin with. By clearing the future in the listener callbacks, we might clear it 
before we ever even get to check on the result, e.g. the exception if it failed. 
We actually seem to have already anticipated this particular problem and 
recently implemented a workaround by adding an extra listener which saves the 
exception to a _findCoordinatorException_ field on the class. If we just wait to 
clear the future, then presumably we could remove this workaround as well and 
just save the exception when we check "if (future.failed())" inside 
lookupCoordinator().
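
To make option 2 concrete, here is a rough, self-contained sketch of the 
pattern I have in mind. Note this is just a toy model with stand-in names, not 
the actual AbstractCoordinator/ConsumerNetworkClient code, so the real change 
would look somewhat different:

{code:java}
// Simplified, self-contained model of option 2 -- NOT the real AbstractCoordinator code.
// The point: the caller that handles the result is the one that clears the in-flight
// future, so the future can never be cleared before its result has been inspected.
public class FindCoordinatorModel {

    /** Minimal stand-in for the client's RequestFuture. */
    static final class RequestFuture<T> {
        private volatile boolean done;
        private volatile RuntimeException exception;

        void complete()                { done = true; }
        void raise(RuntimeException e) { exception = e; done = true; }
        boolean isDone()               { return done; }
        boolean failed()               { return exception != null; }
        RuntimeException exception()   { return exception; }
    }

    private RequestFuture<Void> findCoordinatorFuture;
    private RuntimeException findCoordinatorException; // no extra listener needed to populate this

    // Only starts a new lookup if none is in flight; never clears the future itself.
    synchronized RequestFuture<Void> lookupCoordinator() {
        if (findCoordinatorFuture == null)
            findCoordinatorFuture = sendFindCoordinatorRequest();
        return findCoordinatorFuture;
    }

    // "Main loop" analogue of ensureCoordinatorReady(): poll, inspect the result, THEN clear.
    synchronized boolean ensureCoordinatorReady() {
        RequestFuture<Void> future = lookupCoordinator();
        // ... in the real consumer, client.poll(future, timer) would block here ...
        if (!future.isDone())
            return false;                                  // timed out; keep the future for the next attempt
        if (future.failed())
            findCoordinatorException = future.exception(); // saved right where we check failed()
        findCoordinatorFuture = null;                      // cleared exactly once, by the result handler
        return !future.failed();
    }

    private RequestFuture<Void> sendFindCoordinatorRequest() {
        return new RequestFuture<>(); // placeholder; the real code sends a FindCoordinatorRequest
    }
}
{code}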

All that said, I'm not intimately familiar with the technical details of the 
ConsumerNetworkClient and how it handles its RequestFutures, so it's possible 
I'm missing something important.

> Race condition in FindCoordinatorFuture permanently severs connection to 
> group coordinator
> ------------------------------------------------------------------------------------------
>
>                 Key: KAFKA-10793
>                 URL: https://issues.apache.org/jira/browse/KAFKA-10793
>             Project: Kafka
>          Issue Type: Bug
>          Components: consumer, streams
>    Affects Versions: 2.5.0
>            Reporter: A. Sophie Blee-Goldman
>            Priority: Critical
>
> Pretty much as soon as we started actively monitoring the 
> _last-rebalance-seconds-ago_ metric in our Kafka Streams test environment, we 
> started seeing something weird. Every so often one of the StreamThreads (ie a 
> single Consumer instance) would appear to permanently fall out of the group, 
> as evidenced by a monotonically increasing _last-rebalance-seconds-ago._ We 
> inject artificial network failures every few hours at most, so the group 
> rebalances quite often. But the one consumer never rejoins, with no other 
> symptoms (besides a slight drop in throughput since the remaining threads had 
> to take over this member's work). We're confident that the problem exists in 
> the client layer, since the logs confirmed that the unhealthy consumer was 
> still calling poll. It was also calling Consumer#committed in its main poll 
> loop, which was consistently failing with a TimeoutException.
> When I attached a remote debugger to an instance experiencing this issue, the 
> network client's connection to the group coordinator (the one that uses 
> MAX_VALUE - node.id as the coordinator id) was in the DISCONNECTED state. But 
> for some reason it never tried to re-establish this connection, although it 
> did successfully connect to that same broker through the "normal" connection 
> (ie the one that just uses node.id).
> The tl;dr is that the AbstractCoordinator's FindCoordinatorRequest has failed 
> (presumably due to a disconnect), but the _findCoordinatorFuture_ is non-null 
> so a new request is never sent. This shouldn't be possible since the 
> FindCoordinatorResponseHandler is supposed to clear the 
> _findCoordinatorFuture_ when the future is completed. But somehow that didn't 
> happen, so the consumer continues to assume there's still a FindCoordinator 
> request in flight and never even notices that it's dropped out of the group.
> These are the only confirmed findings so far; however, we have some guesses 
> which I'll leave in the comments. Note that we only noticed this due to the 
> newly added _last-rebalance-seconds-ago_ metric, and there's no reason to 
> believe this bug hasn't been flying under the radar since the Consumer's 
> inception.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)
