[ https://issues.apache.org/jira/browse/KAFKA-9398?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17014520#comment-17014520 ]

ASF GitHub Bot commented on KAFKA-9398:
---------------------------------------

bbejeck commented on pull request #7814: KAFKA-9398: Interrupt StreamThread 
when close timeout reached and all threads aren't stopped
URL: https://github.com/apache/kafka/pull/7814
 
 
   
 
----------------------------------------------------------------
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


> Kafka Streams main thread may not exit even after close timeout has passed
> --------------------------------------------------------------------------
>
>                 Key: KAFKA-9398
>                 URL: https://issues.apache.org/jira/browse/KAFKA-9398
>             Project: Kafka
>          Issue Type: Improvement
>          Components: streams
>            Reporter: Bill Bejeck
>            Assignee: Bill Bejeck
>            Priority: Critical
>             Fix For: 2.5.0
>
>
> Kafka Streams offers the KafkaStreams.close() method for shutting down a 
> Kafka Streams application. There are two overloads of this method: one takes 
> no parameters, and the other takes a Duration specifying how long close() 
> should block waiting for Streams shutdown operations to complete. The no-arg 
> version of close() sets the timeout to Long.MAX_VALUE.
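> For illustration, a minimal sketch of calling the Duration overload (the 30-second value and the class name below are arbitrary examples, not taken from this issue):
> {code:java}
> import java.time.Duration;
> 
> import org.apache.kafka.streams.KafkaStreams;
> 
> public class CloseExample {
>     // Blocks until shutdown completes or the timeout expires; returns true
>     // only if all stream threads stopped within the timeout. The no-arg
>     // close() behaves the same way with a timeout of Long.MAX_VALUE ms.
>     static boolean shutdown(final KafkaStreams streams) {
>         return streams.close(Duration.ofSeconds(30));
>     }
> }
> {code}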
> The issue is that if a StreamThread is taking too long to complete, or if one 
> of the Consumer or Producer clients is in a hung state, the Kafka Streams 
> application won't exit even after the specified timeout has expired.
> For example, consider this scenario:
>  # A sink topic gets deleted by accident 
>  # The user sets Producer max.block.ms config to a high value
> In this case, the {{Producer}} will issue a WARN logging statement and will 
> continue making metadata requests looking for the expected topic until 
> max.block.ms expires. If this value is high enough, calling close() with a 
> timeout won't help: when the timeout expires, the Kafka Streams application's 
> main thread still won't exit. A configuration sketch reproducing this setup 
> follows below.
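> A minimal sketch of a Streams configuration matching the second condition (the application id, bootstrap server, and 10-minute value are hypothetical examples):
> {code:java}
> import java.util.Properties;
> 
> import org.apache.kafka.clients.producer.ProducerConfig;
> import org.apache.kafka.streams.StreamsConfig;
> 
> public class HighMaxBlockMsConfig {
>     static Properties streamsConfig() {
>         final Properties props = new Properties();
>         props.put(StreamsConfig.APPLICATION_ID_CONFIG, "example-app");        // hypothetical
>         props.put(StreamsConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092");  // hypothetical
>         // Raise max.block.ms on the embedded producer to 10 minutes; with the
>         // sink topic deleted, sends block on metadata lookups for up to this long.
>         props.put(StreamsConfig.producerPrefix(ProducerConfig.MAX_BLOCK_MS_CONFIG), 600000);
>         return props;
>     }
> }
> {code}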
> To prevent this type of issue, we should call Thread.interrupt() on all 
> StreamThread instances once the close() timeout has expired. 
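> A rough sketch of that idea, assuming a hypothetical helper that holds references to the stream threads (this illustrates the proposal only; it is not the actual KafkaStreams implementation, see PR #7814 for the real change):
> {code:java}
> import java.time.Duration;
> import java.util.List;
> 
> public class InterruptOnTimeoutSketch {
>     // Wait up to 'timeout' for the given threads to stop, then interrupt any
>     // that are still alive so blocked clients (e.g. a hung Producer) unblock.
>     static void closeOrInterrupt(final List<Thread> streamThreads, final Duration timeout)
>             throws InterruptedException {
>         final long deadline = System.currentTimeMillis() + timeout.toMillis();
>         for (final Thread thread : streamThreads) {
>             final long remaining = deadline - System.currentTimeMillis();
>             if (remaining > 0) {
>                 thread.join(remaining);  // bounded wait for this thread to finish
>             }
>         }
>         for (final Thread thread : streamThreads) {
>             if (thread.isAlive()) {
>                 thread.interrupt();      // timeout expired and the thread is still running
>             }
>         }
>     }
> }
> {code}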



--
This message was sent by Atlassian Jira
(v8.3.4#803005)
