[ https://issues.apache.org/jira/browse/KAFKA-16768?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=18007247#comment-18007247 ]

Chang-Yu Huang commented on KAFKA-16768:
----------------------------------------

Hi,

I would like to work on this issue. I have been analyzing the race condition 
between the {{Acceptor}} and {{Processor}} during shutdown.
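
To make the leak concrete, here is a minimal, self-contained sketch 
(simplified stand-ins, not the real {{SocketServer}} classes) that replays 
the execution order from the description below and ends with a channel that 
is queued but never closed:

{code:scala}
// Stand-alone sketch of the leaking interleaving; FakeChannel and
// LeakSketch are illustrative stand-ins, not Kafka classes.
import java.util.concurrent.ConcurrentLinkedQueue
import java.util.concurrent.atomic.AtomicBoolean

object LeakSketch {
  final case class FakeChannel(id: Int)   // stand-in for SocketChannel

  val shouldRun      = new AtomicBoolean(true)
  val newConnections = new ConcurrentLinkedQueue[FakeChannel]()

  def main(args: Array[String]): Unit = {
    val channel = FakeChannel(1)          // 1. Acceptor accepts a channel

    shouldRun.set(false)                  // 3. close() stops both threads
    while (!newConnections.isEmpty)       // 4-6. Processor drains the queue
      newConnections.poll()               //      and its thread terminates

    newConnections.offer(channel)         // 7. Processor#accept queues it anyway

    // 10. Shutdown completes; the channel is still queued and never closed.
    println(s"leaked channels: ${newConnections.size}")
  }
}
{code}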

My proposed solution is to make {{Processor#accept}} aware of the shutdown 
state: the method would return {{false}} when {{!shouldRun}}, and the calling 
{{Acceptor}} would then be responsible for closing the {{SocketChannel}} on a 
{{false}} return. This keeps a clear separation of concerns, since the 
{{Processor}} never manages the lifecycle of a channel it never queued.
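
A minimal sketch of that direction (simplified signatures; the real 
{{Processor#accept}} takes more parameters than shown here):

{code:scala}
// Sketch only: simplified signatures, not the actual SocketServer code.
import java.nio.channels.SocketChannel
import java.util.concurrent.ConcurrentLinkedQueue
import java.util.concurrent.atomic.AtomicBoolean

class Processor {
  private val shouldRun      = new AtomicBoolean(true)
  private val newConnections = new ConcurrentLinkedQueue[SocketChannel]()

  def beginShutdown(): Unit = shouldRun.set(false)

  // Decline new channels once shutdown has begun, so nothing can be
  // queued after closeAll() has drained newConnections.
  def accept(channel: SocketChannel): Boolean =
    if (!shouldRun.get) false             // caller keeps ownership
    else { newConnections.offer(channel); true }
}

class Acceptor(processor: Processor) {
  def assignNewConnection(channel: SocketChannel): Unit =
    // A false return means the channel was never queued, so the
    // Acceptor closes it here instead of leaking it.
    if (!processor.accept(channel)) channel.close()
}
{code}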

An alternative I considered was modifying {{Processor#close}} to call 
{{closeAll()}} unconditionally. However, the existing {{if (!started.get)}} 
check seems to exist specifically to handle a processor that failed to start, 
so I'm hesitant to remove it in case doing so has unintended side effects.
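
For clarity, this is the shape of the guard I mean (paraphrased from my 
reading, not the exact Kafka source):

{code:scala}
// Paraphrased shape of the existing guard (not the exact Kafka source),
// to show why making closeAll() unconditional would change behavior.
import java.util.concurrent.atomic.AtomicBoolean

class ProcessorShape {
  private val started = new AtomicBoolean(false)

  private def closeAll(): Unit = { /* close queued channels, selector, etc. */ }

  def close(): Unit = {
    if (!started.get) {
      // The processor never started, so run() will never reach closeAll();
      // clean up connections queued before startup here.
      closeAll()
    }
    // ... otherwise run() is expected to perform closeAll() on exit ...
  }
}
{code}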

I would appreciate your thoughts on this direction. If this approach seems 
reasonable, could you please assign this ticket to me?

> SocketServer leaks accepted SocketChannel instances due to race condition
> -------------------------------------------------------------------------
>
>                 Key: KAFKA-16768
>                 URL: https://issues.apache.org/jira/browse/KAFKA-16768
>             Project: Kafka
>          Issue Type: Bug
>          Components: core
>    Affects Versions: 3.8.0
>            Reporter: Greg Harris
>            Assignee: Chang-Yu Huang
>            Priority: Major
>              Labels: newbie
>
> The SocketServer has threads for Acceptors and Processors. These threads 
> communicate via Processor#accept/Processor#configureNewConnections and the 
> `newConnections` queue.
> During shutdown, the Acceptor and Processors are each stopped by setting 
> shouldRun to false, and then shutdown proceeds asynchronously in all 
> instances together. This leads to a race condition where an Acceptor accepts 
> a SocketChannel and queues it to a Processor, but that Processor instance has 
> already started shutting down and has already drained the newConnections 
> queue.
> KAFKA-16765 is an analogous bug in NioEchoServer, which uses a completely 
> different implementation but has the same flaw.
> An example execution order that includes this leak:
> 1. Acceptor#accept() is called, and a new SocketChannel is accepted.
> 2. Acceptor#assignNewConnection() begins
> 3. Acceptor#close() is called, which sets shouldRun to false in the Acceptor 
> and attached Processor instances
> 4. Processor#run() checks the shouldRun variable, and exits the loop
> 5. Processor#closeAll() executes, and drains the `newConnections` queue
> 6. Processor#run() returns and the Processor thread terminates
> 7. Acceptor#assignNewConnection() calls Processor#accept(), which adds the 
> SocketChannel to `newConnections`
> 8. Acceptor#assignNewConnection() returns
> 9. Acceptor#run() checks the shouldRun variable and exits the loop, and the 
> Acceptor thread terminates.
> 10. Acceptor#close() joins all of the terminated threads, and returns
> At the end of this sequence, there are still open SocketChannel instances in 
> newConnections, which are then considered leaked.


