kirktrue commented on code in PR #14359:
URL: https://github.com/apache/kafka/pull/14359#discussion_r1323758616


##########
clients/src/main/java/org/apache/kafka/clients/consumer/internals/Fetcher.java:
##########
@@ -98,16 +115,59 @@ public void onFailure(RuntimeException e) {
         return fetchRequestMap.size();
     }
 
-    public void close(final Timer timer) {
-        if (!isClosed.compareAndSet(false, true)) {
-            log.info("Fetcher {} is already closed.", this);
-            return;
+    public Fetch<K, V> collectFetch() {
+        return fetchCollector.collectFetch(fetchBuffer);
+    }
+
+    protected void maybeCloseFetchSessions(final Timer timer) {
+        final List<RequestFuture<ClientResponse>> requestFutures = new ArrayList<>();
+        Map<Node, FetchSessionHandler.FetchRequestData> fetchRequestMap = prepareCloseFetchSessionRequests();
+
+        for (Map.Entry<Node, FetchSessionHandler.FetchRequestData> entry : fetchRequestMap.entrySet()) {
+            final Node fetchTarget = entry.getKey();
+            final FetchSessionHandler.FetchRequestData data = entry.getValue();
+            final FetchRequest.Builder request = createFetchRequest(fetchTarget, data);
+            final RequestFuture<ClientResponse> responseFuture = client.send(fetchTarget, request);
+
+            responseFuture.addListener(new RequestFutureListener<ClientResponse>() {
+                @Override
+                public void onSuccess(ClientResponse value) {
+                    handleCloseFetchSessionResponse(fetchTarget, data);
+                }
+
+                @Override
+                public void onFailure(RuntimeException e) {
+                    handleCloseFetchSessionResponse(fetchTarget, data, e);
+                }
+            });
+
+            requestFutures.add(responseFuture);
         }
 
+        // Poll to ensure that the requests have been written to the socket. Wait until either the timer
+        // has expired or all requests have received a response.
+        while (timer.notExpired() && !requestFutures.stream().allMatch(RequestFuture::isDone)) {
+            client.poll(timer, null, true);
+        }
+
+        if (!requestFutures.stream().allMatch(RequestFuture::isDone)) {
+            // We ran out of time before all futures completed. That's OK, since we don't want to block
+            // shutdown here.
+            log.debug("Not all requests could be completed within the specified timeout of {}ms. " +
+                    "This may result in unnecessary fetch sessions at the broker. Consider increasing the " +
+                    "timeout passed to KafkaConsumer.close(Duration timeout)", timer.timeoutMs());
+        }
+    }
+
+    @Override
+    protected void closeInternal(final Timer timer) {
         // Shared states (e.g. sessionHandlers) could be accessed by multiple threads (such as heartbeat thread), hence,
         // it is necessary to acquire a lock on the fetcher instance before modifying the states.
         synchronized (this) {
-            super.close(timer);
+            // we do not need to re-enable wakeups since we are closing already
+            client.disableWakeups();
+            maybeCloseFetchSessions(timer);
+            Utils.closeQuietly(decompressionBufferSupplier, "decompressionBufferSupplier");

Review Comment:
   I pushed a change to clean up how the resources in `AbstractFetch` and 
`Fetcher` are released in the `close()` methods. The unit and integration tests 
pass (locally) and it _makes_ sense to me, but please LMK if there's anything 
that doesn't sit right with the change.
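
   To make the intent of `maybeCloseFetchSessions` easier to review: it's a bounded, best-effort drain. We send the session-close requests, poll until every in-flight future is done or the timer expires, and then carry on with shutdown either way. Below is a minimal standalone sketch of that pattern; it uses `CompletableFuture` and `Duration` plus a hypothetical `awaitAll` helper in place of Kafka's internal `RequestFuture`, `Timer`, and network client, so it illustrates the control flow only, not the actual client API.

```java
import java.time.Duration;
import java.util.List;
import java.util.concurrent.CompletableFuture;

public class BoundedDrainSketch {

    // Poll until all in-flight futures complete or the deadline passes.
    // Returns true only if everything finished in time.
    static boolean awaitAll(List<? extends CompletableFuture<?>> futures, Duration timeout) {
        long deadlineNs = System.nanoTime() + timeout.toNanos();
        while (System.nanoTime() < deadlineNs
                && !futures.stream().allMatch(CompletableFuture::isDone)) {
            // In the Fetcher this step is client.poll(timer, null, true), which
            // drives network I/O; here we simply spin-wait as a stand-in.
            Thread.onSpinWait();
        }
        return futures.stream().allMatch(CompletableFuture::isDone);
    }

    public static void main(String[] args) {
        List<CompletableFuture<Void>> inFlight = List.of(
                CompletableFuture.runAsync(() -> { /* fast response */ }),
                CompletableFuture.runAsync(() -> {
                    try {
                        Thread.sleep(250); // simulate a slow broker response
                    } catch (InterruptedException e) {
                        Thread.currentThread().interrupt();
                    }
                }));

        if (!awaitAll(inFlight, Duration.ofMillis(100))) {
            // Mirrors the log.debug branch in the diff: we ran out of time, but
            // we don't block shutdown; the broker will eventually expire any
            // fetch sessions we failed to close.
            System.out.println("Timed out waiting for close requests; continuing shutdown.");
        }
    }
}
```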
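
   Relatedly, the cleanup in `closeInternal` leans on the close-quietly idiom, so one failing resource can't abort the rest of the shutdown sequence. Here's a sketch of that idiom, under the assumption that `Utils.closeQuietly` logs-and-swallows close failures; the helper below is illustrative, not Kafka's actual implementation.

```java
final class CloseQuietlySketch {

    // Close a resource, reporting rather than propagating any failure, so one
    // misbehaving resource cannot prevent the remaining ones from closing.
    static void closeQuietly(AutoCloseable closeable, String name) {
        if (closeable != null) {
            try {
                closeable.close();
            } catch (Exception e) {
                System.err.println("Failed to close " + name + ": " + e);
            }
        }
    }

    public static void main(String[] args) {
        AutoCloseable buggy = () -> { throw new IllegalStateException("boom"); };
        closeQuietly(buggy, "buggy");   // logs the failure, does not throw
        closeQuietly(null, "absent");   // null-safe: a no-op
        System.out.println("Shutdown continued past a failing close.");
    }
}
```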


