Github user NicoK commented on a diff in the pull request:

    https://github.com/apache/flink/pull/4499#discussion_r140821090

    --- Diff: flink-runtime/src/main/java/org/apache/flink/runtime/io/network/partition/consumer/RemoteInputChannel.java ---
    @@ -183,18 +214,40 @@ void notifySubpartitionConsumed() {
     	}
     
     	/**
    -	 * Releases all received buffers and closes the partition request client.
    +	 * Releases all received and available buffers, closes the partition request client.
     	 */
     	@Override
     	void releaseAllResources() throws IOException {
     		if (isReleased.compareAndSet(false, true)) {
    +
    +			final List<MemorySegment> recyclingSegments = new ArrayList<>();
     			synchronized (receivedBuffers) {
     				Buffer buffer;
     				while ((buffer = receivedBuffers.poll()) != null) {
    -					buffer.recycle();
    +					if (buffer.getRecycler() == this) {
    +						recyclingSegments.add(buffer.getMemorySegment());
    --- End diff ---

    1) I think performance is not much of an issue here, as this is only called during take-down of a connection, and the overhead of `Buffer#recycle` is actually not that much.

    2) Sorry, but I don't get your concern. Why would you need an extra check when using `buffer.recycle()` instead of `exclusiveRecyclingSegments.add(buffer.getMemorySegment())`? There shouldn't be anything special about the exclusive buffers in this regard compared to ordinary buffers (which is the beauty of the design).

    Let me give the example of how `LocalBufferPool` handles this inside `lazyDestroy`: it returns every memory segment (one by one) with `networkBufferPool.recycle()` and, at the end, calls `networkBufferPool.destroyBufferPool()` so that the book-keeping inside the `NetworkBufferPool` is updated and buffers may be re-distributed to other `LocalBufferPool` instances. We could do this similarly here: recycle one by one, then call some method to update book-keeping and balancing inside `NetworkBufferPool`.
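    To make the suggested pattern concrete, here is a minimal, self-contained sketch of the "recycle one by one, then update the global pool's book-keeping" flow described above. The class and method names (`GlobalPool`, `LocalPool`, `lazyDestroy`, `destroyBufferPool`) only mirror Flink's `NetworkBufferPool`/`LocalBufferPool` API; this is a simplified illustrative model, not the actual Flink implementation.

    ```java
    import java.util.ArrayDeque;
    import java.util.ArrayList;
    import java.util.List;
    import java.util.Queue;

    // Simplified stand-in for NetworkBufferPool: receives segments back one by
    // one and is notified afterwards so it can update its book-keeping and
    // redistribute buffers among the remaining local pools.
    class GlobalPool {
        final Queue<Object> availableSegments = new ArrayDeque<>();
        int registeredLocalPools = 0;

        void recycle(Object segment) {
            availableSegments.add(segment);
        }

        void destroyBufferPool() {
            // Book-keeping update: one fewer local pool competes for buffers,
            // so the remaining pools may receive a larger share.
            registeredLocalPools--;
        }
    }

    // Simplified stand-in for LocalBufferPool.
    class LocalPool {
        private final GlobalPool globalPool;
        private final List<Object> segments = new ArrayList<>();

        LocalPool(GlobalPool globalPool, int numSegments) {
            this.globalPool = globalPool;
            globalPool.registeredLocalPools++;
            for (int i = 0; i < numSegments; i++) {
                segments.add(new Object());
            }
        }

        // Mirrors the lazyDestroy flow from the comment: return every segment
        // individually, then update the global book-keeping in one final call.
        void lazyDestroy() {
            for (Object segment : segments) {
                globalPool.recycle(segment);
            }
            segments.clear();
            globalPool.destroyBufferPool();
        }
    }

    public class RecycleThenRebalance {
        public static void main(String[] args) {
            GlobalPool global = new GlobalPool();
            LocalPool local = new LocalPool(global, 4);
            local.lazyDestroy();
            System.out.println(global.availableSegments.size()); // 4
            System.out.println(global.registeredLocalPools);     // 0
        }
    }
    ```

    The point of the split is that per-segment recycling stays uniform for all buffer types, while the single trailing call gives the global pool one well-defined moment to rebalance.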
---