chia7712 commented on code in PR #20847:
URL: https://github.com/apache/kafka/pull/20847#discussion_r2579361408


##########
clients/src/main/java/org/apache/kafka/common/utils/BufferSupplier.java:
##########
@@ -57,7 +57,7 @@ public static BufferSupplier create() {
      * Return the provided buffer to be reused by a subsequent call to `get`.
      */
     public abstract void release(ByteBuffer buffer);
-
+    

Review Comment:
   could you remove this change?



##########
clients/src/test/java/org/apache/kafka/common/record/BufferSupplierTest.java:
##########
@@ -44,5 +44,4 @@ public void testGrowableBuffer() {
         assertEquals(2048, increased.capacity());
         assertEquals(0, increased.position());
     }
-

Review Comment:
   ditto



##########
coordinator-common/src/main/java/org/apache/kafka/coordinator/common/runtime/CoordinatorRuntime.java:
##########
@@ -772,13 +779,17 @@ private void freeCurrentBatch() {
             // Cancel the linger timeout.
             currentBatch.lingerTimeoutTask.ifPresent(TimerTask::cancel);
 
-            // Release the buffer only if it is not larger than the maxBatchSize.
-            int maxBatchSize = partitionWriter.config(tp).maxMessageSize();
+            // Release the buffer only if it is not larger than the max buffer size.
+            int maxBufferSize = appendMaxBufferSizeSupplier.get();
 
-            if (currentBatch.builder.buffer().capacity() <= maxBatchSize) {
+            if (currentBatch.builder.buffer().capacity() <= maxBufferSize) {
                 bufferSupplier.release(currentBatch.builder.buffer());
-            } else if (currentBatch.buffer.capacity() <= maxBatchSize) {
+                cachedBufferSize = currentBatch.builder.buffer().capacity();
+            } else if (currentBatch.buffer.capacity() <= maxBufferSize) {
                 bufferSupplier.release(currentBatch.buffer);
+                cachedBufferSize = currentBatch.buffer.capacity();
+            } else {
+                runtimeMetrics.recordAppendBufferDiscarded();

Review Comment:
   should we set `cachedBufferSize` to zero?
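   For illustration, a minimal self-contained sketch (hypothetical class and method names, not the PR's actual `CoordinatorContext` code) of the bookkeeping this comment suggests, where the tracked size is reset to zero whenever the buffer is discarded rather than released:
   
   ```java
   import java.nio.ByteBuffer;
   
   import org.apache.kafka.common.utils.BufferSupplier;
   
   class BufferRelease {
       private final BufferSupplier bufferSupplier = BufferSupplier.create();
       // Size of the buffer most recently returned to the supplier.
       private long cachedBufferSize = 0;
   
       void releaseOrDiscard(ByteBuffer buffer, int maxBufferSize) {
           if (buffer.capacity() <= maxBufferSize) {
               // The buffer is small enough to cache for reuse by the next append.
               bufferSupplier.release(buffer);
               cachedBufferSize = buffer.capacity();
           } else {
               // The buffer is too large to cache; drop it and reset the tracked
               // size so the gauge stops reporting a buffer that was not retained.
               cachedBufferSize = 0;
           }
       }
   }
   ```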



##########
coordinator-common/src/main/java/org/apache/kafka/coordinator/common/runtime/CoordinatorRuntime.java:
##########
@@ -2127,6 +2143,10 @@ private CoordinatorRuntime(
         this.compression = compression;
         this.appendLingerMs = appendLingerMs;
         this.executorService = executorService;
+        this.appendMaxBufferSizeSupplier = appendMaxBufferSizeSupplier;
+        this.runtimeMetrics.registerAppendBufferSizeGauge(
+            () -> coordinators.values().stream().mapToLong(c -> c.cachedBufferSize).sum()

Review Comment:
   Using `Supplier<Long>` necessitates synchronization, as the `cachedBufferSize` value is accessed by different threads.
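   A hedged sketch of one way the cross-thread visibility could be handled without explicit locking (hypothetical names, assuming the per-coordinator field can simply be made atomic):
   
   ```java
   import java.util.List;
   import java.util.concurrent.atomic.AtomicLong;
   import java.util.function.Supplier;
   
   class AppendBufferGauge {
       // Hypothetical per-coordinator holder; in the PR the field lives elsewhere.
       static class Coordinator {
           // AtomicLong (or a volatile long) lets the metrics thread read an
           // up-to-date value without taking the coordinator's lock.
           final AtomicLong cachedBufferSize = new AtomicLong();
       }
   
       static Supplier<Long> gauge(List<Coordinator> coordinators) {
           // Sum the cached buffer sizes across all coordinators for the gauge.
           return () -> coordinators.stream()
               .mapToLong(c -> c.cachedBufferSize.get())
               .sum();
       }
   }
   ```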


