ccding commented on a change in pull request #11345:
URL: https://github.com/apache/kafka/pull/11345#discussion_r764130805
########## File path: core/src/main/scala/kafka/log/UnifiedLog.scala ##########

```diff
@@ -1498,28 +1498,44 @@ class UnifiedLog(@volatile var logStartOffset: Long,
     producerStateManager.takeSnapshot()
     updateHighWatermarkWithLogEndOffset()
     // Schedule an asynchronous flush of the old segment
-    scheduler.schedule("flush-log", () => flush(newSegment.baseOffset))
+    scheduler.schedule("flush-log", () => flushUptoOffsetExclusive(newSegment.baseOffset))
     newSegment
   }

   /**
    * Flush all local log segments
+   *
+   * @param forceFlushActiveSegment should be true during a clean shutdown, and false otherwise. The reason is that
+   * we have to pass logEndOffset + 1 to the `localLog.flush(offset: Long): Unit` function to flush empty
+   * active segments, which is important to make sure we persist the active segment file during shutdown, particularly
+   * when it's empty.
    */
-  def flush(): Unit = flush(logEndOffset)
+  def flush(forceFlushActiveSegment: Boolean): Unit = flush(logEndOffset, forceFlushActiveSegment)
```

Review comment:

If we have 50 MB/s of produce input in a three-broker cluster and segment.size is 100 MB, that is on average 1 extra fsync every 2 seconds. Given that the extra fsync is syncing an empty file, it may not be a big deal.
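The "1 extra fsync every 2 seconds" figure follows from dividing the segment size by the produce rate. A minimal sketch of that arithmetic, using the numbers from the comment (the object and method names here are illustrative, not part of the Kafka code base):

```scala
// Hypothetical helper, not part of UnifiedLog: reproduces the reviewer's
// back-of-envelope estimate of how often the extra fsync would occur.
object ExtraFsyncEstimate {
  // A segment rolls once every (segment size / produce rate) seconds, and
  // each roll would schedule one additional flush of the new, still-empty
  // active segment.
  def secondsBetweenExtraFsyncs(produceMBPerSec: Double, segmentSizeMB: Double): Double =
    segmentSizeMB / produceMBPerSec

  def main(args: Array[String]): Unit = {
    // Numbers from the comment: 50 MB/s produce input, 100 MB segment size.
    println(secondsBetweenExtraFsyncs(produceMBPerSec = 50.0, segmentSizeMB = 100.0))
    // prints 2.0, i.e. one extra fsync every 2 seconds
  }
}
```

Since each of those extra fsyncs targets an empty file, the added I/O cost is negligible compared to the data already being flushed at that rate.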