gaurav-narula commented on code in PR #19607:
URL: https://github.com/apache/kafka/pull/19607#discussion_r2119820618


##########
storage/src/main/java/org/apache/kafka/storage/internals/log/LogSegment.java:
##########
@@ -751,10 +751,7 @@ public Optional<FileRecords.TimestampAndOffset> 
findOffsetByTimestamp(long times
     public void close() throws IOException {
         if (maxTimestampAndOffsetSoFar != TimestampOffset.UNKNOWN)
             Utils.swallow(LOGGER, Level.WARN, "maybeAppend", () -> 
timeIndex().maybeAppend(maxTimestampSoFar(), 
shallowOffsetOfMaxTimestampSoFar(), true));
-        Utils.closeQuietly(lazyOffsetIndex, "offsetIndex", LOGGER);
-        Utils.closeQuietly(lazyTimeIndex, "timeIndex", LOGGER);
-        Utils.closeQuietly(log, "log", LOGGER);
-        Utils.closeQuietly(txnIndex, "txnIndex", LOGGER);
+        Utils.closeAll(lazyOffsetIndex, lazyTimeIndex, log, txnIndex);

Review Comment:
   Thanks for pointing that out! I've modified `LogSegments#close` to ensure we 
close all log segments even if one of them throws, and added a test in 
`LogSegmentsTest`.
   
   In the process, I found that some tests assumed that resources like the indexes 
and `FileRecords` can safely be closed multiple times. With `LogSegment#close` now 
propagating exceptions, we need to exit early if a resource has already been 
closed; otherwise we'd see failures from a closed `FileChannel` or a `null` mmap. 
I've therefore updated `AbstractIndex`, `TransactionIndex`, and `FileRecords` 
accordingly.
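   To illustrate the two pieces of this change, here is a standalone sketch (the class names `GuardedResource` and `CloseAllSketch` are hypothetical, not Kafka code): an early-exit `close()` guard like the ones added to the index/record classes, and a `closeAll`-style helper that, under the assumption described above, closes every resource, collects failures as suppressed exceptions, and rethrows the first one instead of swallowing it.

```java
import java.io.Closeable;
import java.io.IOException;
import java.util.concurrent.atomic.AtomicBoolean;

// Hypothetical resource with an early-exit close(), mirroring the guards
// added to AbstractIndex/TransactionIndex/FileRecords: a second close()
// is a no-op rather than an error.
class GuardedResource implements Closeable {
    private final AtomicBoolean closed = new AtomicBoolean(false);

    public boolean isClosed() {
        return closed.get();
    }

    @Override
    public void close() throws IOException {
        if (!closed.compareAndSet(false, true))
            return; // already closed: skip releasing state a second time
        // ... release the underlying FileChannel / unmap the index here ...
    }
}

// Sketch of closeAll-style semantics (an assumption about how such a helper
// behaves, not Kafka's actual Utils.closeAll): attempt every close, remember
// the first failure, attach later failures as suppressed, then rethrow.
class CloseAllSketch {
    static void closeAll(Closeable... resources) throws IOException {
        IOException first = null;
        for (Closeable c : resources) {
            try {
                c.close();
            } catch (IOException e) {
                if (first == null) first = e;
                else first.addSuppressed(e);
            }
        }
        if (first != null) throw first;
    }
}
```

   Because each resource's `close()` exits early once it has run, passing the same resource twice (or closing a segment that was partially closed before) no longer trips over a closed channel.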



-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: jira-unsubscr...@kafka.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org
