divijvaidya commented on code in PR #15472:
URL: https://github.com/apache/kafka/pull/15472#discussion_r1513268220


##########
core/src/main/java/kafka/log/remote/RemoteLogManager.java:
##########
@@ -707,6 +708,8 @@ public void copyLogSegmentsToRemote(UnifiedLog log) throws InterruptedException
                 this.cancel();
             } catch (InterruptedException | RetriableException ex) {
                 throw ex;
+            } catch (CorruptIndexException ex) {
+                logger.error("Error occurred while copying log segments. Index appeared to be corrupted for partition: {}  ", topicIdPartition, ex);

Review Comment:
   I am assuming that the way this error will be monitored is by creating an 
alarm on `RemoteCopyLagSegments` [1]. Is that right?
   
   Can you also please explain why we shouldn't increment the `failedRemoteCopyRequestRate` metric that is being incremented in the catch block below?
   
   
   
   [1] https://kafka.apache.org/documentation.html#tiered_storage_monitoring
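   To illustrate the pattern I have in mind: every failure path, including the new corrupt-index catch, would mark the same failure meter, so that a single alarm on the failure rate also fires for index corruption. A minimal standalone sketch, not Kafka's actual `RemoteLogManager` or metrics API (`failedCopyRequests` and `copySegment` are illustrative names):

```java
import java.util.concurrent.atomic.AtomicLong;

// Standalone sketch, NOT Kafka's RemoteLogManager: shows every failure
// path bumping one shared failure counter, so an alarm on that counter
// also covers the corrupt-index case instead of it being only logged.
public class CopyMetricsSketch {
    // Stand-in for a failure-rate meter like failedRemoteCopyRequestRate.
    static final AtomicLong failedCopyRequests = new AtomicLong();

    /** Pretend to copy one segment; throwCorrupt simulates a corrupt index. */
    static void copySegment(boolean throwCorrupt) {
        try {
            if (throwCorrupt) {
                throw new IllegalStateException("corrupt time index");
            }
            // happy path: segment copied successfully
        } catch (IllegalStateException ex) {
            // Count this failure too, in addition to logging it.
            failedCopyRequests.incrementAndGet();
        }
    }
}
```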
    



##########
storage/src/main/java/org/apache/kafka/storage/internals/log/TimeIndex.java:
##########
@@ -75,13 +75,14 @@ public void sanityCheck() {
         TimestampOffset entry = lastEntry();
         long lastTimestamp = entry.timestamp;
         long lastOffset = entry.offset;
-        if (entries() != 0 && lastTimestamp < timestamp(mmap(), 0))
-            throw new CorruptIndexException("Corrupt time index found, time index file (" + file().getAbsolutePath() + ") has "
-                + "non-zero size but the last timestamp is " + lastTimestamp + " which is less than the first timestamp "
-                + timestamp(mmap(), 0));
+
         if (entries() != 0 && lastOffset < baseOffset())
             throw new CorruptIndexException("Corrupt time index found, time index file (" + file().getAbsolutePath() + ") has "
                 + "non-zero size but the last offset is " + lastOffset + " which is less than the first offset " + baseOffset());
+        if (entries() != 0 && lastTimestamp < timestamp(mmap(), 0))

Review Comment:
   I am assuming that the reason for moving this check down is to run the less expensive validation first?
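   To spell out what I mean by "less expensive": the offset comparison only touches two longs that are already in memory, while the timestamp comparison has to read the first entry back via `timestamp(mmap(), 0)`. A toy sketch, not the real `TimeIndex` (the class, method names, and the `LongSupplier` stand-in for the mmap read are all illustrative):

```java
import java.util.function.LongSupplier;

// Toy sketch, NOT Kafka's TimeIndex: the offset check is a plain long
// comparison, while the timestamp check simulates the extra cost of
// reading the first entry back from the memory-mapped file. Ordering the
// cheap check first means a corrupt offset never pays for the mmap read.
public class CheapCheckFirst {
    static int mmapReads = 0; // counts simulated mmap accesses

    static void sanityCheck(int entries, long lastOffset, long baseOffset,
                            long lastTimestamp, LongSupplier firstTimestamp) {
        // Cheap: in-memory field comparison, no file access.
        if (entries != 0 && lastOffset < baseOffset) {
            throw new IllegalStateException("last offset " + lastOffset
                + " is less than the first offset " + baseOffset);
        }
        // More expensive: getAsLong() stands in for timestamp(mmap(), 0).
        if (entries != 0 && lastTimestamp < firstTimestamp.getAsLong()) {
            throw new IllegalStateException("last timestamp " + lastTimestamp
                + " is less than the first timestamp");
        }
    }

    static long readFirstTimestampFromMmap() {
        mmapReads++; // simulated mmap read
        return 100L;
    }
}
```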



-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: jira-unsubscr...@kafka.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org
