kamalcph commented on PR #15634:
URL: https://github.com/apache/kafka/pull/15634#issuecomment-2029474481

   > just curious. Does it happen only if remote storage is enabled? According to the description:
   > 
   > > The follower sends the first FETCH request to the leader; the leader checks whether the follower is in sync (isFollowerInSync), then expands the ISR. It also parks the request in the DelayedFetchPurgatory. If the replica is elected as leader before the fetch-response gets processed, then the new leader will have the wrong high-watermark.
   > 
   > It looks like the issue exists even when we don't use remote storage.
   
   For a normal topic, once the replica becomes the leader, it is able to [resolve/convert](https://sourcegraph.com/github.com/apache/kafka@40e87ae35beb389d6419d32130174d7c68fa4d19/-/blob/core/src/main/scala/kafka/log/UnifiedLog.scala#L319) the high-watermark offset (the log-start-offset) to offset metadata by reading the segment from disk, and it then updates the high-watermark to either the current leader's log-end-offset or the lowest LEO of all the eligible ISR replicas. For a remote topic, the replica fails to [resolve](https://sourcegraph.com/github.com/apache/kafka@40e87ae35beb389d6419d32130174d7c68fa4d19/-/blob/core/src/main/scala/kafka/log/UnifiedLog.scala#L319) the high-watermark offset (the log-start-offset) to metadata since the segment is not on local disk, and it then fails continuously.
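   
   To make the difference concrete, here is a minimal sketch of the conversion the links above point to, using hypothetical `SketchLog`/`Segment` types rather than Kafka's actual classes: resolving a bare high-watermark offset into offset metadata succeeds when the owning segment is on local disk, but keeps throwing once that segment has been tiered to remote storage and deleted locally.
   
   ```scala
   // Hypothetical stand-in for Kafka's offset metadata (offset + segment position).
   final case class OffsetMetadata(messageOffset: Long,
                                   segmentBaseOffset: Long,
                                   relativePositionInSegment: Int)
   
   // Hypothetical local segment; positionOf pretends to look up the physical
   // byte position of an offset inside the segment file.
   final case class Segment(baseOffset: Long, endOffset: Long) {
     def positionOf(offset: Long): Int = ((offset - baseOffset) * 100).toInt
   }
   
   final class SketchLog(localSegments: Seq[Segment], localLogStartOffset: Long) {
     // Sketch of the idea behind the linked conversion in UnifiedLog.scala:
     // resolve a bare offset (e.g. the high watermark) to full offset metadata
     // by reading the owning segment from local disk.
     def convertToOffsetMetadataOrThrow(offset: Long): OffsetMetadata = {
       if (offset < localLogStartOffset)
         // The segment was tiered to remote storage and removed locally,
         // so there is nothing on disk to read: the conversion fails.
         throw new IllegalStateException(
           s"offset $offset is below local-log-start-offset $localLogStartOffset; " +
             "the segment exists only in remote storage")
       val segment = localSegments
         .find(s => offset >= s.baseOffset && offset <= s.endOffset)
         .getOrElse(throw new IllegalStateException(s"no local segment covers offset $offset"))
       OffsetMetadata(offset, segment.baseOffset, segment.positionOf(offset))
     }
   }
   
   object HighWatermarkSketch extends App {
     // Normal topic: every segment is on local disk, so the new leader can
     // resolve the high watermark and then advance it (e.g. to the lowest
     // LEO among the eligible ISR replicas).
     val normalLog = new SketchLog(Seq(Segment(0L, 99L), Segment(100L, 199L)), localLogStartOffset = 0L)
     println(normalLog.convertToOffsetMetadataOrThrow(50L))
   
     // Remote topic: segments below offset 100 were tiered and deleted locally,
     // so resolving a high watermark of 50 throws on every attempt, which is
     // why the failure repeats continuously.
     val remoteLog = new SketchLog(Seq(Segment(100L, 199L)), localLogStartOffset = 100L)
     try remoteLog.convertToOffsetMetadataOrThrow(50L)
     catch { case e: IllegalStateException => println(s"conversion failed: ${e.getMessage}") }
   }
   ```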

