[ https://issues.apache.org/jira/browse/KAFKA-10101?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17126893#comment-17126893 ]

Ismael Juma commented on KAFKA-10101:
-------------------------------------

Discussed with Jun a bit offline. This is unlikely in practice, since the hard 
failure would have to happen after we write the recovery point (which happens 
in a scheduled thread) and before we roll/flush the truncated segment. Still, 
it's worth fixing. It's also easier to reason about the code if we only update 
the recovery point after we roll segments. While looking at the code, we 
noticed a few more edge cases and will try to fix them at the same time.
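To make the ordering concrete, here is a toy model (hypothetical Java, not Kafka's actual Log implementation) of why the recovery point must only advance after the recovered segments are flushed. If the recovery point is written first and a second hard failure follows, data below the recovery point can be lost or corrupt on disk, yet recovery will skip it on the next restart:

```java
import java.util.ArrayList;
import java.util.List;

// Toy model (not Kafka's real code) of the KAFKA-10101 ordering bug.
class MiniLog {
    final List<String> unflushed = new ArrayList<>(); // in page cache only
    final List<String> durable = new ArrayList<>();   // survived an fsync
    long recoveryPoint = 0; // offsets below this are trusted as durable

    long logEndOffset() { return durable.size() + unflushed.size(); }

    void append(String record) { unflushed.add(record); }

    // Simulated fsync: make everything durable.
    void flush() {
        durable.addAll(unflushed);
        unflushed.clear();
    }

    // Simulated hard failure: anything not flushed is lost.
    void crash() { unflushed.clear(); }

    // After a restart, only data at or above the recovery point is
    // re-validated. If data below it was lost in the crash, the broker
    // trusts a log that is actually corrupt/truncated.
    boolean recoveryPointIsSafe() { return recoveryPoint <= durable.size(); }
}

public class Kafka10101Demo {
    public static void main(String[] args) {
        // Buggy ordering: advance the recovery point without flushing.
        MiniLog buggy = new MiniLog();
        buggy.append("a"); buggy.append("b");
        buggy.recoveryPoint = buggy.logEndOffset(); // advanced too early
        buggy.crash();
        System.out.println("buggy safe? " + buggy.recoveryPointIsSafe());

        // Fixed ordering: flush first, then advance the recovery point.
        MiniLog fixed = new MiniLog();
        fixed.append("a"); fixed.append("b");
        fixed.flush();
        fixed.recoveryPoint = fixed.logEndOffset();
        fixed.crash();
        System.out.println("fixed safe? " + fixed.recoveryPointIsSafe());
    }
}
```

Running this prints `buggy safe? false` and `fixed safe? true`: with the buggy ordering the recovery point ends up beyond the durable data after the crash, which is exactly the window the ticket describes.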

> recovery point is advanced without flushing the data after recovery
> -------------------------------------------------------------------
>
>                 Key: KAFKA-10101
>                 URL: https://issues.apache.org/jira/browse/KAFKA-10101
>             Project: Kafka
>          Issue Type: Bug
>          Components: core
>    Affects Versions: 2.5.0
>            Reporter: Jun Rao
>            Assignee: Ismael Juma
>            Priority: Major
>
> Currently, in Log.recoverLog(), we set recoveryPoint to logEndOffset after 
> recovering the log segments. However, we don't flush the log segments after 
> recovery. The potential issue is that if the broker suffers another hard 
> failure, segments may be corrupted on disk but won't go through recovery on 
> the next restart.
> This logic was introduced in KAFKA-5829 and has been present since 1.0.0.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)
