[
https://issues.apache.org/jira/browse/KAFKA-7297?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16722347#comment-16722347
]
Dong Lin commented on KAFKA-7297:
---------------------------------
[~ijuma] Atomic update is another good alternative to the read/write lock
discussed above. Note that atomic update still requires a lock to avoid race
conditions between concurrent mutations. If mutations are rare compared to
reads, then both solutions should have low performance overhead. Atomic
update has a bit of extra memory overhead (due to copying the segments),
whereas the read/write lock solution has a bit of extra locking overhead
(because a read must wait for a concurrent mutation to finish). Could you
give a bit more detail on why atomic update is better in this case?
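A minimal sketch of the atomic-update approach described above: readers take a lock-free snapshot through a single atomic reference, while mutations copy the list under a lock so concurrent writers do not lose each other's updates. The class and method names (SegmentList, logSegments, replaceSegments) are illustrative only, not Kafka's actual API.

```java
import java.util.ArrayList;
import java.util.Collections;
import java.util.List;
import java.util.concurrent.atomic.AtomicReference;

// Hypothetical copy-on-write segment list. Segments are represented as
// plain strings to keep the sketch self-contained.
public class SegmentList {
    private final AtomicReference<List<String>> segments =
            new AtomicReference<>(Collections.emptyList());
    private final Object writeLock = new Object();

    // Lock-free read: always a consistent snapshot, never a
    // half-applied replaceSegments.
    public List<String> logSegments() {
        return segments.get();
    }

    // Mutations still need a lock: without it, two writers could copy
    // the same base list and one update would be silently lost.
    public void replaceSegments(List<String> removed, String added) {
        synchronized (writeLock) {
            List<String> copy = new ArrayList<>(segments.get());
            copy.removeAll(removed);
            copy.add(added);
            // The extra memory overhead mentioned above is this copy.
            segments.set(Collections.unmodifiableList(copy));
        }
    }
}
```

Because the swap of the reference is a single atomic store, a reader either sees the list from before the replacement or the list from after it, which is exactly the consistency the two-step add-then-remove sequence lacks.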
> Both read/write access to Log.segments should be protected by lock
> ------------------------------------------------------------------
>
> Key: KAFKA-7297
> URL: https://issues.apache.org/jira/browse/KAFKA-7297
> Project: Kafka
> Issue Type: Improvement
> Reporter: Dong Lin
> Assignee: Zhanxiang (Patrick) Huang
> Priority: Major
>
> Log.replaceSegments() updates segments in two steps: it first adds the new
> segments and then removes the old segments. Though this operation is
> protected by a lock, read accesses such as Log.logSegments do not grab the
> lock, so these methods may return an inconsistent view of the segments.
> As an example, say Log.replaceSegments() intends to replace segments [0,
> 100) and [100, 200) with a new segment [0, 200). If Log.logSegments is
> called right after the new segment is added, the method may return segments
> [0, 200) and [100, 200), and messages in the range [100, 200) may be
> duplicated if the caller chooses to enumerate all messages in all segments
> returned by the method.
> The solution is probably to protect read/write access to Log.segments with
> a read/write lock.
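The read/write-lock fix proposed in the description could look roughly like the sketch below: the two-step replacement runs entirely under the write lock, so a reader holding the read lock can never observe the intermediate state where both the new segment and an old one are present. The names mirror the issue text but are illustrative, not Kafka's actual code.

```java
import java.util.ArrayList;
import java.util.List;
import java.util.concurrent.ConcurrentSkipListMap;
import java.util.concurrent.locks.ReentrantReadWriteLock;

// Hypothetical segment map keyed by base offset, as in the [0, 100),
// [100, 200) -> [0, 200) example above.
public class LockedSegments {
    private final ConcurrentSkipListMap<Long, String> segments =
            new ConcurrentSkipListMap<>();
    private final ReentrantReadWriteLock lock = new ReentrantReadWriteLock();

    // Readers take the read lock, so they wait out an in-flight
    // replacement instead of seeing half of it.
    public List<String> logSegments() {
        lock.readLock().lock();
        try {
            return new ArrayList<>(segments.values());
        } finally {
            lock.readLock().unlock();
        }
    }

    // The add-then-remove sequence is atomic with respect to readers
    // because it holds the write lock throughout.
    public void replaceSegments(long newBase, String newSeg, List<Long> oldBases) {
        lock.writeLock().lock();
        try {
            segments.put(newBase, newSeg);  // step 1: add the new segment
            for (long b : oldBases) {
                if (b != newBase) {         // step 2: drop the replaced
                    segments.remove(b);     // segments, keeping the new one
                }
            }
        } finally {
            lock.writeLock().unlock();
        }
    }
}
```

This is the alternative whose cost is lock contention on reads rather than the list copy of the atomic-update approach.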
--
This message was sent by Atlassian JIRA
(v7.6.3#76005)