[
https://issues.apache.org/jira/browse/KAFKA-9351?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17010033#comment-17010033
]
Ryanne Dolan commented on KAFKA-9351:
-------------------------------------
[~nitishgoyal13] retries in the producer wouldn't normally cause that many
dupes. It's more likely that a task was killed before committing cleanly. The
tasks will send a commit record during a clean shutdown, but that's not
guaranteed, obviously. A failure to commit could result in a large number of
dupes like that.
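For scale, a quick back-of-the-envelope check on the partition counts quoted in the issue description below (the numbers are taken from the report; the script itself is only illustrative):

```
# Overcount per partition for topic events_4, source vs. nm5.events_4
# on the destination. Counts copied from the issue description.
source = {0: 51048, 1: 52250, 2: 51526}
destination = {0: 53289, 1: 54569, 2: 53733}

extra = {p: destination[p] - source[p] for p in source}
total_extra = sum(extra.values())
pct = 100 * total_extra / sum(source.values())

print(extra)                                   # {0: 2241, 1: 2319, 2: 2207}
print(f"{total_extra} extra records (~{pct:.1f}%)")  # 6767 extra records (~4.4%)
```

Roughly 2.2k extra records per partition, spread evenly, is consistent with a batch being re-mirrored after an unclean stop rather than with occasional producer retries.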
Exactly-once semantics would take care of this scenario as well.
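To be clear, that support does not exist in the 2.4.0 release this was filed against; later Kafka releases added exactly-once delivery for source connectors via KIP-618. A rough sketch of the worker-side setting (name per KIP-618; whether and how MM2's dedicated mode exposes it depends on the release, so treat this as an assumption to verify):

```
# Connect distributed worker config sketch (KIP-618, Kafka 3.3+).
# Not available in 2.4.0; verify against your release before relying on it.
exactly.once.source.support = enabled
```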
> Higher count in destination cluster using Kafka MM2
> ---------------------------------------------------
>
> Key: KAFKA-9351
> URL: https://issues.apache.org/jira/browse/KAFKA-9351
> Project: Kafka
> Issue Type: Bug
> Components: mirrormaker
> Affects Versions: 2.4.0
> Reporter: Nitish Goyal
> Priority: Minor
>
> I have set up replication between clusters across different data centres. At
> times after setting up replication, I am observing a higher event count in the
> destination cluster.
> Below are the counts in the source and destination clusters:
>
> *Source Cluster*
> ```
>
> events_4:0:51048
> events_4:1:52250
> events_4:2:51526
> ```
>
> *Destination Cluster*
> ```
> nm5.events_4:0:53289
> nm5.events_4:1:54569
> nm5.events_4:2:53733
> ```
>
> This is a blocker for us to start using the MM2 replicator.
--
This message was sent by Atlassian Jira
(v8.3.4#803005)