Assuming that there are no duplicates in your input topic, and as long as no failure occurs, the consumer will read every message exactly once by default.
Only in case of failure, when the consumer "falls back" to an older offset, might you see duplicates. You will need to write custom code to handle this case; i.e., the consumer needs to remember which messages it has processed and which it has not. How you achieve this depends on your application logic and cannot be answered in general. The problem is not even specific to Kafka, and there are many resources on the Internet about it, so I would suggest doing some extensive research; you will find many different solutions and can pick the one that best fits your use case.

Hope this helps.

-Matthias

On 5/22/18 11:30 PM, Karthick Kumar wrote:
> Hi,
>
> Currently, I'm using kafka_2.11-0.10.2.0. In this working in almost once
> semantic in Kafka consumer, But I need to change that in exactly once
> semantics.
>
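To make the "remember which messages were processed" idea concrete, here is a minimal sketch in Java. The class name and method are hypothetical, and it is not a Kafka API: it just shows idempotent processing keyed by a unique record identifier (e.g. a topic-partition-offset string). A real application would have to persist the seen-set atomically together with the processing side effects, otherwise a crash between the two can still produce duplicates.

```java
import java.util.HashSet;
import java.util.Set;

// Hypothetical sketch: skip records whose key has already been processed.
// In practice "processed" would live in a durable store, updated in the
// same transaction as the side effects of processing.
public class DedupProcessor {
    private final Set<String> processed = new HashSet<>();

    /** Runs sideEffect once per key; returns false for duplicates. */
    public boolean processOnce(String recordKey, Runnable sideEffect) {
        if (processed.contains(recordKey)) {
            return false; // already handled; skipping keeps processing effectively-once
        }
        sideEffect.run();
        processed.add(recordKey);
        return true;
    }
}
```

Feeding the same key twice (as would happen after an offset rewind) runs the side effect only the first time; the second call is recognized as a duplicate and skipped.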