Achintya,

1.0.0.M2 is not an official Apache Kafka release, so that version number is
not particularly meaningful to people on this list. What platform/distribution
are you using, and how does it map to an actual Apache Kafka release?

In general, no messaging system can guarantee exactly-once semantics on its
own, because those semantics require the source and destination systems to
coordinate -- the source provides some sort of retry on failure, and the
destination needs to do some sort of deduplication (or equivalent) so the data
is only "delivered" one time.
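
To make the destination-side half of that concrete, here is a rough sketch of
a deduplicating Spring Kafka MessageListener. The key-based check and the
in-memory set are only illustrative -- in practice the "have I seen this
before" check should live in the destination system itself (e.g. a unique
constraint or an upsert) so it survives restarts:

import java.util.Collections;
import java.util.Set;
import java.util.concurrent.ConcurrentHashMap;

import org.apache.kafka.clients.consumer.ConsumerRecord;
import org.springframework.kafka.listener.MessageListener;

public class DedupingListener implements MessageListener<String, String> {

    // Illustrative only: a real deployment would track processed keys in the
    // destination system, not in consumer memory.
    private final Set<String> processedKeys =
            Collections.newSetFromMap(new ConcurrentHashMap<String, Boolean>());

    @Override
    public void onMessage(ConsumerRecord<String, String> record) {
        // add() returns false if the key was already seen -> drop the duplicate.
        if (!processedKeys.add(record.key())) {
            return;
        }
        // ... hand the record to the downstream system ...
    }
}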

That said, duplicates should usually only be generated in the face of
failures. If you're seeing a lot of duplicates, that probably means
shutdown/failover is not being handled correctly. If you can provide more
info about your setup, we might be able to suggest tweaks that will avoid
these situations.
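
For reference, here is roughly what commit-after-processing plus a clean
shutdown looks like with the plain Java consumer -- broker, group, and topic
names below are placeholders. The Spring listener container with
enable.auto.commit=false and setSyncCommits(true) does essentially the same
thing under the hood, so the main thing to verify is that your containers are
stopped cleanly rather than the process being killed mid-batch:

import java.util.Collections;
import java.util.Properties;

import org.apache.kafka.clients.consumer.ConsumerRecord;
import org.apache.kafka.clients.consumer.ConsumerRecords;
import org.apache.kafka.clients.consumer.KafkaConsumer;
import org.apache.kafka.common.errors.WakeupException;

public class GracefulConsumer {

    public static void main(String[] args) {
        Properties props = new Properties();
        props.put("bootstrap.servers", "localhost:9092"); // placeholder
        props.put("group.id", "my-group");                // placeholder
        props.put("enable.auto.commit", "false");
        props.put("key.deserializer",
                "org.apache.kafka.common.serialization.StringDeserializer");
        props.put("value.deserializer",
                "org.apache.kafka.common.serialization.StringDeserializer");

        final KafkaConsumer<String, String> consumer = new KafkaConsumer<>(props);
        consumer.subscribe(Collections.singletonList("my-topic")); // placeholder

        // On shutdown, break out of poll() so the final commit and close() run.
        final Thread mainThread = Thread.currentThread();
        Runtime.getRuntime().addShutdownHook(new Thread() {
            @Override
            public void run() {
                consumer.wakeup();
                try {
                    mainThread.join();
                } catch (InterruptedException ignored) {
                }
            }
        });

        try {
            while (true) {
                ConsumerRecords<String, String> records = consumer.poll(100);
                for (ConsumerRecord<String, String> record : records) {
                    process(record);
                }
                // Commit only after the whole batch is processed, so a crash
                // or rebalance re-delivers at most this batch.
                consumer.commitSync();
            }
        } catch (WakeupException e) {
            // Expected on shutdown.
        } finally {
            consumer.commitSync();
            consumer.close();
        }
    }

    private static void process(ConsumerRecord<String, String> record) {
        // ... deliver to the destination system ...
    }
}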

-Ewen

On Fri, Aug 5, 2016 at 8:15 AM, Ghosh, Achintya (Contractor) <
achintya_gh...@comcast.com> wrote:

> Hi there,
>
> We are using Kafka 1.0.0.M2 with Spring and we see a lot of duplicate
> messages being received by the listener's onMessage() method.
> We configured:
>
> enable.auto.commit=false
> session.timeout.ms=15000
> factory.getContainerProperties().setSyncCommits(true);
> factory.setConcurrency(5);
>
> So what could be the reason for the duplicate messages?
>
> Thanks
> Achintya
>



-- 
Thanks,
Ewen
