[ https://issues.apache.org/jira/browse/KAFKA-4473?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15715586#comment-15715586 ]

Thomas Schulz commented on KAFKA-4473:
--------------------------------------

UPDATE:

I added the following config:
streamProperties.put(ProducerConfig.RETRIES_CONFIG, 3);
streamProperties.put(ProducerConfig.RETRY_BACKOFF_MS_CONFIG, 1000);
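
For context, here is a minimal sketch of how these two settings fit into a complete Streams setup (the application id, broker address and topic names below are placeholders, not the actual test setup). StreamsConfig forwards unrecognized producer settings to its internal producer, so the retry settings reach the RecordCollector's sends:

import java.util.Properties;

import org.apache.kafka.clients.producer.ProducerConfig;
import org.apache.kafka.common.serialization.Serdes;
import org.apache.kafka.streams.KafkaStreams;
import org.apache.kafka.streams.StreamsConfig;
import org.apache.kafka.streams.kstream.KStreamBuilder;

public class RetryConfigExample {

    public static void main(final String[] args) {
        final Properties streamProperties = new Properties();
        // application id and broker address are placeholders
        streamProperties.put(StreamsConfig.APPLICATION_ID_CONFIG, "at-least-once-test");
        streamProperties.put(StreamsConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092");
        streamProperties.put(StreamsConfig.KEY_SERDE_CLASS_CONFIG, Serdes.String().getClass().getName());
        streamProperties.put(StreamsConfig.VALUE_SERDE_CLASS_CONFIG, Serdes.String().getClass().getName());

        // retry transient send failures instead of dropping the record
        streamProperties.put(ProducerConfig.RETRIES_CONFIG, 3);
        streamProperties.put(ProducerConfig.RETRY_BACKOFF_MS_CONFIG, 1000);

        final KStreamBuilder builder = new KStreamBuilder();
        // topic names are placeholders
        builder.stream("input-topic").to("output-topic");

        new KafkaStreams(builder, streamProperties).start();
    }
}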

With this configuration, I was not able to reproduce the message loss.

50_000 -> 50_001
100_000 -> 100_000
200_000 -> 200_000

This is probably the solution.

Thanks for the hint. With regard to this specific test setup, consider the bug 
as resolved.

> KafkaStreams does *not* guarantee at-least-once delivery
> --------------------------------------------------------
>
>                 Key: KAFKA-4473
>                 URL: https://issues.apache.org/jira/browse/KAFKA-4473
>             Project: Kafka
>          Issue Type: Bug
>          Components: streams
>    Affects Versions: 0.10.1.0
>            Reporter: Thomas Schulz
>            Priority: Critical
>
> see: https://groups.google.com/forum/#!topic/confluent-platform/DT5bk1oCVk8
> There is probably a bug in the RecordCollector, as described in my detailed 
> cluster test published in the aforementioned post.
> The class RecordCollector has the following behavior:
> - if there is no exception, add the message offset to a map
> - otherwise, do not add the message offset and only log an error
> Is it possible that this offset map contains the latest offset to commit? If 
> so, a message that fails might be overridden by a later successful message, 
> and the consumer would then commit every offset up to the latest one?
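
For readers skimming the thread, the behavior described above boils down to something like the following sketch. This is not the actual RecordCollector source, just an illustration of a send callback that tracks offsets only on success, so a failed record's offset is never retained and a later successful record can move the committed position past it:

import java.util.HashMap;
import java.util.Map;

import org.apache.kafka.clients.producer.Callback;
import org.apache.kafka.clients.producer.KafkaProducer;
import org.apache.kafka.clients.producer.ProducerRecord;
import org.apache.kafka.clients.producer.RecordMetadata;
import org.apache.kafka.common.TopicPartition;

public class OffsetMapSketch {

    // latest "successfully sent" offset per partition, as described above
    private final Map<TopicPartition, Long> offsets = new HashMap<>();

    public void send(final KafkaProducer<byte[], byte[]> producer,
                     final ProducerRecord<byte[], byte[]> record) {
        producer.send(record, new Callback() {
            @Override
            public void onCompletion(final RecordMetadata metadata, final Exception exception) {
                if (exception == null) {
                    // success: remember the offset so it can be committed later
                    offsets.put(new TopicPartition(metadata.topic(), metadata.partition()),
                                metadata.offset());
                } else {
                    // failure: only log; the failed offset is never tracked, so a later
                    // successful send can let the commit move past the lost record
                    System.err.println("Error sending record: " + exception);
                }
            }
        });
    }
}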


