[ https://issues.apache.org/jira/browse/KAFKA-6120?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16240697#comment-16240697 ]

ASF GitHub Bot commented on KAFKA-6120:
---------------------------------------

Github user asfgit closed the pull request at:

    https://github.com/apache/kafka/pull/4148


> RecordCollectorImpl should not retry sending
> --------------------------------------------
>
>                 Key: KAFKA-6120
>                 URL: https://issues.apache.org/jira/browse/KAFKA-6120
>             Project: Kafka
>          Issue Type: Bug
>          Components: streams
>    Affects Versions: 1.0.0
>            Reporter: Matthias J. Sax
>            Assignee: Matthias J. Sax
>              Labels: streams-exception-handling, streams-resilience
>             Fix For: 1.1.0
>
>
> Currently, RecordCollectorImpl implements an internal retry loop for sending 
> data with a hard-coded retry maximum. This raises the problem that data 
> might be sent out-of-order while, at the same time, it does not improve the 
> overall resilience much, as the number of retries is hard-coded.
> Thus, we should remove this loop and rely only on the producer configuration 
> parameter {{retries}} that users can configure accordingly.
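
Once the internal loop is removed, a minimal sketch of how retries would be controlled purely through the producer configuration a user passes to Kafka Streams (the application id and bootstrap servers below are placeholders):

    import java.util.Properties;
    import org.apache.kafka.clients.producer.ProducerConfig;
    import org.apache.kafka.streams.StreamsConfig;

    Properties props = new Properties();
    props.put(StreamsConfig.APPLICATION_ID_CONFIG, "retries-example");
    props.put(StreamsConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092");
    // Retry behavior comes from the producer's own "retries" setting,
    // forwarded via the producer prefix, rather than a loop inside
    // RecordCollectorImpl.
    props.put(StreamsConfig.producerPrefix(ProducerConfig.RETRIES_CONFIG), 10);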


