[ https://issues.apache.org/jira/browse/KAFKA-9301?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16999171#comment-16999171 ]

Tomoyuki Saito commented on KAFKA-9301:
---------------------------------------

Duplicate of KAFKA-9312.
I'll close this issue.

> KafkaProducer#flush should block until all the sent records get completed
> -------------------------------------------------------------------------
>
>                 Key: KAFKA-9301
>                 URL: https://issues.apache.org/jira/browse/KAFKA-9301
>             Project: Kafka
>          Issue Type: Bug
>          Components: producer 
>    Affects Versions: 0.11.0.0
>            Reporter: Tomoyuki Saito
>            Priority: Major
>
> h2. ProducerBatch split marks ProducerBatch.produceFuture as completed
> KAFKA-3995 introduced splitting and resending a ProducerBatch when a 
> RecordTooLargeException occurs on the broker side. When a ProducerBatch is 
> split, ProducerBatch.produceFuture is completed, even though the records 
> in the batch will be resent to the broker.
> h2. KafkaProducer#flush implementation
> With the current implementation, KafkaProducer#flush blocks only until the 
> accumulated ProducerBatches are completed. As described above, this does not 
> guarantee that all the sent records have completed.
> This issue is also mentioned in: https://github.com/apache/kafka/pull/6469
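The gap described in the quoted issue can be illustrated with a minimal, hypothetical simulation. This is not Kafka code: it uses plain CompletableFuture to stand in for ProducerBatch.produceFuture and the per-record futures returned by send(), under the assumption that a flush() which awaits only batch futures can return while record futures are still pending.

```java
import java.util.ArrayList;
import java.util.List;
import java.util.concurrent.CompletableFuture;

// Hypothetical sketch, not Kafka internals: shows why waiting on a
// batch-level future is not the same as waiting on every record future.
public class FlushSketch {
    public static void main(String[] args) {
        // Future for the original (oversized) batch.
        CompletableFuture<Void> batchFuture = new CompletableFuture<>();

        // Per-record futures, as returned by send().
        List<CompletableFuture<Void>> recordFutures = new ArrayList<>();
        recordFutures.add(new CompletableFuture<>());
        recordFutures.add(new CompletableFuture<>());

        // The broker rejects the batch with RecordTooLargeException; the
        // batch is split and its future is completed (per KAFKA-3995),
        // even though the records have merely been re-enqueued.
        batchFuture.complete(null);

        // A flush() that waits only on accumulated batch futures returns
        // here immediately...
        batchFuture.join();

        // ...while the records themselves are still in flight.
        boolean allRecordsDone =
                recordFutures.stream().allMatch(CompletableFuture::isDone);
        System.out.println("all records completed after flush: " + allRecordsDone);
    }
}
```

Running this prints `all records completed after flush: false`, which mirrors the report: flush() can unblock even though some sent records have not yet completed.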



--
This message was sent by Atlassian Jira
(v8.3.4#803005)
