[ https://issues.apache.org/jira/browse/KAFKA-6325?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16283444#comment-16283444 ]

Erik Scheuter commented on KAFKA-6325:
--------------------------------------

I didn't modify the KafkaProducer, but changed the code which uses it so that I 
loop through all futures and call future.get() instead of producer.flush().
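
That workaround can be sketched as follows. This is a minimal illustration, not the attached FlushTest.java: plain CompletableFutures stand in for the Future<RecordMetadata> values that producer.send() would return, and the class and method names (FlushWithGet, awaitAll) are made up for the example.

```java
import java.util.ArrayList;
import java.util.List;
import java.util.concurrent.CompletableFuture;
import java.util.concurrent.ExecutionException;
import java.util.concurrent.Future;

public class FlushWithGet {
    // Collect every future returned by producer.send(...) and call get()
    // on each one, so a failed send surfaces as an ExecutionException
    // instead of being silently swallowed the way flush() is reported to do.
    static void awaitAll(List<? extends Future<?>> pending)
            throws ExecutionException, InterruptedException {
        for (Future<?> f : pending) {
            f.get(); // throws ExecutionException if this record failed
        }
    }

    public static void main(String[] args) throws InterruptedException {
        // Stand-ins for the futures send() would return: one success, one failure.
        List<CompletableFuture<String>> sent = new ArrayList<>();
        sent.add(CompletableFuture.completedFuture("ack-0"));
        sent.add(CompletableFuture.failedFuture(new RuntimeException("no brokers")));

        try {
            awaitAll(sent);
            System.out.println("all records acked");
        } catch (ExecutionException e) {
            System.out.println("send failed: " + e.getCause().getMessage());
            // prints "send failed: no brokers"
        }
    }
}
```

The point of the loop is that the caller must keep every future around just to learn about failures, which is the boilerplate the issue would like flush() to make unnecessary.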

Option 2 is the easiest option: change the javadoc of the send() function as 
well, since it isn't completely async (or should that be a separate issue?).

> Producer.flush() doesn't throw exception on timeout
> ---------------------------------------------------
>
>                 Key: KAFKA-6325
>                 URL: https://issues.apache.org/jira/browse/KAFKA-6325
>             Project: Kafka
>          Issue Type: Bug
>          Components: producer 
>            Reporter: Erik Scheuter
>         Attachments: FlushTest.java
>
>
> Reading the javadoc of the flush() method, we assumed an exception would be 
> thrown when an error occurs. That would make the code more understandable, 
> as we wouldn't have to keep a list of futures when sending multiple records 
> to Kafka and eventually call future.get() on each.
> When send() is called, the metadata is retrieved and send() blocks on this 
> step. When this step fails (no brokers), a FutureFailure is returned. 
> When you just flush(), no exceptions are thrown (in contrast to 
> future.get()). Of course, you can implement callbacks in the send method.
> I think there are two solutions:
> * Change flush() (& doSend()) and throw exceptions
> * Change the javadoc and describe the scenario in which you can lose events 
> because no exceptions are thrown and the events are not sent.
> I added a unit test to show the behaviour. Kafka doesn't have to be available 
> for this.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)
