Guozhang,
Thank you very much for your reply. It is good to know that I have not
overlooked something simple in the interface.
How do I open a JIRA?
The major change that I would like to see is to have a
KafkaProducer.abort() method that closes the producer immediately,
aborting attempts to
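The 0.8.2 KafkaProducer has no such abort() method, so the following is purely a hypothetical sketch of the proposed semantics: the class name, method names, and the plain in-memory queue standing in for the client's internal record accumulator are all illustrative, not part of the real API.

```java
import java.util.concurrent.BlockingQueue;
import java.util.concurrent.LinkedBlockingQueue;

// Hypothetical sketch only: models the proposed abort() semantics with a
// stand-in class and a plain queue instead of the real producer internals.
public class AbortableProducer {
    private final BlockingQueue<String> pending = new LinkedBlockingQueue<>();
    private volatile boolean aborted = false;

    // Enqueue a record for asynchronous sending (the real client hands
    // records to a background I/O thread instead).
    public void send(String record) {
        if (!aborted) {
            pending.offer(record);
        }
    }

    // Proposed behavior: stop accepting new records and discard anything
    // still queued, returning how many records were dropped.
    public int abort() {
        aborted = true;
        int dropped = pending.size();
        pending.clear();
        return dropped;
    }

    public int queued() {
        return pending.size();
    }
}
```

The point of the sketch is the contract, not the implementation: unlike close(), abort() drops queued-but-unsent records immediately rather than waiting for them to flush.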
Hello Andrew,
I think you would want a sync producer for your use case? You can try to
call get() on the returned metadata future of the send() call instead of
using a callback; the pattern is something like:
for (message in messages)
    producer.send(message).get();
The get() call will block
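That pattern can be shown standalone. The real send() returns a Future&lt;RecordMetadata&gt;; in this runnable sketch a plain ExecutorService future stands in for it, so the executor, the sendSync helper, and the "acked:" payload are all illustrative, not Kafka classes.

```java
import java.util.List;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;

// Standalone illustration of the send(...).get() pattern: the executor
// stands in for the producer's background sender thread, and get()
// blocks until each "send" has completed.
public class SyncSendDemo {
    static String sendSync(ExecutorService sender, String message) throws Exception {
        // Blocks the caller until the future completes -- one message in
        // flight at a time, i.e. an effectively synchronous producer.
        return sender.submit(() -> "acked:" + message).get();
    }

    public static void main(String[] args) throws Exception {
        ExecutorService sender = Executors.newSingleThreadExecutor();
        for (String message : List.of("m1", "m2", "m3")) {
            System.out.println(sendSync(sender, message));
        }
        sender.shutdown();
    }
}
```

Because each get() waits for the previous send to finish, batching is defeated, which is exactly the throughput cost discussed below.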
Hi Guozhang,
I specifically do not want to do a get() on every future. This is
equivalent to going synchronous, which is a performance killer for us.
What I am looking for is some way to tell a producer to forget about queued
messages. I have done some more testing on my solution and it needs some
Andrew
If you look at BaseProducer.scala, there's the following code snippet:
override def send(topic: String, key: Array[Byte], value: Array[Byte]) {
  val record = new ProducerRecord(topic, key, value)
  if(sync) {
    this.producer.send(record).get()
  } else {
    // async path: send with a callback that logs errors instead of failing
    this.producer.send(record, new ErrorLoggingCallback(topic, key, value, false))
  }
}
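The async branch of that snippet relies on the new producer's send(record, callback) form, where the callback receives either metadata or an exception once the send settles. A standalone sketch of that shape, with stand-in types (the Callback interface, executor, and "acked:" payload here are illustrative, not the real Kafka classes):

```java
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Future;

// Sketch of the async-with-callback pattern: the caller does not block on
// get(); completion (or failure) is reported to the callback instead.
public class AsyncSendDemo {
    interface Callback {
        void onCompletion(String metadata, Exception error);
    }

    // Submit the "send" to a background thread and invoke the callback
    // when it settles, instead of making the caller wait.
    static Future<?> sendAsync(ExecutorService sender, String record, Callback callback) {
        return sender.submit(() -> {
            try {
                callback.onCompletion("acked:" + record, null);  // simulated success
            } catch (Exception e) {
                callback.onCompletion(null, e);
            }
        });
    }

    public static void main(String[] args) throws Exception {
        ExecutorService sender = Executors.newSingleThreadExecutor();
        sendAsync(sender, "m1", (metadata, error) ->
            System.out.println(error == null ? metadata : "failed: " + error)).get();
        sender.shutdown();
    }
}
```

A callback that merely logs errors, as in the Scala snippet's async path, is one way to keep sends best-effort without blocking the producing thread.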
I am trying to understand the best practices for working with the new
(0.8.2) Producer interface.
We have a process in a large server that writes a lot of data to Kafka.
However, this data is not mission-critical. When a problem arises writing
to Kafka, most notably network issues, but also