What precise use case do you have in mind? If you don't have cluster
metadata, you can't send the requests anyway. And if you bound your
memory and then run out of it, that means you are unable to send data for
some reason.

In both cases, the best you can do is drop old messages from the producer
buffers in favor of new ones. There is an ongoing discussion around
KIP-91 to set an absolute bound on the amount of time the application is
willing to wait for messages to be acknowledged. By setting this low
enough, you can always favor fresh messages over older ones. And when the
brokers are unavailable or simply overloaded, that's the best you can do
IMO.
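To make this concrete, here is a minimal sketch of the relevant producer settings. `max.block.ms` and `request.timeout.ms` are existing producer configs; `delivery.timeout.ms` is only the name proposed in KIP-91 and is an assumption here, since the KIP is still under discussion:

```java
import java.util.Properties;

public class FreshMessagesConfig {
    public static Properties producerProps() {
        Properties props = new Properties();
        props.put("bootstrap.servers", "localhost:9092");
        // Bound how long send() may block waiting for metadata or buffer space.
        props.put("max.block.ms", "1000");
        // Bound how long an in-flight request may wait for a broker response.
        props.put("request.timeout.ms", "5000");
        // Proposed by KIP-91 (assumed name, not yet released): an absolute
        // upper bound on the time from send() to acknowledgement. Keeping it
        // low means stale records expire instead of crowding out fresh ones.
        props.put("delivery.timeout.ms", "10000");
        return props;
    }

    public static void main(String[] args) {
        System.out.println(producerProps().getProperty("max.block.ms"));
    }
}
```

With these bounds in place, a record that cannot be delivered in time fails its callback rather than occupying buffer space indefinitely.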

On Fri, Aug 11, 2017 at 10:45 AM, Pavel Moukhataev <m_pas...@mail.ru.invalid
> wrote:

> Hi
>
> Sometimes Kafka is used in near-real-time Java applications that have low-
> latency requirements. In that case it is very important to minimize latency.
> In the Kafka producer API there are two things that are done synchronously
> and could be optimized:
>  - cluster metadata fetch
>  - wait for free memory in buffer
>
> I suppose this API could be rewritten fairly easily to satisfy real-time
> needs. The cluster metadata fetch could be done asynchronously. And to
> prevent blocking while waiting for buffer memory, the
> block.on.buffer.full=false parameter and BufferExhaustedException could be
> reinstated.
>
> So my question is: is this change on any roadmap, and has anybody already
> requested it? If I create a PR with this implemented, would it be
> accepted?
>
> --
> Best regards, Pavel
> +7-903-258-5544
> skype://pavel.moukhataev
>
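For what it's worth, the closest approximation to a non-blocking send with the current client is to set `max.block.ms` to 0, which makes `send()` fail fast with a `TimeoutException` instead of blocking on either of the two points mentioned above (metadata fetch and buffer space); this is the behaviour that replaced the older `block.on.buffer.full=false` switch. A sketch, with the producer calling pattern shown in comments since it needs a running broker:

```java
import java.util.Properties;

public class NonBlockingSendSketch {
    public static Properties props() {
        Properties p = new Properties();
        p.put("bootstrap.servers", "localhost:9092");
        // 0 means: fail fast with TimeoutException instead of blocking in
        // send() when metadata is missing or the buffer (buffer.memory) is full.
        p.put("max.block.ms", "0");
        return p;
    }

    // With a real KafkaProducer the calling pattern would be:
    //   try {
    //       producer.send(record, callback);   // never blocks with max.block.ms=0
    //   } catch (TimeoutException e) {
    //       // drop or spill the record instead of stalling the hot path
    //   }
    public static void main(String[] args) {
        System.out.println(props().getProperty("max.block.ms"));
    }
}
```

The trade-off is that the application must now decide what to do with records it could not hand off, which is exactly the dropping policy discussed above.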
