Hi Saurabh,

This occurs when the async producer's internal message queue is already full
at the time a produce request is made. The size of the queue is configured
via queue.buffering.max.messages. You can experiment with increasing this
(which will require more JVM heap space), or with adjusting
queue.enqueue.timeout.ms to control whether the producer blocks or drops
messages when the queue is full. Both of these configuration options are
explained here:

https://kafka.apache.org/08/configuration.html
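For reference, the relevant entries in a 0.8 async producer configuration
might look something like this (the values below are illustrative starting
points, not recommendations for your workload):

```
# Maximum number of messages the async producer will buffer before the
# queue is considered full; larger values need more JVM heap.
queue.buffering.max.messages=100000

# How long (ms) to block when the queue is full before dropping messages.
# -1 blocks indefinitely; 0 enqueues immediately or drops if full.
queue.enqueue.timeout.ms=5000
```

Setting queue.enqueue.timeout.ms to -1 trades the "queue is full" error for
back-pressure on the producing threads, which may or may not be acceptable
depending on your application.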

I didn't quite follow your last paragraph, so I'm not sure if the following
advice is applicable to you or not. You may also experiment with adding
more producers (either on the same or different machines).

I hope this helps.

Alex

On Thu, Feb 18, 2016 at 2:12 AM, Saurabh Kumar <saurabh...@gmail.com> wrote:

> Hi,
>
> We have a Kafka server deployment shared between multiple teams, and I have
> created a topic with multiple partitions on it for pushing some JSON data.
>
> We have multiple such Kafka producers running from different machines which
> produce/push data to a Kafka topic. A lot of times I see the following
> exception in the logs: "Event queue is full of unsent messages, could not
> send event"
>
> Any idea how to solve this? We cannot synchronise the volume or timing of
> different Kafka producers across machines and between multiple processes.
> There is a limit on the maximum number of concurrent processes (Kafka
> producers) that can run on a machine, but it is only going to increase over
> time as we scale. What is the right way to solve this problem?
>
> Thanks,
> Saurabh
>



-- 
Alex Loddengaard | Solutions Architect | Confluent
Download Apache Kafka and Confluent Platform: www.confluent.io/download
