Hi Smriti,

You will have to change some broker configuration properties, such as
message.max.bytes, to a larger value. The default value is 1 MB, I believe.

Please check the configs below:

Broker Configuration
<https://www.cloudera.com/documentation/kafka/latest/topics/kafka_performance.html#concept_gqw_rcz_yq__section_wsx_xcz_yq>

   - message.max.bytes

     Maximum message size the broker will accept. Must be smaller than the
     consumer fetch.message.max.bytes, or the consumer cannot consume the
     message.

     Default value: 1000000 (1 MB)

   - log.segment.bytes

     Size of a Kafka data file. Must be larger than any single message.

     Default value: 1073741824 (1 GiB)

   - replica.fetch.max.bytes

     Maximum message size a broker can replicate. Must be larger than
     message.max.bytes, or a broker can accept messages it cannot replicate,
     potentially resulting in data loss.

     Default value: 1048576 (1 MiB)
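
For example, something like this in server.properties on each broker (the
10 MB figure here is only an illustration, not a recommendation -- pick a
value that fits your largest expected message; as far as I remember these
broker properties also need a broker restart to take effect):

   # server.properties (illustrative values)
   # accept messages up to ~10 MB
   message.max.bytes=10485760
   # must be >= message.max.bytes, otherwise replicas cannot copy large messages
   replica.fetch.max.bytes=10485760

And on the consumer side, the matching setting must be at least as large as
message.max.bytes or the consumer cannot fetch the large messages:

   # consumer configuration (old consumer, Kafka 0.8.x)
   fetch.message.max.bytes=10485760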

Thanks,
Akhilesh

On Wed, Apr 12, 2017 at 12:23 AM, Smriti Jha <smr...@agolo.com> wrote:

> Hello all,
>
> Can somebody shed light on kafka producer's behavior when the total size of
> all messages in the buffer (bounded by queue.buffering.max.ms) exceeds the
> socket buffer size (send.buffer.bytes)?
>
> I'm using Kafka v0.8.2 with the old Producer API and have noticed that our
> systems are dropping a few messages that are closer to 1 MB in size. A few
> messages that are only a few KB in size and are sent around the same time
> as the >1 MB messages also get dropped. The official
> documentation does talk about never dropping a "send" in case the buffer
> has reached queue.buffering.max.messages but I don't think that applies to
> size of the messages.
>
> Thanks!
>
