We have a use case where we occasionally need to produce messages to
Kafka with a maximum size of 2 MB (the message size varies depending
on user operations).

Will producing 2 MB messages have any impact, or do we need to split
each message into small chunks (say, 100 KB) and produce those instead?
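For context, this is roughly the producer setup we'd expect to need for
2 MB messages; a minimal sketch, where the broker address and topic name
are placeholders. The broker-side limit (message.max.bytes, or
max.message.bytes on the topic) would also have to be raised above its
~1 MB default for this to work:

import java.util.Properties;
import org.apache.kafka.clients.producer.KafkaProducer;
import org.apache.kafka.clients.producer.ProducerConfig;
import org.apache.kafka.clients.producer.ProducerRecord;
import org.apache.kafka.common.serialization.ByteArraySerializer;
import org.apache.kafka.common.serialization.StringSerializer;

public class LargeMessageProducer {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put(ProducerConfig.BOOTSTRAP_SERVERS_CONFIG, "broker1:9092"); // placeholder
        props.put(ProducerConfig.KEY_SERIALIZER_CLASS_CONFIG, StringSerializer.class.getName());
        props.put(ProducerConfig.VALUE_SERIALIZER_CLASS_CONFIG, ByteArraySerializer.class.getName());
        // Raise the per-request cap above the ~1 MB default so a 2 MB
        // message (plus record overhead) is accepted by the producer.
        props.put(ProducerConfig.MAX_REQUEST_SIZE_CONFIG, 2 * 1024 * 1024 + 1024);

        try (KafkaProducer<String, byte[]> producer = new KafkaProducer<>(props)) {
            byte[] payload = new byte[2 * 1024 * 1024]; // 2 MB test payload
            producer.send(new ProducerRecord<>("user-events", "some-user-id", payload)); // topic is a placeholder
            producer.flush();
        }
    }
}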

If we produce in small chunks, it will increase response time for the
user. Also, we have tested producing a 2 MB message to Kafka and we
didn't see much latency there.
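If we did go the chunking route, the splitting itself would look
something like the sketch below: a hypothetical chunkAndSend helper
(not part of the Kafka API), reusing a producer configured as above.
Sending every chunk with the same key keeps all chunks in one
partition, in order, so a consumer can reassemble them:

import java.util.Arrays;
import org.apache.kafka.clients.producer.KafkaProducer;
import org.apache.kafka.clients.producer.ProducerRecord;

class Chunker {
    // Hypothetical helper: split a payload into pieces of at most
    // chunkSize bytes and send them all under the same key, so they
    // land in the same partition and preserve order.
    static void chunkAndSend(KafkaProducer<String, byte[]> producer,
                             String topic, String key,
                             byte[] payload, int chunkSize) {
        for (int offset = 0; offset < payload.length; offset += chunkSize) {
            int end = Math.min(offset + chunkSize, payload.length);
            producer.send(new ProducerRecord<>(topic, key,
                    Arrays.copyOfRange(payload, offset, end)));
        }
        producer.flush(); // wait for acks for the whole logical message
    }
}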

In any case, splitting the data before producing doesn't change the
total disk usage. But will broker performance degrade if we produce
the 2 MB messages directly?

Our broker configuration is:

RAM: 125.6 GB
Disk size: 2.9 TB
Processors: 40

Thanks,
Hemnath K B.
