We have many production clusters with three topics carrying messages in the
1–3 MB range and the rest in the multi-KB to sub-KB range. We do use gzip
compression, applied at the broker rather than the producer level. The
clusters don’t usually break a sweat. We use MirrorMaker to aggregate these
topics to a la
We have a use case where we occasionally need to produce messages to Kafka
with a max size of 2 MB (that is, the message size varies based on user
operations).
Will producing 2 MB messages have any impact, or do we need to split each
message into small chunks, such as 100 KB, and produce those?
If we produce into small c
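For context, Kafka's defaults historically cap record size at about 1 MB (the broker's `message.max.bytes`, the producer's `max.request.size`, and the consumer's `max.partition.fetch.bytes`), so producing 2 MB messages directly means raising those limits. If you split instead, the chunking itself is simple; a minimal sketch, where `chunk_payload` is an illustrative helper and not a Kafka API:

```python
# Hypothetical helper: split a large payload into fixed-size chunks so each
# chunk can be produced as a separate Kafka record under the default limits.
def chunk_payload(payload: bytes, chunk_size: int = 100 * 1024) -> list[bytes]:
    """Split payload into chunks of at most chunk_size bytes."""
    return [payload[i:i + chunk_size] for i in range(0, len(payload), chunk_size)]

# A 2 MB payload split into 100 KB chunks yields 21 records
# (20 full chunks plus one 48 KB remainder).
chunks = chunk_payload(b"x" * (2 * 1024 * 1024))
print(len(chunks))  # 21
```

Note that splitting pushes complexity to the consumer, which must reassemble the chunks (e.g. keyed to the same partition so they arrive in order), so raising the size limits is often the simpler option for an occasional 2 MB message.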