Yes, the broker will store the batches it receives from the producer. By
default (compression.type is "producer" at the broker/topic level), it won't
recompress, but it can be configured to recompress with a specific codec.
Either way, it won't merge multiple batches into a single one.
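
For reference, a minimal sketch of setting that override with the Java
AdminClient (the topic name "my-topic" and the lz4 codec are placeholders,
not recommendations):

    import java.util.Collections;
    import java.util.Properties;
    import org.apache.kafka.clients.admin.AdminClient;
    import org.apache.kafka.clients.admin.AdminClientConfig;
    import org.apache.kafka.clients.admin.Config;
    import org.apache.kafka.clients.admin.ConfigEntry;
    import org.apache.kafka.common.config.ConfigResource;

    public class SetTopicCompression {
        public static void main(String[] args) throws Exception {
            Properties props = new Properties();
            props.put(AdminClientConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092");

            try (AdminClient admin = AdminClient.create(props)) {
                // With compression.type set to a codec (rather than "producer"),
                // the broker re-compresses each incoming batch with that codec.
                ConfigResource topic =
                        new ConfigResource(ConfigResource.Type.TOPIC, "my-topic");
                Config config = new Config(Collections.singletonList(
                        new ConfigEntry("compression.type", "lz4")));
                admin.alterConfigs(Collections.singletonMap(topic, config)).all().get();
            }
        }
    }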

Ismael

On Tue, Dec 12, 2017 at 6:45 PM, Dmitry Minkovsky <dminkov...@gmail.com>
wrote:

> I would like to follow up with this more concrete question:
>
> For the purpose of achieving decent compression ratios, I am having
> difficulty finding information on whether the broker can perform additional
> record batching beyond the batching received from a producer. It seems like
> in a low-latency application where the producer record buffer and linger
> are small, it is difficult or impossible to achieve effective compression
> on the broker side, because the producer is sending very small batches.
>
> So, am I correct in my understanding that the broker appends record batches
> exactly as they are received from the producer? Or can it do its own
> batching to improve compression?
>
> Thank you,
> Dmitry
>
> On Sun, Dec 10, 2017 at 7:44 PM, Dmitry Minkovsky <dminkov...@gmail.com>
> wrote:
>
> > This is hopefully my final question for a while.
> >
> > I noticed that compression is disabled by default. Why is this? My best
> > guess is that compression doesn't work well for short messages
> > <https://www.elastic.co/blog/store-compression-in-lucene-and-elasticsearch>,
> > which was maybe identified as the majority use-case for Kafka. But,
> > producers batch records based on buffer/linger, and in my understanding
> > the whole record batch is compressed together.
> >
> > So, what's the deal? Should I turn on compression in production? Does it
> > depend on my anticipated batch size? I am using Kafka Streams with a very
> > low linger, so most of my batches will likely be very small.
> >
> > Thank you!
> >
> >
> >
>
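
For anyone finding this thread later, the producer-side settings discussed
above look roughly like this in the Java client (all values illustrative):

    import java.util.Properties;
    import org.apache.kafka.clients.producer.KafkaProducer;
    import org.apache.kafka.clients.producer.ProducerConfig;
    import org.apache.kafka.common.serialization.StringSerializer;

    public class CompressedProducer {
        public static void main(String[] args) {
            Properties props = new Properties();
            props.put(ProducerConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092");
            props.put(ProducerConfig.KEY_SERIALIZER_CLASS_CONFIG,
                    StringSerializer.class.getName());
            props.put(ProducerConfig.VALUE_SERIALIZER_CLASS_CONFIG,
                    StringSerializer.class.getName());

            // The producer compresses a whole batch at once, so batch size drives
            // the compression ratio. These two settings bound how big a batch gets:
            props.put(ProducerConfig.BATCH_SIZE_CONFIG, 16384); // max bytes per batch
            props.put(ProducerConfig.LINGER_MS_CONFIG, 5);      // wait up to 5 ms

            // Default is "none"; gzip, snappy, and lz4 are the built-in codecs.
            props.put(ProducerConfig.COMPRESSION_TYPE_CONFIG, "lz4");

            KafkaProducer<String, String> producer = new KafkaProducer<>(props);
            producer.close();
        }
    }

If memory serves, Kafka Streams forwards producer settings like these from
its StreamsConfig to the internal producers, so the same keys apply there.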
