You just need to make sure that the fetch size is larger than the
message.max.bytes set on the brokers.
Thanks,
Jun
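With the 0.8-era configuration names, that pairing looks roughly like the
following (the consumer property is from the old high-level consumer; the
values are only illustrative):

```properties
# Broker (server.properties): largest message the broker will accept
message.max.bytes=1000000

# Consumer: the fetch size must be at least as large as the broker limit,
# otherwise messages near the limit can never be fetched and the consumer
# gets stuck on that offset
fetch.message.max.bytes=1048576
```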
On Thu, May 15, 2014 at 3:55 PM, MB JA arma.mar...@gmail.com wrote:
Hi.
I'm using Kafka. Can you help me, please? :)
I have a problem reading the next available message when it is
Neha,
Does the broker store messages compressed, even if the producer doesn't
compress them when sending them to the broker?
Why does the broker re-compress message batches? Does it not have enough
info from the producer request to know the number of messages in the batch?
Jason
On Mon, Oct
Ah,
I think I remember a previous discussion on a way to avoid the double
compression.
So would it be possible for the producer to send metadata with a compressed
batch that includes the logical offset info for the batch? Can this info
just be a count of how many messages are in the batch?
And to be clear, if uncompressed messages come in, they remain uncompressed
in the broker, correct?
Correct
Currently, only the broker has knowledge of the offsets for a partition and
hence is the right place to assign the offsets. Even if the producer sends
metadata, the broker still needs to
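The point above can be illustrated with a toy model (this is a hedged
sketch of the broker's offset-assignment step, not actual Kafka code): the
broker decompresses the batch, stamps sequential log offsets on the inner
messages, and re-compresses before appending to the log.

```python
import gzip
import json

def assign_offsets(compressed_batch, next_offset):
    """Simulated broker step: decompress the batch, stamp sequential
    offsets on each inner message, and re-compress before appending
    to the log. (Toy model only.)"""
    messages = json.loads(gzip.decompress(compressed_batch))
    stamped = [{"offset": next_offset + i, "value": m}
               for i, m in enumerate(messages)]
    recompressed = gzip.compress(json.dumps(stamped).encode())
    return recompressed, next_offset + len(messages)

# Producer side: three messages compressed into one batch, no offsets yet
batch = gzip.compress(json.dumps(["a", "b", "c"]).encode())
new_batch, log_end = assign_offsets(batch, 100)
```

Since only the broker knows the current log end offset, this stamping
cannot happen on the producer, which is why the batch is reopened there.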
At LinkedIn, our message size can be 10s of KB. This is mostly because we
batch a set of messages and send them as a single compressed message.
Thanks,
Jun
On Mon, Oct 7, 2013 at 7:44 AM, S Ahmed sahmed1...@gmail.com wrote:
When people use message queues, the message size is usually pretty
I see. So one thing to consider is that if I have 20 KB messages, I
shouldn't batch too many together, as that will increase latency and the
memory footprint on the producer side of things.
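A quick back-of-the-envelope calculation makes the trade-off concrete (all
numbers below are illustrative, not from the thread):

```python
# Back-of-the-envelope for the batching trade-off (illustrative numbers)
message_size_kb = 20
batch_count = 200        # messages buffered before one send
arrival_rate = 100       # messages/sec produced by the application

buffer_kb = message_size_kb * batch_count       # memory held per in-flight batch
added_latency_s = batch_count / arrival_rate    # worst-case wait to fill a batch

print(f"{buffer_kb} KB buffered, up to {added_latency_s:.1f} s extra latency")
```

Halving the batch count halves both the buffer and the worst-case latency,
which is the knob being discussed above.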
On Mon, Oct 7, 2013 at 11:55 AM, Jun Rao jun...@gmail.com wrote:
At LinkedIn, our message size
The message size limit is imposed on the compressed message. To answer your
question about the effect of large messages: they cause memory pressure on
the Kafka brokers as well as on the consumers, since we re-compress
messages on the broker and decompress messages on the consumer.
I'm not so sure
Should the total message size of the batch be less than
message.max.bytes, or does that limit apply to each individual message?
The former is correct.
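In other words, the size check applies to the compressed batch as a whole.
A small sketch of that check (the 1 MB limit is an assumed, illustrative
value for message.max.bytes; this is not the broker's actual validation
code):

```python
import gzip
import json

MESSAGE_MAX_BYTES = 1_000_000   # assumed broker message.max.bytes

def batch_fits(messages):
    """The limit is checked against the compressed batch as a whole,
    not against each message inside it."""
    compressed = gzip.compress(json.dumps(messages).encode())
    return len(compressed) <= MESSAGE_MAX_BYTES

# ~2 MB of raw payload, but highly repetitive, so it compresses far
# below the 1 MB limit and the batch would be accepted
big_batch = ["x" * 20_000 for _ in range(100)]
print(batch_fits(big_batch))
```

This is also why a batch of messages that are individually small can still
be rejected if, together, they compress to more than the limit.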
When you batch, I am assuming that the producer sends some sort of flag
that this is a batch, and then the broker will split up those messages into
individual