You just need to set batch.size to be less than 2MB. max.request.size in
the producer has to be less than socket.request.max.bytes in the broker.
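For example, roughly (illustrative values only; the 2MB stands for the
broker-side limit discussed elsewhere in this thread, and the defaults are
from memory, so double-check them):

  # producer
  batch.size=1048576            # 1MB, below the 2MB broker limit
  max.request.size=10485760     # 10MB, below socket.request.max.bytes

  # broker
  message.max.bytes=2097152           # 2MB
  socket.request.max.bytes=104857600  # 100MB (the default)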
Thanks,
Jun
On Wed, Sep 10, 2014 at 2:22 PM, Bhavesh Mistry wrote:
Hi Jun,
Thanks for highlighting this. Please correct my understanding if it's wrong:
if the max request size <= message.max.bytes, then the batch size will be
decided optimally (the batch size will be determined by either the
max.request.size limit or batch.size, whichever is less).
Here is configuration parameter I was re
Actually, with the new producer, you can configure the batch size in bytes.
If you set the batch size to be smaller than the max message size, a message
exceeding the max message limit will be in its own batch. Then, only that one
message will be rejected by the broker.
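A minimal sketch with the new Java producer (config and class names as in the
new client API; the values are illustrative only):

  import java.util.Properties;
  import org.apache.kafka.clients.producer.KafkaProducer;
  import org.apache.kafka.clients.producer.ProducerRecord;

  Properties props = new Properties();
  props.put("bootstrap.servers", "broker1:9092");
  // batch.size is in bytes; a record larger than this gets a batch of its
  // own, so only that record is rejected if it exceeds message.max.bytes.
  props.put("batch.size", "524288"); // 512KB
  props.put("key.serializer",
      "org.apache.kafka.common.serialization.StringSerializer");
  props.put("value.serializer",
      "org.apache.kafka.common.serialization.StringSerializer");

  KafkaProducer<String, String> producer = new KafkaProducer<>(props);
  producer.send(new ProducerRecord<>("my-topic", "hello"));
  producer.close();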
Thanks,
Jun
On Tue, Sep 9, 2014, Bhavesh Mistry wrote:
Hi Jun,
Is there any pluggability so that a developer can customize the batching
logic or inject custom code for this? Shall I file a JIRA for this issue?
Thanks,
Bhavesh
On Tue, Sep 9, 2014 at 3:52 PM, Jun Rao wrote:
No, the new producer doesn't address that problem.
Thanks,
Jun
On Tue, Sep 9, 2014 at 12:59 PM, Bhavesh Mistry wrote:
Hi Jun,
Thanks for the clarification. A follow-up question: does the new producer
solve the issues highlighted? In the event of compression and async mode in
the new producer, will it break messages down to this UPPER limit and submit
them, or does the new producer strictly honor the batch size? I am just
asking if compression batc
Are you using compression in the producer? If so, message.max.bytes applies
to the compressed size of a batch of messages. Otherwise, message.max.bytes
applies to the size of each individual message.
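For example, roughly (ignoring per-message overhead): with
message.max.bytes=1000000 and gzip compression enabled in the producer, a
batch of ten 300KB messages that compresses down to 900KB passes the check
even though the uncompressed total is 3MB; without compression, each 300KB
message is checked individually and passes on its own.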
Thanks,
Jun
On Wed, Sep 3, 2014 at 3:25 PM, Bhavesh Mistry wrote:
Are you referring to socket.request.max.bytes? It looks like it could indeed
limit the size of a batch accepted by a broker.
So, you're right: batch.num.messages * message.max.bytes must be smaller
than socket.request.max.bytes.
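For example, with the 0.8 defaults as I remember them (please double-check):
batch.num.messages=200 and message.max.bytes=1000000 give a worst case of
200 * 1000000 = 200000000 bytes per batch, about twice the default
socket.request.max.bytes of 104857600 (100MB).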
It looks like this case has been addressed in the new producer.
See
I am referring to the wiki http://kafka.apache.org/08/configuration.html, and
the following parameter controls the max batch message bytes as far as I
know. Kafka Community, please correct me if I am wrong; I do not want to
create confusion for the Kafka user community here. Also, if you increase
this limit than
Hi Bhavesh,
Can you explain what limit you're referring to?
I'm asking because `message.max.bytes` is applied per message, not per batch.
Is there another limit I should be aware of?
Thanks
On Wed, Sep 3, 2014 at 2:07 PM, Bhavesh Mistry wrote:
Hi Jun,
We have a similar problem. We have variable-length messages, so when we have
a fixed batch size, the batch sometimes exceeds the limit set on the brokers
(2MB).
So can the producer have some extra logic to determine the optimal batch size
by looking at the configured message.max.bytes value (something like the
sketch below)?
D
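For example, something along these lines (purely hypothetical; every name
here is made up, and messageMaxBytes would have to be learned from the broker
configuration):

  // Hypothetical sketch: cap the per-batch message count so a batch cannot
  // exceed the broker's message.max.bytes limit.
  static int safeBatchCount(int messageMaxBytes, int maxObservedMessageBytes,
                            int configuredBatchCount) {
      int fit = Math.max(1, messageMaxBytes / maxObservedMessageBytes);
      return Math.min(configuredBatchCount, fit);
  }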
Thanks Jun.
I'll create a jira and try to provide a patch. I think this is pretty
serious.
On Friday, August 29, 2014, Jun Rao wrote:
The goal of batching is mostly to reduce the # RPC calls to the broker. If
compression is enabled, a larger batch typically implies better compression
ratio.
The reason that we have to fail the whole batch is that the error code in
the produce response is per partition, instead of per message.
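For reference, the 0.8 produce response looks roughly like this (paraphrasing
the wire-protocol guide):

  ProduceResponse => [TopicName [Partition ErrorCode Offset]]

i.e., there is one ErrorCode per partition, not per message, so the broker
has no way to report a single bad message within a batch.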
Re
Could you explain the goals of batches? I was assuming this was simply a
performance optimization, but this behavior makes me think I'm missing
something.
Is a batch more than a list of *independent* messages?
Why would you reject the whole batch? One invalid message causes the loss
of batch.num.messages messages.
That's right. If one message in a batch exceeds the size limit, the whole
batch is rejected.
When determining message.max.bytes, the most important thing to consider is
probably memory, since currently we need to allocate memory for a full
message in the broker and in the producer and consumer clients.
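As a rough example of the memory cost: an old consumer fetching from 100
partitions with fetch.message.max.bytes=2097152 (2MB, which needs to be at
least message.max.bytes) may need on the order of 100 * 2MB = 200MB of fetch
buffers.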
Am I misreading this loop:
https://github.com/apache/kafka/blob/0.8.1/core/src/main/scala/kafka/log/Log.scala#L265-L269
It seems like all messages from `validMessages` (which is a
ByteBufferMessageSet) are NOT appended if one of the message sizes exceeds
the limit.
I hope I'm missing something.
Hi Jun,
Thanks for your answer.
Unfortunately the size won't help much; I'd like to see the actual message
data.
By the way what are the things to consider when deciding on
`message.max.bytes` value?
On Wed, Aug 27, 2014 at 9:06 PM, Jun Rao wrote:
The message size check is currently only done on the broker. If you enable
trace level logging in RequestChannel, you will see the produce request,
which includes the size of each partition.
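If it helps, the broker's stock log4j.properties already defines a logger
for this, so something like the following should turn it on (the appender
name is the one from the shipped config):

  log4j.logger.kafka.network.RequestChannel$=TRACE, requestAppender
  log4j.additivity.kafka.network.RequestChannel$=false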
Thanks,
Jun
On Wed, Aug 27, 2014 at 4:40 PM, Alexis Midon <alexis.mi...@airbedandbreakfast.com> wrote:
Hello,
my brokers are reporting that some received messages exceed the
`message.max.bytes` value.
I'd like to know which producers are at fault, but it is pretty much
impossible:
- the brokers don't log the content of the rejected messages
- the log messages do not contain the IP of the producers
-