Yes. It's good to enforce that. Could you file a JIRA and attach your patch
there?

Thanks,

Jun
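
[Editor's note: the patch below relies on a range-checked getter on
VerifiableProperties. As a minimal standalone sketch of that idea, the
getIntInRange helper here is a hypothetical stand-in, not Kafka's actual
implementation; it just shows how a value outside (min, max) would fail
fast at broker startup instead of wedging replication later.]

```scala
// Hypothetical stand-in for VerifiableProperties.getIntInRange (sketch only).
// Reads an Int property, falls back to a default, and rejects values
// outside the inclusive [min, max] range with an IllegalArgumentException.
object ConfigRangeCheck {
  def getIntInRange(props: Map[String, String], name: String,
                    default: Int, range: (Int, Int)): Int = {
    val value = props.get(name).map(_.toInt).getOrElse(default)
    val (min, max) = range
    require(value >= min && value <= max,
      s"$name must be in range [$min, $max], but was $value")
    value
  }

  def main(args: Array[String]): Unit = {
    val messageMaxBytes = 1000000
    // OK: replica fetch size is at least as large as the largest message.
    val ok = getIntInRange(
      Map("replica.fetch.max.bytes" -> "2000000"),
      "replica.fetch.max.bytes",
      1024 * 1024,
      (messageMaxBytes, Int.MaxValue))
    println(ok)
    // A configured value below messageMaxBytes would throw
    // IllegalArgumentException here, surfacing the misconfiguration at startup.
  }
}
```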


On Thu, Aug 1, 2013 at 7:39 AM, Sam Meder <sam.me...@jivesoftware.com> wrote:

> Seems like a good idea to enforce this? Maybe something like this:
>
> diff --git a/core/src/main/scala/kafka/server/KafkaConfig.scala
> b/core/src/main/scala/kafka/server/KafkaConfig.scala
> index a64b210..1c3bfdd 100644
> --- a/core/src/main/scala/kafka/server/KafkaConfig.scala
> +++ b/core/src/main/scala/kafka/server/KafkaConfig.scala
> @@ -198,7 +198,7 @@ class KafkaConfig private (val props:
> VerifiableProperties) extends ZKConfig(pro
>    val replicaSocketReceiveBufferBytes =
> props.getInt(ReplicaSocketReceiveBufferBytesProp,
> ConsumerConfig.SocketBufferSize)
>
>    /* the number of byes of messages to attempt to fetch */
> -  val replicaFetchMaxBytes = props.getInt(ReplicaFetchMaxBytesProp,
> ConsumerConfig.FetchSize)
> +  val replicaFetchMaxBytes =
> props.getIntInRange(ReplicaFetchMaxBytesProp, ConsumerConfig.FetchSize,
> (messageMaxBytes, Int.MaxValue))
>
>    /* max wait time for each fetcher request issued by follower replicas*/
>    val replicaFetchWaitMaxMs = props.getInt(ReplicaFetchWaitMaxMsProp, 500)
>
> Not sure if message.max.bytes counts only the payload or the whole message
> plus headers, so replica.fetch.max.bytes may even need to be a bit larger.
>
> /Sam
>
> On Aug 1, 2013, at 7:04 AM, Jun Rao <jun...@gmail.com> wrote:
>
> > server: replica.fetch.max.bytes should be >= message.max.bytes. Otherwise,
> > the follower will get stuck when replicating data from the leader.
> >
> > Thanks,
> >
> > Jun
> >
> >
> > On Wed, Jul 31, 2013 at 10:10 AM, Sam Meder <sam.me...@jivesoftware.com
> >wrote:
> >
> >> I also noticed that there are two properties related to message size on
> >> the server: replica.fetch.max.bytes and message.max.bytes. What happens
> >> when replica.fetch.max.bytes is lower than message.max.bytes? Should
> >> there even be two properties?
> >>
> >> /Sam
> >>
> >> On Jul 31, 2013, at 5:25 PM, Sam Meder <sam.me...@jivesoftware.com>
> wrote:
> >>
> >>> We're expecting to occasionally have to deal with pretty large messages
> >>> being sent to Kafka. We will of course set the fetch size appropriately
> >>> high, but are concerned about the behavior when a message exceeds the
> >>> fetch size. As far as I can tell, when a message that is too large is
> >>> encountered, the current behavior is to pretend it is not there and not
> >>> notify the consumer in any way. IMO it would be better to throw an
> >>> exception than to silently ignore the issue (with the current code one
> >>> can't really distinguish a large message from no data at all).
> >>>
> >>> Thoughts?
> >>>
> >>> /Sam
> >>
> >>
>
>