Hi,

We recently ran into a scenario where we issue a FetchRequest with a
fixed fetchSize (64 KB), shown below, using the SimpleConsumer. When the
broker holds an unusually large message, it returns an empty message set
*without any error code*. According to the documentation
<https://cwiki.apache.org/confluence/display/KAFKA/0.8.0+SimpleConsumer+Example>,
this is the expected behavior (*"Note also that we ask for a fetchSize of
100000 bytes. If the Kafka producers are writing large batches, this might
not be enough, and might return an empty message set. In this case, the
fetchSize should be increased until a non-empty set is returned."*).

However, I would argue against not returning an error code here: without
any indication, the consumer keeps retrying without knowing whether it has
reached the end of the log or whether there is an actual problem with the
message size. This also makes monitoring difficult. Was there a particular
reason why an error code is not returned in this situation?


import kafka.api.FetchRequest;
import kafka.api.FetchRequestBuilder;

// fetchSize is fixed at 64 KB
FetchRequest req = new FetchRequestBuilder()
        .clientId(clientName)
        .addFetch(a_topic, a_partition, readOffset, 1024 * 64)
        .build();
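
For context, here is a minimal sketch of what the documented workaround
forces the consumer to do, since the empty set carries no error code. It
assumes a consumer (kafka.javaapi.consumer.SimpleConsumer), clientName,
a_topic, a_partition, and readOffset set up as in the wiki example; the
doubling of fetchSize is just an illustration, not our actual code:

import kafka.api.FetchRequest;
import kafka.api.FetchRequestBuilder;
import kafka.javaapi.FetchResponse;
import kafka.javaapi.message.ByteBufferMessageSet;

// ...

int fetchSize = 1024 * 64;
ByteBufferMessageSet messages;
while (true) {
    FetchRequest req = new FetchRequestBuilder()
            .clientId(clientName)
            .addFetch(a_topic, a_partition, readOffset, fetchSize)
            .build();
    FetchResponse resp = consumer.fetch(req);
    if (resp.hasError()) {
        // A real error is surfaced explicitly.
        throw new RuntimeException("Fetch failed with error code "
                + resp.errorCode(a_topic, a_partition));
    }
    messages = resp.messageSet(a_topic, a_partition);
    if (messages.iterator().hasNext()) {
        break;  // got at least one message, proceed normally
    }
    // Empty set, no error code: this could mean we are at the end of
    // the log, or that the next message is larger than fetchSize.
    // Without an error code we cannot tell, so we just grow and retry.
    fetchSize *= 2;
}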


Thanks!
Z.
