Hi Dinh,
The check only de-duplicates records in case the consumer processes the
same offset multiple times; it ensures the emitted offset is always
increasing. If this has been fixed in Kafka, as the comment assumes, the
condition will never be true.
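To make the intent concrete, here is a minimal sketch of that kind of
guard (my own simplified Java, not the connector's actual code; the
variable names and log text are made up):

    // Simplified illustration: only emit records whose offset is strictly
    // greater than the highest offset emitted so far.
    class OffsetGuard {
        private long consumedOffset = -1L; // highest offset emitted so far

        void process(long offset, String record) {
            long expected = consumedOffset + 1;
            if (offset < expected) {
                // same or older offset delivered again (e.g. after a re-fetch
                // of a compressed batch) -> drop it so offsets stay increasing
                System.out.printf(
                        "ignoring offset %d, already consumed up to %d%n",
                        offset, consumedOffset);
                return;
            }
            System.out.printf("emitting '%s' at offset %d%n", record, offset);
            consumedOffset = offset;
        }

        public static void main(String[] args) {
            OffsetGuard guard = new OffsetGuard();
            // offset 1 is delivered twice; the second delivery is ignored
            for (long offset : new long[] {0, 1, 1, 2}) {
                guard.process(offset, "payload-" + offset);
            }
        }
    }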
Which Kafka version are you using?
-Max
On 29.07.2
Hi,
I am curious about this comment:
if (offset < expected) { // -- (a)
    // this can happen when compression is enabled in Kafka (seems to be fixed in 0.10)
    // should we check if the offset is way off from consumedOffset (say > 1M)?
    LOG.warn("{}: ignor