Any thoughts? This doesn't seem to create duplicates every time, or maybe
it's unrelated, as we are still seeing the message and there are no
duplicates...
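For anyone following the idempotency point in the quoted reply below, here is
a minimal sketch of what replay-safe downstream handling can look like,
assuming a SQL store with a unique key on the event ID. The table, columns,
and connection details are illustrative, not from our actual setup:

import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.PreparedStatement;

// Minimal sketch of an idempotent write: the unique key on event_id makes a
// replayed record a no-op instead of a duplicate row. Table name, columns,
// and connection details are assumptions for illustration only.
public class IdempotentUpsert {
    public static void main(String[] args) throws Exception {
        try (Connection conn = DriverManager.getConnection(
                "jdbc:postgresql://localhost:5432/imports", "app", "secret")) {
            String sql = "INSERT INTO events (event_id, payload) VALUES (?, ?) "
                       + "ON CONFLICT (event_id) DO NOTHING"; // replay-safe
            try (PreparedStatement ps = conn.prepareStatement(sql)) {
                ps.setString(1, "evt-123"); // stable ID carried in the record
                ps.setString(2, "{\"amount\": 42}");
                ps.executeUpdate(); // a second delivery of evt-123 changes nothing
            }
        }
    }
}

A second sketch after the quoted thread shows the source/checkpoint side of
the question.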

On Wed., Oct. 21, 2020, 12:09 p.m. John Smith, <java.dev....@gmail.com>
wrote:

> And yes, my downstream is handling the duplicates in an idempotent way, so
> we are good on that point. But I'm just curious what the behaviour is on
> the source consumer when that error happens.
>
> On Wed, 21 Oct 2020 at 12:04, John Smith <java.dev....@gmail.com> wrote:
>
>> Hi, running Flink 1.10.0, we see these logs once in a while:
>>
>> 2020-10-21 15:48:57,625 INFO org.apache.kafka.clients.FetchSessionHandler -
>> [Consumer clientId=consumer-2, groupId=xxxxxx-import] Error sending fetch
>> request (sessionId=806089934, epoch=INITIAL) to node 0:
>> org.apache.kafka.common.errors.DisconnectException.
>>
>> Obviously it looks like the consumer is getting disconnected, and from
>> what I can tell it's either a Kafka bug in the way it handles the epoch or
>> possibly a version mismatch between the client and the brokers. That's
>> fine; I can look at upgrading the client and/or Kafka. But I'm trying to
>> understand what happens in terms of the source and the sink. It looks like
>> we get duplicates on the sink, and I'm guessing it's because the consumer
>> is failing and at that point Flink stays on that checkpoint until it can
>> reconnect and process that offset, and hence the duplicates downstream?
>>
>
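To make the checkpoint question above concrete, here is a minimal sketch
against the Flink 1.10 API. On recovery Flink rewinds the Kafka source to the
offsets stored in the last completed checkpoint and re-reads from there, so
with a non-transactional sink anything processed after that checkpoint can
show up twice (at-least-once behaviour). The topic, group id, and broker
address below are placeholders:

import java.util.Properties;

import org.apache.flink.api.common.serialization.SimpleStringSchema;
import org.apache.flink.streaming.api.environment.StreamExecutionEnvironment;
import org.apache.flink.streaming.connectors.kafka.FlinkKafkaConsumer;

// Sketch of the replay mechanics in question. On failover, Flink restores
// the Kafka source to the offsets in the last completed checkpoint, so
// records consumed after that checkpoint are re-processed. With a sink that
// neither dedupes nor uses transactions, those re-processed records are the
// duplicates seen downstream.
public class ReplayDemo {
    public static void main(String[] args) throws Exception {
        StreamExecutionEnvironment env =
                StreamExecutionEnvironment.getExecutionEnvironment();
        env.enableCheckpointing(60_000); // offsets snapshotted once per minute

        Properties props = new Properties();
        props.setProperty("bootstrap.servers", "localhost:9092"); // placeholder
        props.setProperty("group.id", "xxxxxx-import");           // placeholder

        env.addSource(new FlinkKafkaConsumer<>(
                    "import-topic", new SimpleStringSchema(), props))
           .print(); // stand-in for the real sink
        env.execute("replay-demo");
    }
}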
