Hi, running Flink 1.10.0 we see these logs once in a while:

2020-10-21 15:48:57,625 INFO org.apache.kafka.clients.FetchSessionHandler - [Consumer clientId=consumer-2, groupId=xxxxxx-import] Error sending fetch request (sessionId=806089934, epoch=INITIAL) to node 0: org.apache.kafka.common.errors.DisconnectException.

Obviously it looks like the consumer is getting disconnected, and from what I can tell it's either a Kafka bug in the way it handles the fetch session epoch or possibly a version mismatch between the client and the brokers. That's fine, I can look at upgrading the client and/or Kafka. But I'm trying to understand what happens in terms of the source and the sink. It looks like we get duplicates on the sink, and I'm guessing it's because the consumer is failing, so Flink falls back to the last completed checkpoint until it can reconnect, then reprocesses from those offsets, hence the duplicates downstream?
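To make my mental model concrete, this is roughly the setup I have in mind (a minimal sketch using the universal Kafka connector; the topic names, broker address, group id and checkpoint interval below are placeholders, not our actual config). My understanding is that with checkpointing alone the source rewinds and replays on failure, so a plain sink is only at-least-once, and avoiding the duplicates would need a transactional sink such as FlinkKafkaProducer with Semantic.EXACTLY_ONCE:

import java.nio.charset.StandardCharsets;
import java.util.Properties;

import org.apache.flink.api.common.serialization.SimpleStringSchema;
import org.apache.flink.streaming.api.CheckpointingMode;
import org.apache.flink.streaming.api.environment.StreamExecutionEnvironment;
import org.apache.flink.streaming.connectors.kafka.FlinkKafkaConsumer;
import org.apache.flink.streaming.connectors.kafka.FlinkKafkaProducer;
import org.apache.flink.streaming.connectors.kafka.KafkaSerializationSchema;
import org.apache.kafka.clients.producer.ProducerRecord;

public class KafkaExactlyOnceSketch {
    public static void main(String[] args) throws Exception {
        StreamExecutionEnvironment env = StreamExecutionEnvironment.getExecutionEnvironment();

        // Kafka offsets are stored as part of each checkpoint; after a failure the
        // source rewinds to the offsets of the last completed checkpoint and replays
        // everything since, which is where the duplicates would come from.
        env.enableCheckpointing(60_000, CheckpointingMode.EXACTLY_ONCE);

        Properties consumerProps = new Properties();
        consumerProps.setProperty("bootstrap.servers", "broker:9092"); // placeholder
        consumerProps.setProperty("group.id", "xxxxxx-import");

        FlinkKafkaConsumer<String> source =
            new FlinkKafkaConsumer<>("input-topic", new SimpleStringSchema(), consumerProps);

        Properties producerProps = new Properties();
        producerProps.setProperty("bootstrap.servers", "broker:9092"); // placeholder
        // Must be <= transaction.max.timeout.ms on the brokers (15 min by default).
        producerProps.setProperty("transaction.timeout.ms", "900000");

        // EXACTLY_ONCE writes through Kafka transactions that commit when a
        // checkpoint completes, so replayed records are not seen twice by
        // downstream consumers reading with isolation.level=read_committed.
        FlinkKafkaProducer<String> sink = new FlinkKafkaProducer<>(
            "output-topic",
            (KafkaSerializationSchema<String>) (element, timestamp) ->
                new ProducerRecord<>("output-topic", element.getBytes(StandardCharsets.UTF_8)),
            producerProps,
            FlinkKafkaProducer.Semantic.EXACTLY_ONCE);

        env.addSource(source).addSink(sink);
        env.execute("kafka-exactly-once-sketch");
    }
}

Is that the right way to think about it, i.e. with a non-transactional sink the replay after such a disconnect is expected to produce duplicates downstream?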
