Hi Surendra,
I think this behaviour is documented at
https://nightlies.apache.org/flink/flink-docs-release-1.13/docs/connectors/datastream/kafka/#consumer-offset-committing
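In case a concrete example helps, here is a minimal, untested sketch of how offset committing ties into checkpointing (shown with the legacy FlinkKafkaConsumer; topic, group and broker names are just placeholders, and the newer KafkaSource behaves the same way). The point from that docs section is that offsets are only committed back to Kafka when a checkpoint completes, and the committed offsets are used for monitoring/lag reporting only, not for Flink's fault tolerance, so externally observed consumer lag can trail the actual progress of the job.

import java.util.Properties;

import org.apache.flink.api.common.serialization.SimpleStringSchema;
import org.apache.flink.streaming.api.datastream.DataStream;
import org.apache.flink.streaming.api.environment.StreamExecutionEnvironment;
import org.apache.flink.streaming.connectors.kafka.FlinkKafkaConsumer;

public class OffsetCommitExample {
    public static void main(String[] args) throws Exception {
        StreamExecutionEnvironment env = StreamExecutionEnvironment.getExecutionEnvironment();

        // Offsets are committed back to Kafka only when a checkpoint completes,
        // so checkpointing must be enabled (here every 60s) if you want the
        // committed offsets and external lag metrics to advance.
        env.enableCheckpointing(60_000);

        Properties props = new Properties();
        props.setProperty("bootstrap.servers", "localhost:9092"); // placeholder
        props.setProperty("group.id", "my-group");                // placeholder

        FlinkKafkaConsumer<String> consumer =
                new FlinkKafkaConsumer<>("input-topic", new SimpleStringSchema(), props);

        // This is the default when checkpointing is enabled; the committed
        // offsets are only used for monitoring, not for fault tolerance.
        consumer.setCommitOffsetsOnCheckpoints(true);

        DataStream<String> stream = env.addSource(consumer);
        stream.print();

        env.execute("offset-commit-example");
    }
}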
Best regards,
Martijn
On Tue, Dec 13, 2022 at 5:28 PM Surendra Lalwani via user <user@flink.apache.org> wrote:
Hi Team,
I am on Flink version 1.13.6. I am reading a couple of streams from Kafka and
applying an interval join with an interval of 2 hours. However, when I check
KafkaConsumer_records_lag_max it comes out in the thousands, but when I check
the Flink UI there is no backpressure and also the metrics insi