Re: Kafka Quotas & Consumer Group Client ID (Flink 1.15)

2023-05-27 Thread Mason Chen
Hi Hatem, Before a PR, you would need to create a JIRA to track this issue and have a committer assign that JIRA to you. Make sure to go through https://flink.apache.org/how-to-contribute/overview/ as it will make contributions smoother. Best, Mason On Thu, May 25, 2023 at 10:30 AM Hatem Mostafa

Re: Kafka Quotas & Consumer Group Client ID (Flink 1.15)

2023-05-25 Thread Hatem Mostafa
Hello Mason, I created that PR as a suggestion for how to address the issue and enable us to set the client id. Happy to make any modifications to get this merged in the future. On Thu, May 25, 2023 at 12:55 AM Mason Chen wrote: > Hi H

Re: Kafka Quotas & Consumer Group Client ID (Flink 1.15)

2023-05-24 Thread Mason Chen
Hi Hatem, The reason for setting different client ids is due to Kafka client metrics conflicts, and the issue is documented here: https://nightlies.apache.org/flink/flink-docs-stable/docs/connectors/datastream/kafka/#kafka-consumer-metrics. I think that the warning log is benign if you are using
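[Editor's sketch, not part of the original thread: the metric-conflict issue Mason links to is why Flink's KafkaSourceBuilder exposes a client id *prefix* rather than a raw `client.id`. A minimal example, assuming the broker address, topic, and group names are placeholders:]

```java
import org.apache.flink.api.common.serialization.SimpleStringSchema;
import org.apache.flink.connector.kafka.source.KafkaSource;

public class ClientIdPrefixExample {
    public static KafkaSource<String> build() {
        // Giving each pipeline its own client id prefix keeps the Kafka
        // client ids (and therefore the client metric names) distinct,
        // avoiding the conflicts described in the linked docs.
        return KafkaSource.<String>builder()
                .setBootstrapServers("broker-1:9092")   // placeholder
                .setTopics("events")                    // placeholder
                .setGroupId("my-consumer-group")        // placeholder
                .setClientIdPrefix("my-pipeline")       // placeholder prefix
                .setValueOnlyDeserializer(new SimpleStringSchema())
                .build();
    }
}
```

Flink appends its own suffix per subtask, so the resulting client ids are unique without the user managing them individually.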

Re: Kafka Quotas & Consumer Group Client ID (Flink 1.15)

2023-05-24 Thread Hatem Mostafa
Hello Martijn, Yes, checkpointing is enabled and the offsets are committed without a problem. I think I might have figured out the answer to my second question based on my understanding of this code

Re: Kafka Quotas & Consumer Group Client ID (Flink 1.15)

2023-05-24 Thread Martijn Visser
Hi Hatem, Could it be that you don't have checkpointing enabled? Flink only commits its offset when a checkpoint has been completed successfully, as explained on https://nightlies.apache.org/flink/flink-docs-master/docs/connectors/datastream/kafka/#consumer-offset-committing Best regards, Martij
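[Editor's sketch, not part of the original thread: the behavior Martijn describes means a job with checkpointing disabled never commits its offsets back to Kafka. A minimal setup that enables the commit path, assuming the interval is illustrative:]

```java
import org.apache.flink.streaming.api.environment.StreamExecutionEnvironment;

public class CheckpointingExample {
    public static void main(String[] args) throws Exception {
        StreamExecutionEnvironment env =
                StreamExecutionEnvironment.getExecutionEnvironment();
        // The KafkaSource commits consumer offsets to Kafka only when a
        // checkpoint completes successfully (controlled by the source
        // property commit.offsets.on.checkpoint, which defaults to true).
        env.enableCheckpointing(60_000L); // checkpoint every 60 s (placeholder)
        // ... build the KafkaSource pipeline here, then env.execute(...)
    }
}
```

Note that the committed offsets are informational for monitoring and group lag: Flink restores from its own checkpointed offsets, not from the Kafka-committed ones.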

Kafka Quotas & Consumer Group Client ID (Flink 1.15)

2023-05-23 Thread Hatem Mostafa
Hello, I have two questions that are related to each other: *First question:* I have been trying to set `client.id` in order to apply a Kafka client quota for consumer_byte_rate, since whenever our Kafka job gets redeployed it reads a lot of data f
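[Editor's sketch, not part of the original thread: Kafka's consumer_byte_rate quota is keyed by `client.id`, which is why the thread is about making that id configurable in the Flink source. One way to apply such a quota is Kafka's Admin API (kafka-configs.sh works equally well); broker address, client id, and rate below are placeholders:]

```java
import java.util.Collections;
import java.util.Map;
import java.util.Properties;
import org.apache.kafka.clients.admin.Admin;
import org.apache.kafka.clients.admin.AdminClientConfig;
import org.apache.kafka.common.quota.ClientQuotaAlteration;
import org.apache.kafka.common.quota.ClientQuotaEntity;

public class ConsumerQuotaExample {
    public static void main(String[] args) throws Exception {
        Properties props = new Properties();
        props.put(AdminClientConfig.BOOTSTRAP_SERVERS_CONFIG, "broker-1:9092"); // placeholder
        try (Admin admin = Admin.create(props)) {
            // The quota entity targets a specific client.id, so the Flink
            // consumer must use a predictable client id for it to apply.
            ClientQuotaEntity entity = new ClientQuotaEntity(
                    Map.of(ClientQuotaEntity.CLIENT_ID, "my-flink-consumer")); // placeholder
            ClientQuotaAlteration.Op op = new ClientQuotaAlteration.Op(
                    "consumer_byte_rate", 10_485_760.0); // 10 MiB/s (placeholder)
            admin.alterClientQuotas(Collections.singletonList(
                    new ClientQuotaAlteration(entity, Collections.singletonList(op))))
                 .all().get();
        }
    }
}
```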