Flink Task/Operator metrics renaming
Hi team,

We are using the PrometheusReporter to expose Flink metrics to Prometheus. Is there a way to rename Task/Operator metrics such as numRecordsIn, numRecordsOut, etc. before exposing them to Prometheus?

Regards,
Ashutosh
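As far as I know, Flink's PrometheusReporter does not itself support renaming metrics, but Prometheus can rewrite metric names at scrape time with metric_relabel_configs. A hedged sketch of prometheus.yml (the scrape job name and the Flink metric name pattern here are assumptions, to be adapted to the names the reporter actually emits):

```yaml
# Illustrative prometheus.yml fragment -- job name and regex are assumptions.
scrape_configs:
  - job_name: flink   # hypothetical scrape job for the Flink pods
    metric_relabel_configs:
      # Rename e.g. flink_taskmanager_job_task_numRecordsIn -> my_records_in
      # by rewriting the special __name__ label after the scrape.
      - source_labels: [__name__]
        regex: flink_taskmanager_job_task_numRecordsIn
        target_label: __name__
        replacement: my_records_in
```

Relabeling happens per scrape on the Prometheus side, so the Flink configuration stays untouched.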
Re: Flink PrometheusReporter support for HTTPS
Hi Austin,

I am deploying Flink on K8s with multiple JobManager pods (for HA) and TaskManager pods. Each JobManager and TaskManager runs a PrometheusReporter instance, and we use Prometheus' service-discovery support for Kubernetes to discover all pods (JobManager and TaskManager) and expose the containers as targets.

Please let me know if a reverse proxy can work with this deployment, as we have multiple JMs and TMs and cannot use static scrape targets.

Regards,
Ashutosh

On Sun, Jun 13, 2021 at 2:25 AM Austin Cawley-Edwards <austin.caw...@gmail.com> wrote:
> Hi Ashutosh,
>
> How are you deploying your Flink apps? Would running a reverse proxy like
> Nginx or Envoy that handles the HTTPS connection work for you?
>
> Best,
> Austin
>
> On Sat, Jun 12, 2021 at 1:11 PM Ashutosh Uttam wrote:
>> Hi All,
>>
>> Does the PrometheusReporter provide support for HTTPS? I couldn't find
>> any information in the Flink documentation.
>>
>> Is there any way we can achieve the same?
>>
>> Thanks & Regards,
>> Ashutosh
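One way to combine a reverse proxy with pod-based service discovery is to run it as a sidecar container in each JobManager/TaskManager pod: the sidecar shares the pod IP, so Prometheus' Kubernetes discovery still finds it without static targets. A rough sketch, assuming the reporter listens on localhost:9249 and the TLS cert is mounted from a Secret (all ports and paths here are hypothetical):

```nginx
# Hypothetical nginx sidecar terminating HTTPS in front of the
# PrometheusReporter, which is assumed to listen on 127.0.0.1:9249.
server {
    listen 9443 ssl;
    ssl_certificate     /etc/tls/tls.crt;   # mounted from a K8s Secret
    ssl_certificate_key /etc/tls/tls.key;

    location /metrics {
        proxy_pass http://127.0.0.1:9249/;
    }
}
```

On the Prometheus side, the scrape job would then use `scheme: https` (plus a matching `tls_config`) together with `kubernetes_sd_configs`, and relabeling would point the target port at the sidecar's 9443 instead of the reporter port.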
Flink PrometheusReporter support for HTTPS
Hi All,

Does the PrometheusReporter provide support for HTTPS? I couldn't find any information in the Flink documentation.

Is there any way we can achieve the same?

Thanks & Regards,
Ashutosh
Re: Query related to Minimum scrape interval for Prometheus and fetching metrics of all vertices in a job through Flink Rest API
Thanks Matthias. We are using Prometheus for fetching metrics. Is there any recommended scrape interval? Also, is there any impact if lower scrape intervals are used?

Regards,
Ashutosh

On Fri, May 28, 2021 at 7:17 PM Matthias Pohl wrote:
> Hi Ashutosh,
> you can set the metrics update interval through
> metrics.fetcher.update-interval [1]. Unfortunately, there is no single
> endpoint to collect all the metrics in a more efficient way other than
> the metrics endpoints provided in [2].
>
> I hope that helps.
> Best,
> Matthias
>
> [1] https://ci.apache.org/projects/flink/flink-docs-release-1.13/docs/deployment/config/#metrics-fetcher-update-interval
> [2] https://ci.apache.org/projects/flink/flink-docs-master/docs/ops/rest_api/
>
> On Wed, May 26, 2021 at 2:01 PM Ashutosh Uttam wrote:
>> Hi team,
>>
>> I have two queries as mentioned below:
>>
>> *Query1:*
>> I am using the PrometheusReporter to expose metrics to the Prometheus
>> server. What should be the minimum recommended scrape interval to be
>> defined on the Prometheus server? Is there any interval in which Flink
>> reports metrics?
>>
>> *Query2:*
>> Is there any way I can fetch the metrics of all vertices (including
>> subtasks) of a job through a single monitoring REST API call to Flink?
>>
>> As of now, I first find the vertices and then query each vertex for its
>> metrics, as below:
>>
>> *Step 1:* Find the job ID (http://<host>:<port>/jobs)
>> *Step 2:* Find the vertex IDs (http://<host>:<port>/jobs/<jobId>)
>> *Step 3:* Find the aggregated metrics (including parallelism) of a
>> vertex (http://<host>:<port>/jobs/<jobId>/vertices/<vertexId>/subtasks/metrics?get=<metric1>,<metric2>)
>>
>> So I have to invoke multiple REST APIs, one per vertex ID. Is there any
>> optimised way to get metrics of all vertices?
>>
>> Thanks & Regards,
>> Ashutosh
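For the scrape interval: the PrometheusReporter serves the current metric values whenever Prometheus scrapes, so the interval is chosen on the Prometheus side; lower intervals mainly mean more scrape load and more stored samples. A sketch of a per-job override (all values here are illustrative choices, not Flink recommendations):

```yaml
# Illustrative prometheus.yml fragment -- intervals are example values.
global:
  scrape_interval: 15s        # a common default; not Flink-specific
scrape_configs:
  - job_name: flink           # hypothetical job name
    scrape_interval: 10s      # per-job override for finer resolution
    kubernetes_sd_configs:
      - role: pod
```

Note that `metrics.fetcher.update-interval` mentioned above governs how often Flink's own REST API / web UI refreshes its metric cache; it does not throttle what the Prometheus reporter exposes.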
Query related to Minimum scrape interval for Prometheus and fetching metrics of all vertices in a job through Flink Rest API
Hi team,

I have two queries as mentioned below:

*Query1:*
I am using the PrometheusReporter to expose metrics to the Prometheus server. What should be the minimum recommended scrape interval to be defined on the Prometheus server? Is there any interval in which Flink reports metrics?

*Query2:*
Is there any way I can fetch the metrics of all vertices (including subtasks) of a job through a single monitoring REST API call to Flink?

As of now, I first find the vertices and then query each vertex for its metrics, as below:

*Step 1:* Find the job ID (http://<host>:<port>/jobs)
*Step 2:* Find the vertex IDs (http://<host>:<port>/jobs/<jobId>)
*Step 3:* Find the aggregated metrics (including parallelism) of a vertex (http://<host>:<port>/jobs/<jobId>/vertices/<vertexId>/subtasks/metrics?get=<metric1>,<metric2>)

So I have to invoke multiple REST APIs, one per vertex ID. Is there any optimised way to get metrics of all vertices?

Thanks & Regards,
Ashutosh
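The three steps above can be sketched as a small script that walks the REST API and collects the aggregated subtask metrics for every vertex. The base URL and metric names are assumptions for illustration; the endpoints are the standard Flink monitoring REST endpoints:

```python
import json
from urllib.request import urlopen

# Hypothetical JobManager REST endpoint -- adjust to your deployment.
BASE = "http://localhost:8081"


def metrics_url(base, job_id, vertex_id, metrics):
    """Build the aggregated-subtask metrics URL for one vertex (step 3)."""
    return (f"{base}/jobs/{job_id}/vertices/{vertex_id}"
            f"/subtasks/metrics?get={','.join(metrics)}")


def fetch_json(url):
    """GET a REST endpoint and decode the JSON body."""
    with urlopen(url) as resp:
        return json.load(resp)


def all_vertex_metrics(base, metrics):
    """Steps 1-3 in one loop: jobs -> vertices -> aggregated metrics.

    Returns {(job_id, vertex_id): metrics_payload}. There is still one
    request per vertex -- the API has no single endpoint for all of them.
    """
    result = {}
    for job in fetch_json(f"{base}/jobs")["jobs"]:          # step 1
        detail = fetch_json(f"{base}/jobs/{job['id']}")      # step 2
        for vertex in detail["vertices"]:
            url = metrics_url(base, job["id"], vertex["id"], metrics)
            result[(job["id"], vertex["id"])] = fetch_json(url)  # step 3
    return result


if __name__ == "__main__":
    print(all_vertex_metrics(BASE, ["numRecordsIn", "numRecordsOut"]))
```

This doesn't reduce the number of requests (each vertex still needs its own metrics call), but it automates the fan-out so a single invocation covers every job and vertex.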