I do not think we can add an additional port to the rest service, since it
is created internally by Flink.

Actually, I do not suggest scraping the metrics from the rest service.
Instead, the port on the pod should be used, because the metrics might not
be reported correctly if multiple JobManagers are running.
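
For example, if the Prometheus Operator is installed, a PodMonitor along
these lines can scrape the pods directly. This is only a sketch: the
monitor name and the app label value are placeholders, so adjust them to
match the labels on your Flink pods.

apiVersion: monitoring.coreos.com/v1
kind: PodMonitor
metadata:
  name: flink-metrics          # hypothetical name
spec:
  selector:
    matchLabels:
      app: my-flink-app        # assumption: must match your Flink pods' labels
  podMetricsEndpoints:
    - port: metrics            # containerPort name from the podTemplate snippet below
      path: /metrics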


Best,
Yang

On Mon, Sep 5, 2022 at 15:00, Javier Vegas <jve...@strava.com> wrote:

> What I would need is to set
>
> ports:
>     - name: metrics
>       port: 9999
>       protocol: TCP
>
> in the generated YAML for the appname-rest service, which would properly
> aggregate the metrics from the pods, but I can't figure out how to do that
> either from the job deployment file or by modifying the operator templates
> in the Helm chart. Is there any way I can modify the ports in the Flink
> rest service?
>
>
> Thanks,
>
>
> Javier Vegas
>
>
>
> On Sun, Sep 4, 2022 at 1:59, Javier Vegas (<jve...@strava.com>)
> wrote:
>
>> Hi, Biao!
>>
>> Thanks for the fast response! Setting that in the podTemplate opens the
>> metrics port on the pods, but unfortunately not on the rest service. Not
>> sure if that is standard procedure, but my Prometheus setup scrapes the
>> metrics port on services but not on pods. In my previous non-operator
>> standalone setup, the metrics port on the service aggregated all the
>> pods' metrics and Prometheus scraped that, so I was trying to reproduce
>> that by opening the port on the rest service.
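>>
>> A separate Service selecting the same pods could in principle expose
>> just the metrics port without touching the Flink-managed rest service.
>> A minimal sketch; the name and the app label value are placeholders and
>> would need to match the labels the operator puts on the pods:
>>
>> apiVersion: v1
>> kind: Service
>> metadata:
>>   name: my-flink-app-metrics  # hypothetical name
>> spec:
>>   selector:
>>     app: my-flink-app         # assumption: must match your Flink pods' labels
>>   ports:
>>     - name: metrics
>>       port: 9999              # the reporter port used in this setup
>>       targetPort: 9999
>>       protocol: TCP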
>>
>>
>>
>> El dom, 4 sept 2022 a las 1:03, Geng Biao (<biaoge...@gmail.com>)
>> escribió:
>>
>>> Hi Javier,
>>>
>>>
>>>
>>> You can use the podTemplate to expose the port in the Flink containers.
>>>
>>> Here is a snippet:
>>>
>>> spec:
>>>   flinkVersion: v1_15
>>>   flinkConfiguration:
>>>     state.savepoints.dir: file:///flink-data/flink-savepoints
>>>     state.checkpoints.dir: file:///flink-data/flink-checkpoints
>>>     metrics.reporter.prom.factory.class: org.apache.flink.metrics.prometheus.PrometheusReporterFactory
>>>   serviceAccount: flink
>>>   podTemplate:
>>>     metadata:
>>>       annotations:
>>>         prometheus.io/path: /metrics
>>>         prometheus.io/port: "9249"
>>>         prometheus.io/scrape: "true"
>>>     spec:
>>>       serviceAccount: flink
>>>       containers:
>>>         - name: flink-main-container
>>>           volumeMounts:
>>>             - mountPath: /flink-data
>>>               name: flink-volume
>>>           ports:
>>>             - containerPort: 9249
>>>               name: metrics
>>>               protocol: TCP
>>>       volumes:
>>>         - name: flink-volume
>>>           emptyDir: {}
>>>
>>>
>>>
>>> The metrics.reporter.prom.factory.class line and the ports section show
>>> how to specify the metric reporter and expose the metrics port. The
>>> annotations are not required if you use a PodMonitor or ServiceMonitor.
>>> Hope it can help!
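>>>
>>> For a plain Prometheus without the Prometheus Operator, a scrape job
>>> along these lines honors those prometheus.io annotations. This is a
>>> generic sketch of the common annotation-based pod discovery pattern,
>>> not something specific to Flink; the job name is arbitrary.
>>>
>>> scrape_configs:
>>>   - job_name: kubernetes-pods
>>>     kubernetes_sd_configs:
>>>       - role: pod
>>>     relabel_configs:
>>>       # keep only pods annotated with prometheus.io/scrape: "true"
>>>       - source_labels: [__meta_kubernetes_pod_annotation_prometheus_io_scrape]
>>>         action: keep
>>>         regex: "true"
>>>       # take the metrics path from the prometheus.io/path annotation
>>>       - source_labels: [__meta_kubernetes_pod_annotation_prometheus_io_path]
>>>         action: replace
>>>         target_label: __metrics_path__
>>>         regex: (.+)
>>>       # rewrite the target address to use the annotated port (9249 here)
>>>       - source_labels: [__address__, __meta_kubernetes_pod_annotation_prometheus_io_port]
>>>         action: replace
>>>         regex: ([^:]+)(?::\d+)?;(\d+)
>>>         replacement: $1:$2
>>>         target_label: __address__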
>>>
>>>
>>>
>>> Best,
>>>
>>> Biao Geng
>>>
>>>
>>>
>>> *From: *Javier Vegas <jve...@strava.com>
>>> *Date: *Sunday, September 4, 2022 at 10:19 AM
>>> *To: *user <user@flink.apache.org>
>>> *Subject: *How to open a Prometheus metrics port on the rest service
>>> when using the Kubernetes operator?
>>>
>>> I am migrating my Flink app from standalone Kubernetes to the Kubernetes
>>> operator. It is going well, but I ran into a problem: I cannot figure out
>>> how to open a Prometheus metrics port on the rest service to collect all
>>> my custom metrics from the task managers. Note that this is different from
>>> the "How to Enable Prometheus" instructions at
>>> https://nightlies.apache.org/flink/flink-kubernetes-operator-docs-main/docs/operations/metrics-logging/#how-to-enable-prometheus-example
>>> which collect the operator pod metrics; what I am trying to do is open a
>>> port on the rest service to make my job metrics available to Prometheus.
>>>
>>>
>>>
>>> Thanks,
>>>
>>>
>>>
>>> Javier Vegas
>>>
>>
