I have not... I'm still working on seeing whether it can work.
Thank you
I still believe Prometheus would benefit from official support for this,
like what others have built in the link I posted.
On Monday, December 14, 2020 at 16:06:35 UTC+1, Christian Hoffmann wrote:
> Hi,
>
> On 2020-12-1
Hi everyone,
I would like to adjust alert levels for f.ex. disk space, so that hosts that
match some tag (like environment: dev) have a different level.
I see I am not the only one with such needs - these guys even implemented
their own "extension" to Prometheus:
https://www.lablabs.io/2020/04/19/h
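Roughly what I'm after, sketched as two alerting rules with different thresholds (just a sketch, assuming node_exporter filesystem metrics and that the targets already carry an environment label; names and thresholds are placeholders):

groups:
  - name: disk-space
    rules:
      # Default threshold for everything that is not dev.
      - alert: DiskSpaceLow
        expr: node_filesystem_avail_bytes{environment!="dev"} / node_filesystem_size_bytes < 0.15
        for: 15m
        labels:
          severity: critical
      # Looser threshold for dev hosts.
      - alert: DiskSpaceLowDev
        expr: node_filesystem_avail_bytes{environment="dev"} / node_filesystem_size_bytes < 0.05
        for: 15m
        labels:
          severity: warning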
> maturity and more adoption of one of the options: Prometheus Operator: See:
> https://github.com/prometheus-operator/prometheus-operator
>
> Kind Regards,
> Bartek Płotka (@bwplotka)
>
>
> On Tue, 1 Sep 2020 at 13:37, klavs@gmail.com
> wrote:
>
>> Hi,
>>
My Prometheus is suddenly seeing 2 pods as up == 0, but when I look at them
they have status Running 3/3 and Running 2/2 respectively, so they seem just
fine.
How can I debug why they end up with the up metric set to 0?
promql query: up == 0
returns:
up{app="dynamic-gateway-service",c
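To narrow it down, I have been looking at the labels on the failing targets, since as far as I understand, up only says whether the last scrape of that target succeeded, not whether the pod itself is healthy (just the queries I'm poking at):

# Which scrape jobs / instances do the failing targets belong to?
count by (job, instance) (up == 0)

# Full label set, to see which address and port Prometheus actually scrapes:
up{app="dynamic-gateway-service"} == 0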
Hi,
I have a pod which already has:
prometheus.io/target: 127.0.0.1:1161
But I also want the kubernetes_sd (kubernetes-jobs) scraper to scrape port
3903 on /metrics (a different container in the pod exposes that port).
I can't find any documentation for kubernetes_sd on how to do that, and my
goog
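What I have sketched so far is a separate scrape job (just a guess, assuming role: pod, where every declared container port becomes its own target; the job name is a placeholder):

- job_name: kubernetes-pods-port-3903
  kubernetes_sd_configs:
    - role: pod
  relabel_configs:
    # Keep only the targets for the declared container port 3903, so
    # __address__ should already be <pod IP>:3903; metrics_path defaults
    # to /metrics, so nothing else should be needed.
    - source_labels: [__meta_kubernetes_pod_container_port_number]
      regex: "3903"
      action: keep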
Hi,
I find myself wanting to verify that I am getting the relevant metrics
from my Prometheus job, and to check that, I tried:
count(*{job="prometheus"})
But that fails, as the * is not allowed.
Using p* gets around that (but only catches metrics starting with p :) - but
it then fail
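What I'm trying now instead (a sketch; as far as I know a selector without a metric name is allowed as long as at least one matcher does not match the empty string):

# Count every series scraped by the job, without a metric-name wildcard:
count({job="prometheus"})

# Or broken down per metric name:
count by (__name__) ({__name__=~".+", job="prometheus"})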
You can make do with a RoleBinding, but you are correct that you need a
ClusterRole.
If you don't need to scrape /metrics on pods (f.ex. because you expose it
as a service on the ones you need to), then AFAIK you could do away with
nonResourceURLs and hence only need a Role.
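For reference, a minimal sketch of the ClusterRole I had in mind (roughly the commonly used example; trim the resources to what you actually scrape):

apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: prometheus
rules:
  - apiGroups: [""]
    resources: ["nodes", "nodes/proxy", "services", "endpoints", "pods"]
    verbs: ["get", "list", "watch"]
  # Only needed when scraping non-resource endpoints such as /metrics
  # on the API server or kubelets directly.
  - nonResourceURLs: ["/metrics"]
    verbs: ["get"]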
On Friday, May 29, 2020 at 09:38:3