Re: [prometheus-users] overriding alert levels?

2020-12-14 Thread klavs....@gmail.com
I have not - I'm still working on seeing if it can work. Thank you. I still believe Prometheus would benefit from official support for this, like the extension I linked to that others built. On Monday, December 14, 2020 at 16:06:35 UTC+1, Christian Hoffmann wrote: > Hi, > > On 2020-12-1

[prometheus-users] overriding alert levels?

2020-12-14 Thread klavs....@gmail.com
Hi everyone, I would like to adjust alert levels for f.ex. disk space - so hosts that match some tag (like environment: dev) have a different level. I see I am not the only one with such needs - these guys even implemented their own "extension" to Prometheus: https://www.lablabs.io/2020/04/19/h
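One common workaround (a sketch only, not what the linked extension does; the metric names assume node_exporter, and the environment label is assumed to be attached to targets via relabeling) is to duplicate the rule with label matchers per environment:

    groups:
    - name: disk-space
      rules:
      # Non-dev hosts page at 15% free space.
      - alert: DiskSpaceLow
        expr: node_filesystem_avail_bytes{environment!="dev"} / node_filesystem_size_bytes{environment!="dev"} * 100 < 15
        for: 10m
        labels:
          severity: critical
      # Dev hosts only warn, and only below 5% free.
      - alert: DiskSpaceLowDev
        expr: node_filesystem_avail_bytes{environment="dev"} / node_filesystem_size_bytes{environment="dev"} * 100 < 5
        for: 10m
        labels:
          severity: warning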

Re: [prometheus-users] Annotations to scrape multiple containers in same pod?

2020-09-02 Thread klavs....@gmail.com
maturity and more adoption of one of the options: Prometheus Operator: See: > https://github.com/prometheus-operator/prometheus-operator > > Kind Regards, > Bartek Płotka (@bwplotka) > > > On Tue, 1 Sep 2020 at 13:37, klavs@gmail.com > wrote: > >> Hi, >>

[prometheus-users] up wrongly set to 0 on 2 pods

2020-09-02 Thread klavs....@gmail.com
My Prometheus is suddenly seeing 2 pods as up == 0 - but when I look at them, they have status Running 3/3 and Running 2/2 respectively, so they seem just fine. How can I debug why they end up with the up metric set to 0? promql query: up == 0 returns: up{app="dynamic-gateway-service",c
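For this kind of problem the Status -> Targets page shows the last scrape error per target. As a sketch, these stock queries (standard scrape metrics, nothing assumed beyond defaults) can also help separate a timeout from a refused connection:

    # Down targets with their full label set (job, namespace, pod, ...):
    up == 0

    # Scrape duration for only the down targets: a value near the scrape
    # timeout suggests timeouts; near zero suggests connection refused.
    scrape_duration_seconds and on (instance, job) (up == 0)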

[prometheus-users] Annotations to scrape multiple containers in same pod?

2020-09-01 Thread klavs....@gmail.com
Hi, I have a pod which already has: prometheus.io/target: 127.0.0.1:1161 But I also want the kubernetes_sd (kubernetes-jobs) scraper to scrape port 3903 on /metrics (a different container in the pod exposes that port). I can't find any documentation on kubernetes_sd on how to do that - and my goog
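One way this is often solved (a sketch, not an answer from the thread; the job name and port names are hypothetical) is a pod-role scrape job: with role: pod, kubernetes_sd emits one target per declared container port, so a keep rule on the port name yields one target per matching container:

    - job_name: kubernetes-pods-multiport
      kubernetes_sd_configs:
      - role: pod
      relabel_configs:
      # Keep one target per container port named "metrics" or "mtail";
      # each matching container in the pod becomes its own target.
      - source_labels: [__meta_kubernetes_pod_container_port_name]
        regex: metrics|mtail
        action: keep
      - source_labels: [__meta_kubernetes_pod_container_name]
        target_label: container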

[prometheus-users] Counting all metrics with same label {job="mycustomjob"} ?

2020-08-17 Thread klavs....@gmail.com
Hi, I find myself wanting to verify that I AM getting the relevant metrics from my prometheus job, and to check that, I tried to do: count(*{job="prometheus"}) But that fails with the * not being allowed. Using p* gets around that (but only catches metrics starting with p :) - but it then fail
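For the record (plain PromQL, independent of this thread): a selector needs no metric name at all, as long as at least one matcher does not match the empty string, so both of these count every series of a job:

    # Count all series for the job, regardless of metric name:
    count({job="mycustomjob"})

    # Equivalent, with the metric-name matcher spelled out:
    count({__name__=~".+", job="mycustomjob"})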

[prometheus-users] Re: Restricting Prometheus to a particular Namespace

2020-08-17 Thread klavs....@gmail.com
You can make do with a RoleBinding - but you need a ClusterRole, correct. If you don't need to scrape /metrics on pods (f.ex. because you expose it as a service on the ones you need to) - then AFAIK you could do away with nonResourceURLs and hence only need a Role. On Friday, May 29, 2020 at 09.38.3
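A minimal sketch of the namespaced variant (the names and namespace are hypothetical; nonResourceURLs such as /metrics can only appear in a ClusterRole, which is why dropping them lets a plain Role suffice):

    apiVersion: rbac.authorization.k8s.io/v1
    kind: Role
    metadata:
      name: prometheus
      namespace: monitoring
    rules:
    # Enough for kubernetes_sd to discover targets in this one namespace;
    # no nonResourceURLs entry, so a namespaced Role is sufficient.
    - apiGroups: [""]
      resources: ["pods", "services", "endpoints"]
      verbs: ["get", "list", "watch"]
    ---
    apiVersion: rbac.authorization.k8s.io/v1
    kind: RoleBinding
    metadata:
      name: prometheus
      namespace: monitoring
    roleRef:
      apiGroup: rbac.authorization.k8s.io
      kind: Role
      name: prometheus
    subjects:
    - kind: ServiceAccount
      name: prometheus
      namespace: monitoring

The kubernetes_sd_configs then also need namespaces: { names: [monitoring] } so discovery stays within what the Role permits.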