If you refactor the rules a bit, you may find them easier to maintain:
alert1:
  expr: probe_success{somelabel="XYZ"} == 0
  labels:
    someswitch: foo
alert2:
  expr: probe_success{somelabel="ABC"} == 0
  labels:
    someswitch: bar
alert3:
  expr: |
    probe_success == 0
    unless probe_success{somelabel=~"XYZ|ABC"}
If your special cases have a *longer* "for" duration than the general ones,
then I guess they won't be useful for inhibiting the general ones, since
the special cases will start firing too late relative to the general ones
to inhibit them. I guess you could introduce a copy of each special-case
alert with the general (shorter) "for" duration, used only for inhibition.
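For what it's worth, a minimal sketch of what such an inhibition could look
like in alertmanager.yml, reusing the someswitch label from the refactor
above; the alert name and the equal label are assumptions:

inhibit_rules:
  # while a special-case alert (carrying someswitch) fires for an instance,
  # suppress the general alert3 for that same instance
  - source_matchers:
      - someswitch=~"foo|bar"
    target_matchers:
      - alertname="alert3"
    equal: ['instance']

As noted above, this only helps if the special-case alerts actually begin
firing no later than the general one.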
hi,
for the same metric, I want to have multiple alerting rules, so that some
special cases get a longer "for:" time.
the way I do this now is:
alert1  # general
  probe_success{somelabel!~"specialcase1|specialcase2"}
alert2  # special
  probe_success{somelabel=~"specialcase1|specialcase2"}
..
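Spelled out as full rule definitions, that approach might look roughly like
this; the alert names and the "for:" durations are placeholders:

groups:
  - name: probe-alerts
    rules:
      - alert: ProbeDownGeneral
        expr: probe_success{somelabel!~"specialcase1|specialcase2"} == 0
        for: 5m    # shorter wait for the general case
      - alert: ProbeDownSpecial
        expr: probe_success{somelabel=~"specialcase1|specialcase2"} == 0
        for: 30m   # longer wait for the special cases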
Hi,
Looking for some general advice about sharing Prometheus alert rules between
regions. We currently push the same alert rules to all regions, and
sometimes we run into situations where we have a specific job in region X
but not in region Y.
This is fine for basic cases, such as *up{job="jenkins"} == 0*, which simply
returns no series (and so never fires) in a region that doesn't run the job.
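One place where shared rules typically stop being fine is absence-style
alerts; a sketch, assuming a rule along these lines exists (the job name is
purely illustrative):

- alert: JenkinsTargetMissing
  # absent() returns 1 when no matching series exist at all, so this
  # fires in every region where the jenkins job was never deployed
  expr: absent(up{job="jenkins"})
  for: 10m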
Hi,
I'm new to Prometheus. I'm now trying to configure alerts and have a problem.
I have 2 rule files: alert.rules1.yml and alert.rules2.yml
*alert.rules1.yml*
groups:
  - name: iDrac
    rules:
      # ICMP ##
      - alert: host_is_not_available_via_icmp
        expr: probe_success{job="idrac-
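For both files to be picked up, they also need to be listed under
rule_files in prometheus.yml; a minimal sketch, assuming the rule files sit
next to the main config:

# prometheus.yml
rule_files:
  - alert.rules1.yml
  - alert.rules2.yml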
I am trying to do a scaling operation with the Prometheus Alertmanager. I have
configured a webhook receiver in Alertmanager to execute my ScaleUp and
ScaleDown actions. I have the following things set in the configuration:
evaluation_interval: 1m
scrape_interval: 1s
There is no grouping in Alertmanager.
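A rough sketch of what that setup could look like; the webhook URL is a
hypothetical placeholder, and group_by: ['...'] is one way of expressing
"no grouping" (it groups by all labels, so alerts are not aggregated):

# prometheus.yml
global:
  scrape_interval: 1s
  evaluation_interval: 1m

# alertmanager.yml
route:
  receiver: scaler
  group_by: ['...']
receivers:
  - name: scaler
    webhook_configs:
      - url: http://scaler.example.internal/hook   # hypothetical ScaleUp/ScaleDown endpoint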