Hi Danny,

Did you try using a regex (target_match_re) for the target_match?
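Something like this (untested sketch, adjust the regex to the severities you
actually want silenced):

inhibit_rules:
- source_match:
    alertname: MaintenanceMode
  target_match_re:
    severity: 'warning|critical|info'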
Or, alternatively, adding another inhibit rule with target_match on severity critical?
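For example, keeping your existing rule and adding a second one with the same
source_match:

inhibit_rules:
- source_match:
    alertname: MaintenanceMode
  target_match:
    severity: warning
- source_match:
    alertname: MaintenanceMode
  target_match:
    severity: critical

You may also want an equal: list (e.g. equal: ['instance']) if maintenance
should only inhibit alerts for the same instance; that depends on how your
maintenance_mode metric is labelled.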

On Wed, Apr 15, 2020, 7:17 AM Danny de Waard <waa...@gmail.com> wrote:

> I have an Alertmanager setup and a Prometheus setup that are working well.
> I can inhibit severity 'warning' alerts when maintenance is on.
>
> But how do I also inhibit critical (and/or) info messages when maintenance
> is on?
>
> So if my maintenance_mode == 1, all warning alerts are inhibited. But I
> also want critical to be inhibited.
>
> How do I do that?
>
> prometheus rules
> groups:
> - name: targets
>   rules:
>   - alert: MaintenanceMode
>     expr: maintenance_mode == 1
>     for: 1m
>     labels:
>       severity: warning
>     annotations:
>       summary: "This is a maintenance alert"
>       description: "Fires during maintenance mode and is routed to a
> blackhole by Alertmanager"
>   - alert: monitor_service_down
>     expr: up == 0
>     for: 40s
>     labels:
>       severity: critical
>     annotations:
>       summary: "A exporter service is non-operational"
>       description: "One of the exporters on {{ $labels.instance }} is or
> was down. Check the up/down dashboard in grafana.
> http://lsrv2289.linux.rabobank.nl:3000/d/RSNFpMXZz/up-down-monitor?refresh=1m
> "
>   - alert: server_down
>     expr: probe_success == 0
>     for: 30s
>     labels:
>       severity: critical
>     annotations:
>       summary: "Server is down (no probes are up)"
>       description: "Server {{ $labels.instance }} is down."
>   - alert: loadbalancer_down
>     expr: loadbalancer_stats < 1
>     for: 30s
>     labels:
>       severity: critical
>     annotations:
>       summary: "A loadbalancer is down"
>       description: "Loadbalancer for {{ $labels.instance }} is down."
>   - alert: high_cpu_load15
>     expr: node_load15 > 4.5
>     for: 900s
>     labels:
>       severity: critical
>     annotations:
>       summary: "Server under high load (load 15m) for 15 minutes."
>       description: "Host is under high load, the avg load 15m is at {{
> $value}}. Reported by instance {{ $labels.instance }} of job {{ $labels.job
> }}."
>
>
>
>
>
>
> alertmanager.yml
>
> global:
> route:
>   group_by: [instance,severity,job]
>   receiver: 'default'
>   routes:
>    - match:
>       alertname: 'MaintenanceMode'
>      receiver: 'blackhole'
>    - match:
>       severity: warning
>       job: PAT
>      receiver: 'pat'
>    - match:
>       severity: warning
>       job: PROD
>      receiver: 'prod'
>    - match:
>       severity: critical
>       job: PAT
>      receiver: 'pat-crit'
>    - match:
>       severity: critical
>       job: PROD
>      receiver: 'prod-crit'
>      continue: true
>    - match:
>       severity: critical
>       job: PROD
>      receiver: 'sms-waard'
>    - match:
>       severity: info
>      receiver: 'info'
>    - match:
>       severity: atombomb
>      receiver: 'webhook'
> receivers:
>   - name: 'default'
>     email_configs:
>      - to: 'mailaddress' ##fill in your email
>        from: 'alertmanager_defa...@superheroes.com'
>        smarthost: 'localhost:25'
>        require_tls: false
>   - name: 'pat'
>     email_configs:
>      - to: 'mailaddress' ##fill in your email
>        from: 'alertmanager_...@superheroes.com'
>        smarthost: 'localhost:25'
>        require_tls: false
>   - name: 'prod'
>     email_configs:
>      - to: 'mailaddress, mailaddress' ##fill in your email
>        from: 'alertmanager_p...@superheroes.com'
>        smarthost: 'localhost:25'
>        require_tls: false
>   - name: 'pat-crit'
>     email_configs:
>      - to: 'mailaddress' ##fill in your email
>        from: 'critical-alertmanager_...@superheroes.com'
>        smarthost: 'localhost:25'
>        require_tls: false
>   - name: 'prod-crit'
>     email_configs:
>      - to: 'mailaddress, mailaddress' ##fill in your email
>        from: 'critical-alertmanager_p...@superheroes.com'
>        smarthost: 'localhost:25'
>        require_tls: false
>   - name: 'info'
>     email_configs:
>      - to: 'mailaddress' ##fill in your email
>        from: 'alertmanager_i...@superheroes.com'
>        smarthost: 'localhost:25'
>        require_tls: false
>   - name: 'sms-waard'
>     email_configs:
>      - to: 'mailaddress' ##fill in your email
>        from: 'alertmanager_i...@superheroes.com'
>        smarthost: 'localhost:25'
>        require_tls: false
>   - name: 'webhook'
>     webhook_configs:
>       - url: 'http://127.0.0.1:9000'
>   - name: 'blackhole'
>
> inhibit_rules:
> - source_match:
>     alertname: MaintenanceMode
>   target_match:
>     severity: warning
>
>
