[prometheus-users] Re: blackbox metrics scraping
Thank you Brian. "up to T - 5 minutes" - is this 5 minutes the scraping interval?

On Monday, July 25, 2022 at 4:43:33 AM UTC+8 Brian Candler wrote:
> The alerting rules run on their own schedule, separately from the scraping schedule.
>
> The expression "probe_success == 0" uses the value of that metric in the Prometheus TSDB *at the current instant of time*. However, the value of a metric at any given time T is the most recent value *on or before* time T (up to T - 5 minutes).
>
> On Friday, 22 July 2022 at 05:01:18 UTC+1 ninag...@gmail.com wrote:
>> Hi all,
>>
>> I'm trying to define an alert rule as below; the scrape and evaluation intervals are both 3m.
>>
>> I checked the logs: blackbox sends a probe every 2-3 seconds, so the metrics are generated every 2-3 seconds, and that data is compared against the alert rules to determine whether an alert is triggered?
>>
>> My question is whether Prometheus compares the live data from the blackbox probe, or the data Prometheus scraped in the last 3 minutes?
>>
>> prometheus.yml: |-
>>   global:
>>     scrape_interval: 3m
>>     evaluation_interval: 3m
>>
>> - alert: EndpointDown
>>   expr: probe_success == 0
>>   for: 5s
>>   labels:
>>     severity: critical
>>   annotations:
>>     description: Service {{ $labels.instance }} is unavailable.
>>     value: DOWN ({{ $value }})
>>     summary: "Endpoint {{ $labels.instance }} is down."
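A minimal sketch of how those intervals interact (the 6m below is an illustrative value, not from the thread): with scrape_interval: 3m, every rule evaluation sees the most recent probe_success sample, which can already be up to roughly 3 minutes old, so a for: of 5s adds almost nothing; a for: spanning about two scrape intervals avoids firing on a single missed probe.

    groups:
      - name: blackbox
        rules:
          - alert: EndpointDown
            expr: probe_success == 0
            for: 6m        # about two scrape intervals at scrape_interval: 3m
            labels:
              severity: critical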
[prometheus-users] Re: blackbox metrics scraping
The alerting rules run on their own schedule, separately from the scraping schedule.

The expression "probe_success == 0" uses the value of that metric in the Prometheus TSDB *at the current instant of time*. However, the value of a metric at any given time T is the most recent value *on or before* time T (up to T - 5 minutes).

On Friday, 22 July 2022 at 05:01:18 UTC+1 ninag...@gmail.com wrote:
> Hi all,
>
> I'm trying to define an alert rule as below; the scrape and evaluation intervals are both 3m.
>
> I checked the logs: blackbox sends a probe every 2-3 seconds, so the metrics are generated every 2-3 seconds, and that data is compared against the alert rules to determine whether an alert is triggered?
>
> My question is whether Prometheus compares the live data from the blackbox probe, or the data Prometheus scraped in the last 3 minutes?
>
> prometheus.yml: |-
>   global:
>     scrape_interval: 3m
>     evaluation_interval: 3m
>
> - alert: EndpointDown
>   expr: probe_success == 0
>   for: 5s
>   labels:
>     severity: critical
>   annotations:
>     description: Service {{ $labels.instance }} is unavailable.
>     value: DOWN ({{ $value }})
>     summary: "Endpoint {{ $labels.instance }} is down."
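A small illustration of that lookback behaviour (a sketch, not part of the thread): an instant selector such as probe_success evaluated at time T returns the newest sample on or before T, looking back at most 5 minutes - the default lookback window, adjustable with the --query.lookback-delta startup flag in recent Prometheus versions - so the alert expression behaves roughly like:

    probe_success == 0
    # behaves approximately like
    last_over_time(probe_success[5m]) == 0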
Re: [prometheus-users] deactive alert after hook
On 24/07/2022 11:10, Milad Devops wrote:
> Hi all,
>
> I use Prometheus to create alert rules and hook alerts using Alertmanager. My scenario is as follows:
>
> - The log publishing service sends logs to a Prometheus exporter
> - Prometheus takes the logs every second and matches them against our rules
> - If a log matches our rules, Alertmanager sends an alert to the frontend application. It also saves the alert in Elasticsearch
>
> My problem is that when each alert is sent, all the previous alerts are also stored in Elasticsearch as a single log and sent to my frontend service as a notification (webhook).
>
> Is there a way I can change the alert status to resolved after the hook so that it won't be sent again on subsequent hooks? Or delete the previous logs completely from Prometheus after the hook? Or any other suggested way you have?
>
> Thank you in advance

I'm not sure I really understand what you are asking, due to your mention of logs. Are you saying that you are using an exporter (for example mtail) which is consuming logs and then generating metrics?

When you create an alerting rule in Prometheus it performs the PromQL query given, and if there are any results an alert is fired. Once the PromQL query stops returning results (or returns a different set of time series) the alert is resolved.

So, for example, if you had a simple query that said "alert if the number of error logs [stored in a counter metric] increases by 5 or more in the last 5 minutes", then as soon as the metric showed an increase of at least 5 over the last 5 minutes the alert would fire. It would then continue to fire until that is no longer true - so if the counter kept recording error log lines such that the increase stayed at 5 or more per 5 minutes, it would keep firing. It would only resolve once there were fewer than 5 new log lines recorded over the past 5 minutes.

Alertmanager just routes alerts that are generated within Prometheus to other notification/processing systems, such as email or webhooks. It would normally fire the webhook once the alert starts firing, then periodically (if it keeps firing, at a configurable interval), and then finally (optionally) once it resolves. This is a one-way process - nothing about the notification has any impact on whether the alert fires or not. Only the PromQL query controls the alert.

I'm not sure if that helps.

-- 
Stuart Clark
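A minimal sketch of the rule shape described above (the metric name error_log_lines_total and the thresholds are assumptions, not from the thread):

    groups:
      - name: log-alerts
        rules:
          - alert: ErrorLogBurst
            # fires while the counter grew by 5 or more in the last 5 minutes,
            # and resolves on its own once the increase drops below 5
            expr: increase(error_log_lines_total[5m]) >= 5

On the Alertmanager side, how often a still-firing alert is re-sent to a webhook, and whether a "resolved" notification is sent at the end, is controlled by the route's repeat_interval and the receiver's send_resolved option (the values and URL below are illustrative only):

    route:
      receiver: frontend-webhook
      repeat_interval: 4h              # re-notify a still-firing alert this often
    receivers:
      - name: frontend-webhook
        webhook_configs:
          - url: http://frontend.example/hook    # hypothetical endpoint
            send_resolved: true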
[prometheus-users] deactive alert after hook
Hi all,

I use Prometheus to create alert rules and hook alerts using Alertmanager. My scenario is as follows:

- The log publishing service sends logs to a Prometheus exporter
- Prometheus takes the logs every second and matches them against our rules
- If a log matches our rules, Alertmanager sends an alert to the frontend application. It also saves the alert in Elasticsearch

My problem is that when each alert is sent, all the previous alerts are also stored in Elasticsearch as a single log and sent to my frontend service as a notification (webhook).

Is there a way I can change the alert status to resolved after the hook so that it won't be sent again on subsequent hooks? Or delete the previous logs completely from Prometheus after the hook? Or any other suggested way you have?

Thank you in advance
[prometheus-users] Re: Promql
Try this as a starting point:

    some_metric * scalar(hour() < bool 12)

On Friday, 22 July 2022 at 19:42:47 UTC+1 hamidd...@gmail.com wrote:
> Hi everyone,
>
> I'm looking for a formula that restricts a query to night hours only, for example from 12 pm to 12 am.
>
> Can anyone help me?
>
> Cheers.
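A variation on the same idea (a sketch, not from the thread): hour() returns the current hour in UTC, so the bound in the comparison defines the window in UTC and may need shifting for local time. Multiplying by scalar(... bool ...) keeps the series but forces it to 0 outside the window; to drop the samples entirely instead, a set-matching filter can be used:

    # keep some_metric only from 12:00 to 23:59 UTC; adjust the bound for your "night" window
    some_metric and on() (hour() >= 12)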