[prometheus-users] Requests duration optimizing
Hi all!

I have to optimize request duration. As you can guess, for that I need to know how it was before the optimization and how it goes now. At the moment I divide classification_request_duration_seconds_sum by classification_request_duration_seconds_count:

classification_request_duration_seconds_sum / classification_request_duration_seconds_count

I'm here to ask whether this is the proper way for my task, or whether there is another (better) way.

Bucket example:

classification_request_duration_seconds_bucket{app_name="starlette",le="0.005",method="GET",path="/",status_code="200"} 0.0
classification_request_duration_seconds_bucket{app_name="starlette",le="0.01",method="GET",path="/",status_code="200"} 0.0
classification_request_duration_seconds_bucket{app_name="starlette",le="0.025",method="GET",path="/",status_code="200"} 0.0
classification_request_duration_seconds_bucket{app_name="starlette",le="0.05",method="GET",path="/",status_code="200"} 0.0
classification_request_duration_seconds_bucket{app_name="starlette",le="0.075",method="GET",path="/",status_code="200"} 0.0
classification_request_duration_seconds_bucket{app_name="starlette",le="0.1",method="GET",path="/",status_code="200"} 0.0
classification_request_duration_seconds_bucket{app_name="starlette",le="0.25",method="GET",path="/",status_code="200"} 0.0
classification_request_duration_seconds_bucket{app_name="starlette",le="0.5",method="GET",path="/",status_code="200"} 3.0
classification_request_duration_seconds_bucket{app_name="starlette",le="0.75",method="GET",path="/",status_code="200"} 3.0
classification_request_duration_seconds_bucket{app_name="starlette",le="1.0",method="GET",path="/",status_code="200"} 3.0
classification_request_duration_seconds_bucket{app_name="starlette",le="2.5",method="GET",path="/",status_code="200"} 3.0
classification_request_duration_seconds_bucket{app_name="starlette",le="5.0",method="GET",path="/",status_code="200"} 3.0
classification_request_duration_seconds_bucket{app_name="starlette",le="7.5",method="GET",path="/",status_code="200"} 3.0
classification_request_duration_seconds_bucket{app_name="starlette",le="10.0",method="GET",path="/",status_code="200"} 3.0
classification_request_duration_seconds_bucket{app_name="starlette",le="+Inf",method="GET",path="/",status_code="200"} 3.0
classification_request_duration_seconds_count{app_name="starlette",method="GET",path="/",status_code="200"} 3.0
classification_request_duration_seconds_sum{app_name="starlette",method="GET",path="/",status_code="200"} 1.11951099875

Thanks in advance.

--
You received this message because you are subscribed to the Google Groups "Prometheus Users" group. To unsubscribe from this group and stop receiving emails from it, send an email to prometheus-users+unsubscr...@googlegroups.com. To view this discussion on the web visit https://groups.google.com/d/msgid/prometheus-users/f1800760-53f0-4510-b4a1-f05cb0211511n%40googlegroups.com.
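A note on the approach asked about above: dividing the raw _sum by the raw _count gives the average over the metric's entire lifetime, so it reacts very slowly to an optimization. Two common patterns (standard PromQL, using the metric names from the post; the 5m window is an arbitrary choice) are a windowed average via rate(), and percentiles via histogram_quantile, which is usually more informative than the mean for latency work:

```promql
# Average request duration over the last 5 minutes
rate(classification_request_duration_seconds_sum[5m])
  / rate(classification_request_duration_seconds_count[5m])

# Estimated 95th-percentile duration from the histogram buckets
histogram_quantile(0.95,
  sum by (le) (rate(classification_request_duration_seconds_bucket[5m])))
```

Note that histogram_quantile interpolates within buckets, so its accuracy depends on how well the bucket boundaries bracket the actual latencies.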
Re: [prometheus-users] Query with division
I suspected there was something wrong with the labels. Thanks for your answer! That worked.

On Monday, July 6, 2020 at 11:19:12 UTC+3, Aliaksandr Valialkin wrote:
>
> Try the following query:
>
> (rules_job_count{cluster="loco-prod", status="failed"} + ignoring(status) rules_job_count{cluster="loco-prod", status="cancelled"}) / ignoring(status) rules_job_count{cluster="loco-prod", status="finished"}
>
> It instructs Prometheus to ignore the `status` label when performing the addition and division operations. See more details about this at https://prometheus.io/docs/prometheus/latest/querying/operators/#vector-matching
>
> On Mon, Jul 6, 2020 at 10:48 AM Альберт Александров wrote:
>
>> Hi all!
>>
>> I have such metrics:
>>
>> [image: photo_2020-07-06_10-30-12.jpg]
>>
>> I would like to query:
>>
>> (rules_job_count{cluster="loco-prod", status="failed"} + rules_job_count{cluster="loco-prod", status="cancelled"}) / rules_job_count{cluster="loco-prod", status="finished"}
>>
>> But this didn't work. At the same time, this query works:
>>
>> rules_job_count{cluster="loco-prod", status="failed"} + rules_job_count{cluster="loco-prod", status="failed"}
>>
>> Could you please say how to make the first query work?
>
> --
> Best Regards,
>
> Aliaksandr Valialkin, CTO VictoriaMetrics
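As a side note (not from the thread), an equivalent formulation that avoids repeating ignoring(status) is to collapse the failed and cancelled series first with a regex matcher and sum without; a sketch under the same label set:

```promql
sum without (status) (rules_job_count{cluster="loco-prod", status=~"failed|cancelled"})
  / ignoring (status) rules_job_count{cluster="loco-prod", status="finished"}
```

The sum without (status) removes the status label from the left-hand side, and ignoring (status) then lets the division match the remaining labels against the "finished" series.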
[prometheus-users] Query with division
Hi all!

I have such metrics:

[image: photo_2020-07-06_10-30-12.jpg]

I would like to query:

(rules_job_count{cluster="loco-prod", status="failed"} + rules_job_count{cluster="loco-prod", status="cancelled"}) / rules_job_count{cluster="loco-prod", status="finished"}

But this didn't work. At the same time, this query works:

rules_job_count{cluster="loco-prod", status="failed"} + rules_job_count{cluster="loco-prod", status="failed"}

Could you please say how to make the first query work?
Re: [prometheus-users] count_over_time
You made my day. Thanks a lot!

On Thursday, July 2, 2020 at 16:20:21 UTC+3, Mat Arye wrote:
>
> Hi Albert,
>
> I believe you need an additional layer of aggregation to combine series with different gauge_index labels; something like this:
>
> sum without (gauge_index) (
>   count_over_time(platform_asusg_send_status{cluster="clover-test-selectel"}[5m])
> )
>
> On Thu, Jul 2, 2020 at 8:54 AM Альберт Александров wrote:
>
>> Hi all!
>>
>> I have such individual metrics:
>>
>> [image: photo_2020-07-02_15-37-48.jpg]
>>
>> As you can see, they differ from each other by the *gauge_index* label.
>>
>> I would like to count them over time. I tried this:
>>
>> count_over_time(platform_asusg_send_status{cluster="clover-test-selectel"}[5m])
>>
>> But this returns as many graphs as there are gauge_index values. I would like to get a single graph, as if there were no gauge_index label.
>>
>> Could you please say how I can achieve this?
>
> --
> Mat Arye, Timescale-Prometheus <https://github.com/timescale/timescale-prometheus> Team Lead
> See what we're working on (feedback welcome!): tsdb.co/prom-design-doc
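For reference, the same aggregation can be written with by instead of without, listing only the labels to keep; a sketch using the series from the thread (the kept label set is an assumption, since the full label set isn't visible in the digest):

```promql
sum by (cluster) (
  count_over_time(platform_asusg_send_status{cluster="clover-test-selectel"}[5m])
)
```

without drops the named labels and keeps everything else, while by keeps only the named labels; without is usually safer when you are not sure which other labels exist on the series.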
[prometheus-users] count_over_time
Hi all!

I have such individual metrics:

[image: photo_2020-07-02_15-37-48.jpg]

As you can see, they differ from each other by the *gauge_index* label.

I would like to count them over time. I tried this:

count_over_time(platform_asusg_send_status{cluster="clover-test-selectel"}[5m])

But this returns as many graphs as there are gauge_index values. I would like to get a single graph, as if there were no gauge_index label.

Could you please say how I can achieve this?