On 07/10/2022 04:09, Muthuveerappan Periyakaruppan wrote:
we have a situation where we have 8 to 15 million head series in
each Prometheus, and we have 7 instances of them (federated). Our
Prometheus servers are constantly flooded handling the incoming
metrics and the back-end recording rules.
8-15 million time series on a single Prometheus instance is pretty high.
What spec machine/pod are these?
When you say "flooded" what are you meaning?
One thought that came to mind was: do we have something similar to log
levels for Prometheus metrics? If so, we could benefit from it by
configuring all targets to run at the "error" level in production and
at the "debug"/"info" level in development. This would help control
the flooding of metrics.
I'm not sure I understand what you are suggesting. What would be the
difference between these hypothetical "error" and "debug" levels? Do
you mean that some metrics would only be exposed in some environments?
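(For reference: Prometheus has no built-in notion of per-metric "levels", but a comparable effect can be achieved with `metric_relabel_configs`, which drop series after scraping and before storage. A minimal sketch, assuming a hypothetical job `app` whose verbose metrics follow a made-up `myapp_debug_*` naming convention:

```yaml
# Sketch only: job name, target, and metric-name pattern are
# hypothetical placeholders, not from the original thread.
scrape_configs:
  - job_name: "app"              # hypothetical job name
    static_configs:
      - targets: ["app:9100"]    # hypothetical target
    metric_relabel_configs:
      # Drop "debug-level" series in production before they are stored.
      - source_labels: [__name__]
        regex: "myapp_debug_.*"  # hypothetical naming convention
        action: drop
```

A development environment would simply omit the `drop` rule, keeping all series.)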
--
Stuart Clark