The standard approach for larger setups is to start sharding Prometheus. In
Kubernetes it's common to have a Prometheus-per-namespace.
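As a rough illustration of the per-namespace pattern, here is a minimal sketch of a Prometheus custom resource as the Prometheus Operator defines it. The names (team-a) and the memory request are illustrative assumptions; with serviceMonitorNamespaceSelector left unset, an Operator-managed instance only picks up ServiceMonitors from its own namespace:

```yaml
# Sketch: one Prometheus instance scoped to a single namespace (team-a is
# a hypothetical example). serviceMonitorNamespaceSelector is deliberately
# omitted so only ServiceMonitors in this namespace are selected.
apiVersion: monitoring.coreos.com/v1
kind: Prometheus
metadata:
  name: team-a
  namespace: team-a
spec:
  serviceMonitorSelector: {}   # match all ServiceMonitors in this namespace
  resources:
    requests:
      memory: 2Gi              # illustrative; size per the calculator below
```

Repeating this per namespace spreads the series load across several smaller instances instead of one large one.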
You may also want to look into how many metrics each of your pods is
exposing. 20GB of memory suggests that you probably have over 1M active
series (prometheus_tsdb_head_series).
There's a calculator here:
https://www.robustperception.io/how-much-ram-does-prometheus-2-x-need-for-cardinality-and-ingestion
You can see from this how much difference increasing the scrape interval
would make.
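As a back-of-envelope version of that calculation (the constants below are illustrative assumptions, not the linked article's exact figures), you can see why the scrape interval only moves part of the total: per-series overhead in the head block is fixed, while only the ingested-sample term scales with scrape frequency.

```python
# Rough estimate of Prometheus 2.x head-block memory. The constants are
# assumed round numbers for illustration, not measured values.
BYTES_PER_SERIES = 8 * 1024   # assumed overhead per active head series
BYTES_PER_SAMPLE = 2          # assumed average bytes per ingested sample

def head_memory_gib(active_series, samples_per_series_per_sec,
                    window_sec=2 * 3600):
    """Estimate memory for the ~2h head window, in GiB."""
    series_bytes = active_series * BYTES_PER_SERIES
    sample_bytes = (active_series * samples_per_series_per_sec
                    * window_sec * BYTES_PER_SAMPLE)
    return (series_bytes + sample_bytes) / 1024**3

# Doubling the scrape interval halves the sample term, not the series term:
fast = head_memory_gib(1_000_000, 1 / 15)   # 15s scrape interval
slow = head_memory_gib(1_000_000, 1 / 30)   # 30s scrape interval
print(f"15s: ~{fast:.1f} GiB, 30s: ~{slow:.1f} GiB")
```

Under these assumptions, going from a 15s to a 30s interval saves well under 1 GiB at 1M series, because the fixed per-series cost dominates; reducing series cardinality helps far more.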
On 2020-06-17 08:34, Tomer Leibovich wrote:
I’m using Prometheus-Operator in my cluster and encountered an issue
with Prometheus pod that consumed 20GB RAM when my cluster grew and
consisted of 400 pods; eventually Prometheus choked the server and I
had to terminate it.
How much memory should I allocate to the pod in order to keep it