On 17/02/2021 18:48, Johny wrote:
Is there a limit on the number of time series per metric (cardinality)? My metric has a cardinality of 100,000, with a tag for each process in my infrastructure. I was wondering whether this causes performance or other issues. I couldn't find official guidance on this.

The limit comes down to the hardware that Prometheus is running on. The more time series (the total number of distinct label combinations in use across all metrics), the more memory you need. A cardinality of 100k for a single metric is pretty large; with only a few such metrics you'd quickly be in the millions of time series, which would have substantial infrastructure requirements.
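As a rough sanity check you can query series counts directly in PromQL (the metric name below is just a placeholder for yours):

    # Number of series currently in memory for one metric
    # ("my_process_metric" is a placeholder):
    count(my_process_metric)

    # Total series across the whole server (built-in TSDB metric):
    prometheus_tsdb_head_series

Watching prometheus_tsdb_head_series over time is a good way to spot the effect of a new label before memory becomes a problem.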

For larger Prometheus setups you would generally avoid a single large central server. Instead you would run a Prometheus for each failure domain (e.g. different datacentres or AWS regions) and for different services/applications/areas, whatever split makes sense for your organisational or technical structure.
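If you split things up that way, each per-domain server would usually set external_labels in its config so series from different servers can be told apart in any global view. Something along these lines (the label names and values are only illustrative):

    global:
      external_labels:
        region: eu-west-1   # placeholder: whatever identifies this failure domain
        replica: prom-1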

You can then use tools such as federation or a remote read/write system (such as Thanos) to construct global views and alerts if needed.
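As a rough sketch, a global server can federate from the per-domain ones with a scrape config like the following (the hostnames and the match[] selector are placeholders; in practice you'd federate only pre-aggregated series, not everything, to keep the global server small):

    scrape_configs:
      - job_name: 'federate'
        scrape_interval: 60s
        honor_labels: true
        metrics_path: '/federate'
        params:
          'match[]':
            - '{job="aggregated"}'  # pull only the series you need globally
        static_configs:
          - targets:
            - 'prometheus-dc1.example.com:9090'
            - 'prometheus-dc2.example.com:9090'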

--
Stuart Clark
