It's not clear what your requirements are. Is it possible that values
after the double-dash might themselves contain a single dash, like
mykub-test--another-test--test-123
and if so, what should happen?
Could there be additional double-dash separated sections like
Wonder if I can get some help. We are seeing an oddity here. Within my
configuration, we are using kubernetes_sd_configs and I have the following
in the relabel section of my scrape config:
- source_labels: [__meta_kubernetes_namespace]
separator: ;
regex: (^mykub-test--[^-]*)
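One thing worth noting: Prometheus relabel regexes are RE2 and fully anchored, so the pattern has to match the entire label value. A small Python sketch (using Python's re only to illustrate; the namespace value is made up) shows how the pattern above behaves on a value with extra dashes:

```python
import re

# The relabel pattern from the scrape config above.
pattern = re.compile(r"mykub-test--[^-]*")
ns = "mykub-test--another-test--test-123"

# Unanchored prefix match: [^-]* stops at the first single dash.
partial = pattern.match(ns)
print(partial.group(0))   # -> mykub-test--another

# Prometheus-style fully anchored match: the whole value must match.
full = pattern.fullmatch(ns)
print(full)               # -> None (no match)
```

So with anchoring, namespaces that contain further dashes after the first double-dash section won't match this regex at all, which may explain the oddity being seen.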
The total number of time series scraped is more important, I think, so you
also need to know how many targets you'll have.
I had Prometheus servers scraping 20-30M time series in total, and that was
eating pretty much all of the memory on a server with 256 GB of RAM.
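As a rough back-of-envelope (the per-series figure below is an assumption based on commonly quoted 1-8 KiB-per-active-series estimates, not a documented Prometheus constant; real usage varies with churn and version):

```python
# Hedged back-of-envelope: RAM needed for N active series.
# BYTES_PER_SERIES is an assumption (pessimistic end of the commonly
# cited 1-8 KiB range), not a Prometheus-documented figure.
BYTES_PER_SERIES = 8 * 1024

def est_memory_gib(active_series: int) -> float:
    """Estimate resident memory in GiB for a given active series count."""
    return active_series * BYTES_PER_SERIES / 2**30

for n in (20_000_000, 30_000_000):
    print(f"{n:>12,} series -> ~{est_memory_gib(n):.0f} GiB")
```

At 20-30M series this lands at roughly 150-230 GiB, which is consistent with a 256 GB server being nearly exhausted.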
In general when doing capacity planning we
This is a Telco system that creates a series per Access Point Name (APN),
with the APN as a label value (similar to a user ID or a pod IP, but very
much important to record).
/Teja
On Tuesday, June 14, 2022 at 1:52:19 PM UTC+2 Stuart Clark wrote:
> On 14/06/2022 12:32, tejaswini vadlamudi wrote:
> > Thanks Stuart,
On 14/06/2022 12:32, tejaswini vadlamudi wrote:
Thanks Stuart, this expectation comes from a legacy requirement for a
Telco cloud-native application that has high cardinality (3,000 values
for label-1) and low dimensionality (2 labels) across 300 metrics.
Is there any recommendation like not more than 1k or 10k series per
endpoint?
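For a sense of scale, those numbers can be multiplied out (assuming, since the thread doesn't say, that label-2 contributes only one value per combination):

```python
# Hedged estimate of series per endpoint from the figures in the thread.
# label2_values is an assumption; the thread doesn't give it.
metrics = 300
label1_values = 3_000
label2_values = 1

series_per_endpoint = metrics * label1_values * label2_values
print(f"{series_per_endpoint:,} series per endpoint")
```

Even under that minimal assumption this is around 900k series per endpoint, far above any per-endpoint guidance in the 1k-10k range.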
Br,
On 14/06/2022 12:13, tejaswini vadlamudi wrote:
I have a use case where a particular service (which can be horizontally
scaled to a desired replica count) exposes 2 million time series.
Prometheus might need huge resources to scrape such a service (this is
normal), but I'm not sure if there is a recommendation from the community
on
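One commonly used approach for targets this large is to split the scrape across several Prometheus shards, in the spirit of the hashmod relabel action. A minimal Python sketch (the hash here is illustrative, not Prometheus's internal implementation, and the target names are made up):

```python
import hashlib

# Hedged sketch of hash-based target sharding: each Prometheus shard
# keeps only the targets whose hash falls in its bucket. The md5-based
# hash below is for illustration only, not Prometheus's internal one.
def shard_of(target: str, num_shards: int) -> int:
    digest = hashlib.md5(target.encode()).digest()
    return int.from_bytes(digest[:8], "big") % num_shards

NUM_SHARDS = 3
targets = [f"svc-replica-{i}:9090" for i in range(6)]  # hypothetical names
for t in targets:
    print(t, "-> shard", shard_of(t, NUM_SHARDS))
```

Each shard then scrapes a disjoint subset of replicas, so no single server has to hold all 2M series.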