As Julius said, just dropping the "id" label may break things: if you have 
multiple time series that are distinguished by the "id" label alone, then 
dropping it will leave multiple series with identical, clashing labelsets.

Maybe you want to combine multiple metrics into a single metric?  Showing 
examples of the *actual* metrics would make it clearer what's going on - 
where the "id" is coming from, and what an appropriate aggregation might 
be.
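If the goal is to reduce cardinality rather than just hide the label, one common pattern is a recording rule that aggregates the "id" dimension away. This is only a sketch: the metric name, rule name, and rate window below are assumptions, not taken from your config.

```yaml
# Hypothetical recording rule: sums away the high-cardinality "id" label.
# Adjust the metric name, grouping, and window to your actual series.
groups:
  - name: reduce-id-cardinality
    rules:
      - record: job:container_cpu_usage_seconds:sum_rate5m
        expr: sum without (id) (rate(container_cpu_usage_seconds_total[5m]))
```

Queries would then hit the pre-aggregated series instead of the raw, per-id ones.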

On Monday, 4 April 2022 at 16:22:21 UTC+1 [email protected] wrote:

> The way I test this is by looking at the TSDB status on the web GUI and 
> looking for the ID label in the tables with the labels with the highest 
> cardinality. I keep on finding the ID label there, with its cardinality 
> unchanged from before the drop.  I expect to not find it, or to find it 
> with a much lower cardinality.
>
> Please suggest a more targeted way of testing this.
>
>
>
> On Monday, April 4, 2022 at 5:42:02 PM UTC+3 Brian Candler wrote:
>
>> There are 17 scrape jobs in that config.  Can you show an example of a 
>> metric that has the problem? Then the "job" label will identify which job 
>> it came from, and we'll also see the format of the "id" label you are 
>> concerned about.
>>
>> I don't see any "target_label: id" in there, and I'm pretty sure 
>> kubernetes_sd_config doesn't add an "id" label, so it seems likely that 
>> it's coming from the exporter as part of the scrape.
>>
>> On Monday, 4 April 2022 at 10:08:44 UTC+1 [email protected] wrote:
>>
>>> OK, I stand corrected.
>>>
>>> Here is the entire config file.
>>>
>>>
>>> On Monday, April 4, 2022 at 10:20:00 AM UTC+3 Brian Candler wrote:
>>>
>>>> That's incorrect.  The only "intrinsic" labels for prometheus are the 
>>>> "instance" and "job" labels, which are added at scrape time. (The 
>>>> "instance" label is copied from "__address__", but only if "instance" has 
>>>> not already been set.)
>>>>
>>>> So your "id" label is *either* coming from your service discovery 
>>>> mechanism 
>>>> <https://prometheus.io/docs/prometheus/latest/configuration/configuration/> 
>>>> (i.e. added *prior* to the scrape, in which case you need to use 
>>>> "relabel_configs" to modify it), or it is coming from the exporter (in 
>>>> which case use "metric_relabel_configs").
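To make the distinction concrete, here is a sketch of a scrape job showing where each section sits; the job name and the specific rules are placeholders, not taken from your file:

```yaml
scrape_configs:
  - job_name: example-pods          # placeholder name
    kubernetes_sd_configs:
      - role: pod
    relabel_configs:                # applied BEFORE the scrape, to target labels
      - source_labels: [__meta_kubernetes_namespace]
        target_label: namespace
    metric_relabel_configs:         # applied AFTER the scrape, to scraped series
      - regex: id
        action: labeldrop
```

A label that only exists in the exporter's /metrics output can only be touched by the second section.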
>>>>
>>>> It's easy to tell which: perform a manual scrape using "curl", and see 
>>>> if the "id" label is present in the metrics returned.
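For example (the target address is hypothetical; the second command is just an offline illustration of the kind of line to look for):

```shell
# Manual scrape of a hypothetical target - substitute your own host:port/path:
#   curl -s http://TARGET:8080/metrics | grep -m 3 'id="'

# Offline illustration: a cAdvisor-style sample line contains an "id" label,
# so grep -c counts one matching line and prints 1.
printf 'container_cpu_usage_seconds_total{id="/kubepods/pod1"} 42\n' | grep -c 'id="'
```

If the label shows up here, it comes from the exporter; if not, it is being attached by service discovery or relabeling.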
>>>>
>>>> If you show the configuration of the job in prometheus.yml, it will be 
>>>> clear which SD mechanism you're using and that may explain the issue.
>>>>
>>>> On Monday, 4 April 2022 at 07:36:08 UTC+1 [email protected] wrote:
>>>>
>>>>> It is my understanding that this ID label is a built-in, "intrinsic" 
>>>>> label for Prometheus. It is not a target label. 
>>>>>
>>>>> On Sunday, April 3, 2022 at 9:47:12 PM UTC+3 [email protected] 
>>>>> wrote:
>>>>>
>>>>>> Hi,
>>>>>>
>>>>>> Is the label really a label coming directly from the instrumentation, 
>>>>>> that is, the target's /metrics page, or is it a target label (in which 
>>>>>> case you'll want to use relabel_configs, not metric_relabel_configs)? 
>>>>>> What kind of ID is it?
>>>>>>
>>>>>> Regards,
>>>>>> Julius
>>>>>>
>>>>>> On Sun, Apr 3, 2022 at 8:33 PM GI D <[email protected]> wrote:
>>>>>>
>>>>>>> Thanks for answering,
>>>>>>> Following what you wrote, I now have the following multiple times in 
>>>>>>> my config:
>>>>>>>
>>>>>>> metric_relabel_configs:
>>>>>>>   - separator: ;
>>>>>>>     regex: id
>>>>>>>     replacement: $1
>>>>>>>     action: labeldrop
>>>>>>>
>>>>>>> However, after restarting the Prometheus StatefulSet, the ID label is 
>>>>>>> still there in my TSDB status, with the same cardinality as before, in 
>>>>>>> the table titled "Top 10 label names with high memory usage".
>>>>>>>
>>>>>>> Please advise.
>>>>>>>
>>>>>>>
>>>>>>>
>>>>>>> On Friday, April 1, 2022 at 10:22:26 PM UTC+3 [email protected] 
>>>>>>> wrote:
>>>>>>>
>>>>>>>> To drop a label from a series during the scrape, you need the 
>>>>>>>> "labeldrop" action ("drop" drops the entire series), see 
>>>>>>>> https://training.promlabs.com/training/relabeling/writing-relabeling-rules/keeping-and-dropping-labels
>>>>>>>>
>>>>>>>> However, just dropping a label that is required to distinguish 
>>>>>>>> series from each other will cause you problems if that results in 
>>>>>>>> multiple time series having the same labelset identity after the 
>>>>>>>> relabeling. Maybe your intent is to aggregate over multiple series 
>>>>>>>> instead?
>>>>>>>>
>>>>>>>> On Fri, Apr 1, 2022 at 7:02 PM GI D <[email protected]> wrote:
>>>>>>>>
>>>>>>>>> Running Prometheus on K8s v1.20 on AWS/EKS, in my v33 Helm chart I 
>>>>>>>>> need to drop the ID label in order to reduce the TSDB size. According 
>>>>>>>>> to this article 
>>>>>>>>> <https://grafana.com/blog/2022/03/21/how-relabeling-in-prometheus-works/>, 
>>>>>>>>> this can be done with Metric Relabelings. So in all relevant sections 
>>>>>>>>> of the values file I have the following:
>>>>>>>>>
>>>>>>>>> metricRelabelings:
>>>>>>>>>   - sourceLabels: [id]
>>>>>>>>>     action: "drop"
>>>>>>>>>
>>>>>>>>> In the resulting Prometheus config that I can see in the web GUI on 
>>>>>>>>> port 9090, this gets translated (again in all relevant sections) as 
>>>>>>>>> follows:
>>>>>>>>>
>>>>>>>>> metric_relabel_configs:
>>>>>>>>>   - source_labels: [id]
>>>>>>>>>     separator: ;
>>>>>>>>>     regex: (.*)
>>>>>>>>>     replacement: $1
>>>>>>>>>     action: drop
>>>>>>>>>
>>>>>>>>> However in the TSDB status in the GUI the "id" label is still 
>>>>>>>>> there, with the same cardinality as before the attempt to drop it.
>>>>>>>>>
>>>>>>>>> What am I missing?
>>>>>>>>>
>>>>>>>>> -- 
>>>>>>>>> You received this message because you are subscribed to the Google 
>>>>>>>>> Groups "Prometheus Users" group.
>>>>>>>>> To unsubscribe from this group and stop receiving emails from it, 
>>>>>>>>> send an email to [email protected].
>>>>>>>>> To view this discussion on the web visit 
>>>>>>>>> https://groups.google.com/d/msgid/prometheus-users/06058e34-00e4-4034-a572-bb45098f7d3en%40googlegroups.com
>>>>>>>>>  
>>>>>>>>> <https://groups.google.com/d/msgid/prometheus-users/06058e34-00e4-4034-a572-bb45098f7d3en%40googlegroups.com?utm_medium=email&utm_source=footer>
>>>>>>>>> .
>>>>>>>>>
>>>>>>>>
>>>>>>>>
>>>>>>>> -- 
>>>>>>>> Julius Volz
>>>>>>>> PromLabs - promlabs.com
>>>>>>>>
>>>>>>
>>>>>>
>>>>>> -- 
>>>>>> Julius Volz
>>>>>> PromLabs - promlabs.com
>>>>>>
>>>>>
