Re: [prometheus-users] targetmissing alert is fooled by two endpoints on one job
In our case we did indeed have identical 'instance' labels, because we had relabeling rules in place rewriting them from the ip-address:port style Prometheus uses by default to the hostname. We have now changed it back, so the instance label is the unique ip:port style again, and introduced a separate hostname label. (Just for documentation, hope it helps someone.)

Cheers,
fil

On Friday, February 17, 2023 at 6:41:27 PM UTC+1 Stuart Clark wrote:

> On 17/02/2023 16:16, Mario Cornaccini wrote:
> > Hi,
> >
> > I have a job 'node-tools'ansible' with two endpoints, one each for the
> > node and shell exporters.
> >
> > In prometheus/targets I see the same set of labels for each endpoint.
> >
> > For testing I stopped the node exporter; the alert is based on the
> > following expr:
> >
> >     up{} == 0
> >
> > On the graph I can see that the up{} == 0 expr has value 0 for a few
> > seconds, then gaps; when I remove the == 0 I can see it goes from 0 to 1.
> >
> > So it seems to me that the other (shell) exporter mixes into the up
> > metric, and that is because the endpoints have the same labels, right?
> >
> > So in my scrape definition I need to specify one differing label for
> > the shell exporter. I could make an exporter label, setting it to
> > 'node'/'shell', I guess. Or how do you guys handle that?
>
> They can't have identical labels. Even if the job label is the same, the
> instance label should be different.
>
> --
> Stuart Clark

--
You received this message because you are subscribed to the Google Groups "Prometheus Users" group.
To unsubscribe from this group and stop receiving emails from it, send an email to prometheus-users+unsubscr...@googlegroups.com.
To view this discussion on the web visit https://groups.google.com/d/msgid/prometheus-users/e9a0e2c1-7388-4593-a5ec-908de2e1ad34n%40googlegroups.com.
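For reference, the fix fil describes can be sketched as a scrape config that leaves the default instance label (ip:port, unique per target) untouched and attaches the hostname as an extra label instead. The job name is taken from the thread; the IP, ports (9100 is the node_exporter default, 9101 is a stand-in for the shell exporter), and hostname are hypothetical:

```yaml
scrape_configs:
  - job_name: 'node-tools-ansible'
    static_configs:
      # instance defaults to <host:port> from __address__, so each
      # target below gets a distinct instance label and the two up
      # series no longer collide.
      - targets: ['10.0.0.5:9100']   # node_exporter (default port)
        labels:
          hostname: 'web-01'         # hypothetical hostname, for humans
      - targets: ['10.0.0.5:9101']   # shell exporter (assumed port)
        labels:
          hostname: 'web-01'
```

The key point is what is absent: no relabel rule overwrites `instance` with the hostname, which was the original cause of the identical label sets.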
Re: [prometheus-users] targetmissing alert is fooled by two endpoints on one job
On 17/02/2023 16:16, Mario Cornaccini wrote:

> Hi,
>
> I have a job 'node-tools'ansible' with two endpoints, one each for the
> node and shell exporters.
>
> In prometheus/targets I see the same set of labels for each endpoint.
>
> For testing I stopped the node exporter; the alert is based on the
> following expr:
>
>     up{} == 0
>
> On the graph I can see that the up{} == 0 expr has value 0 for a few
> seconds, then gaps; when I remove the == 0 I can see it goes from 0 to 1.
>
> So it seems to me that the other (shell) exporter mixes into the up
> metric, and that is because the endpoints have the same labels, right?
>
> So in my scrape definition I need to specify one differing label for
> the shell exporter. I could make an exporter label, setting it to
> 'node'/'shell', I guess. Or how do you guys handle that?

They can't have identical labels. Even if the job label is the same, the instance label should be different.

--
Stuart Clark
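The alert being discussed, written out as a Prometheus alerting rule (a minimal sketch; only the `up == 0` expression comes from the thread, while the rule name, `for` duration, and labels are assumptions):

```yaml
groups:
  - name: target-missing          # hypothetical group name
    rules:
      - alert: TargetMissing      # hypothetical alert name
        expr: up == 0
        for: 5m                   # assumed grace period
        labels:
          severity: warning
        annotations:
          summary: "Target {{ $labels.instance }} of job {{ $labels.job }} is down"
```

With distinct instance labels per target, `up == 0` fires per endpoint; with identical label sets, the two scrapes write into one series, producing the flapping 0/1 values described above.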
[prometheus-users] targetmissing alert is fooled by two endpoints on one job
Hi,

I have a job 'node-tools'ansible' with two endpoints, one each for the node and shell exporters.

In prometheus/targets I see the same set of labels for each endpoint.

For testing I stopped the node exporter; the alert is based on the following expr:

    up{} == 0

On the graph I can see that the up{} == 0 expr has value 0 for a few seconds, then gaps; when I remove the == 0 I can see it goes from 0 to 1.

So it seems to me that the other (shell) exporter mixes into the up metric, and that is because the endpoints have the same labels, right?

So in my scrape definition I need to specify one differing label for the shell exporter. I could make an exporter label, setting it to 'node'/'shell', I guess. Or how do you guys handle that?

TIA; cheers,
fil
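The exporter label Mario proposes would look roughly like this in the scrape definition (a sketch only; IP and ports are hypothetical, and as the replies above note, it is only needed because relabeling had made the instance labels identical):

```yaml
scrape_configs:
  - job_name: 'node-tools-ansible'
    static_configs:
      - targets: ['10.0.0.5:9100']   # node_exporter
        labels:
          exporter: 'node'           # distinguishes this endpoint's series
      - targets: ['10.0.0.5:9101']   # shell exporter (assumed port)
        labels:
          exporter: 'shell'
```

Note that with an untouched default config this label is redundant: Prometheus sets `instance` to host:port per target, which already keeps the `up` series apart.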