Re: [prometheus-users] Calculate cluster uptime percentage

2022-07-19 Thread Shubham Shrivastav
Thanks!

But *avg_over_time(platform_uptime_state[1h]) * 100* gives me the uptime 
for a single node. 

I need to check uptime for the cluster (two nodes). Clustered nodes have 
the same environment_id label.

I use *sum by (environment_id) (platform_uptime_state)* to track the number 
of nodes connected.

I thought I could set up a formula like:

*cluster_uptime % = ( 1 - ( total seconds when both nodes were down / 
total seconds the nodes were connected, up or down ) ) * 100*

Is that possible?

Also, I have some custom metrics coming in: https://pastebin.com/sdFcNucA

The cluster is expected to be up when at least one of these nodes is up. 
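
One way to express the "at least one node up" condition in PromQL is to 
collapse the per-node series first and then average over time. A sketch, 
assuming subquery support (Prometheus 2.7+); the [30d:1m] window and 
resolution are placeholders to adjust:

  max by (environment_id) (platform_uptime_state)

is 1 whenever at least one node of the cluster is up, and

  avg_over_time((max by (environment_id) (platform_uptime_state))[30d:1m]) * 100

gives the uptime percentage over that window. For "since the cluster was 
started" it is usually easier to record the inner max expression as a 
recording rule and run avg_over_time over the recorded series instead of 
a subquery.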


On Tuesday, 19 July 2022 at 20:51:58 UTC-7 sup...@gmail.com wrote:

> avg_over_time(platform_uptime_state[1h]) * 100
>
> On Wed, Jul 20, 2022 at 4:55 AM Shubham Shrivastav  
> wrote:
>
>> Hi all,
>>
>> I have a two-node cluster that I'm trying to monitor.
>>
>> We send a custom metric on individual nodes
>> # HELP platform_uptime_state  Overall platform status is 1 when up, 0 
>> otherwise
>> # TYPE platform_uptime_state gauge
>> platform_uptime_state 1
>>
>> The cluster is expected to be UP when at least one of the nodes has 
>> platform_uptime_state set to 1.
>>
>> I need to calculate the cluster uptime percent since the cluster was 
>> started, but I'm not able to formulate a query.
>>
>>
>>
>> Any help is appreciated!
>>
>> TIA,
>> Shubham
>>
>>
>>
>> -- 
>> You received this message because you are subscribed to the Google Groups 
>> "Prometheus Users" group.
>> To unsubscribe from this group and stop receiving emails from it, send an 
>> email to prometheus-use...@googlegroups.com.
>> To view this discussion on the web visit 
>> https://groups.google.com/d/msgid/prometheus-users/66466c99-6b1f-4a77-9ba2-b93a58bf6969n%40googlegroups.com
>>  
>> 
>> .
>>
>

-- 
You received this message because you are subscribed to the Google Groups 
"Prometheus Users" group.
To unsubscribe from this group and stop receiving emails from it, send an email 
to prometheus-users+unsubscr...@googlegroups.com.
To view this discussion on the web visit 
https://groups.google.com/d/msgid/prometheus-users/a11512d7-0f92-4192-b590-8336565b3449n%40googlegroups.com.


[prometheus-users] Display pod distribution by instance type

2022-07-19 Thread RB
Hello, we have pods running on several different EC2 instance types and 
would like a way to display the distribution of these pods by instance 
type in a dashboard. Any help or advice on accomplishing this would be 
greatly appreciated.
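
If the node's instance type is available as a label (for example via 
kube-state-metrics), one sketch is to join kube_pod_info onto the node 
labels. This assumes kube-state-metrics exposes 
label_node_kubernetes_io_instance_type on kube_node_labels; with 
kube-state-metrics v2+ the node.kubernetes.io/instance-type label has to 
be explicitly allowlisted via --metric-labels-allowlist:

  count by (label_node_kubernetes_io_instance_type) (
      kube_pod_info
    * on (node) group_left (label_node_kubernetes_io_instance_type)
      kube_node_labels
  )

This counts pods per instance type and can back a bar or pie panel in a 
dashboard.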



Re: [prometheus-users] Calculate cluster uptime percentage

2022-07-19 Thread Ben Kochie
avg_over_time(platform_uptime_state[1h]) * 100

On Wed, Jul 20, 2022 at 4:55 AM Shubham Shrivastav <
shrivastavshubha...@gmail.com> wrote:

> Hi all,
>
> I have a two-node cluster that I'm trying to monitor.
>
> We send a custom metric on individual nodes
> # HELP platform_uptime_state  Overall platform status is 1 when up, 0
> otherwise
> # TYPE platform_uptime_state gauge
> platform_uptime_state 1
>
> The cluster is expected to be UP when at least one of the nodes has
> platform_uptime_state set to 1.
>
> I need to calculate the cluster uptime percent since the cluster was
> started, but I'm not able to formulate a query.
>
>
>
> Any help is appreciated!
>
> TIA,
> Shubham
>
>
>



[prometheus-users] Calculate cluster uptime percentage

2022-07-19 Thread Shubham Shrivastav
Hi all,

I have a two-node cluster that I'm trying to monitor.

We send a custom metric on individual nodes
# HELP platform_uptime_state  Overall platform status is 1 when up, 0 
otherwise
# TYPE platform_uptime_state gauge
platform_uptime_state 1

The cluster is expected to be UP when at least one of the nodes has 
platform_uptime_state set to 1.

I need to calculate the cluster uptime percent since the cluster was 
started, but I'm not able to formulate a query.



Any help is appreciated!

TIA,
Shubham





Re: [prometheus-users] Re: Https issue when using prometheus federation

2022-07-19 Thread Shi Yan
Oh, sorry, you mean all the metrics. That takes several seconds:

real    0m7.361s
user    0m0.130s
sys     0m0.999s

On Wednesday, July 20, 2022 at 1:03:36 AM UTC+10 Stuart Clark wrote:

> On 19/07/2022 14:51, Shi Yan wrote:
> > Thanks, Brian for helping look into it.
> >
> > Yes, in our setup, `another_prom_server` is deployed on the k8s 
> > cluster and it is behind an F5 ingress proxy, which terminates the TLS 
> > protocol. So we use HTTPS here.
> > And I've tried to add port 443 explicitly in the targets config, but 
> > the error is still the same.
> >
> > msg="Scrape failed" err="Get 
> > \"
> https://example.com:443/federate?match%5B%5D=%7Bjob%3D%22jobname%22%7D\ 
> ": 
>
> > read tcp x.x.x.x:58342->y.y.y.y:443: read: connection reset by peer"
> >
> > While I can manually curl it with either
> >  > curl https://example.com
> >  Found
> >
> > or the one with the exact URL parameters from the error msg.
> >  > curl 
> > 'https://example.com:443/federate?match%5B%5D=%7Bjob%3D%22jobname%22%7D'
> >  .# can get all the metrics correctly
> >
> How long does it take curl to respond with all the metrics?
>
> Could it be that it takes a while and your load balancer is configured 
> with a shorter timeout?
>
> -- 
> Stuart Clark
>
>



Re: [prometheus-users] Re: Https issue when using prometheus federation

2022-07-19 Thread Shi Yan
Hi Stuart

The curl actually gets a response immediately, as we only need to fetch a 
small set of metrics.

The curl timing is as follows:

real    0m0.077s
user    0m0.016s
sys     0m0.018s

On Wednesday, July 20, 2022 at 1:03:36 AM UTC+10 Stuart Clark wrote:

> On 19/07/2022 14:51, Shi Yan wrote:
> > Thanks, Brian for helping look into it.
> >
> > Yes, in our setup, `another_prom_server` is deployed on the k8s 
> > cluster and it is behind an F5 ingress proxy, which terminates the TLS 
> > protocol. So we use HTTPS here.
> > And I've tried to add port 443 explicitly in the targets config, but 
> > the error is still the same.
> >
> > msg="Scrape failed" err="Get 
> > \"
> https://example.com:443/federate?match%5B%5D=%7Bjob%3D%22jobname%22%7D\ 
> ": 
>
> > read tcp x.x.x.x:58342->y.y.y.y:443: read: connection reset by peer"
> >
> > While I can manually curl it with either
> >  > curl https://example.com
> >  Found
> >
> > or the one with the exact URL parameters from the error msg.
> >  > curl 
> > 'https://example.com:443/federate?match%5B%5D=%7Bjob%3D%22jobname%22%7D'
> >  .# can get all the metrics correctly
> >
> How long does it take curl to respond with all the metrics?
>
> Could it be that it takes a while and your load balancer is configured 
> with a shorter timeout?
>
> -- 
> Stuart Clark
>
>



Re: [prometheus-users] Re: Https issue when using prometheus federation

2022-07-19 Thread Shi Yan
Thanks Julien, but adding enable_http2: false doesn't help. Still same 
error.

On Wednesday, July 20, 2022 at 12:20:50 AM UTC+10 Julien Pivotto wrote:

> Could you try adding enable_http2: false and see if there is an 
> improvement?
>
> On Tue, 19 Jul 2022 at 15:51, Shi Yan wrote:
>
>> Thanks, Brian for helping look into it.
>>
>> Yes, in our setup, `another_prom_server` is deployed on the k8s cluster 
>> and it is behind an F5 ingress proxy, which terminates the TLS protocol. So 
>> we use HTTPS here.
>> And I've tried to add port 443 explicitly in the targets config, but the 
>> error is still the same.
>>
>> msg="Scrape failed" err="Get \"
>> https://example.com:443/federate?match%5B%5D=%7Bjob%3D%22jobname%22%7D\ 
>> ":
>>  
>> read tcp x.x.x.x:58342->y.y.y.y:443: read: connection reset by peer"
>>
>> While I can manually curl it with either
>>  > curl https://example.com
>>  Found
>>
>> or the one with the exact URL parameters from the error msg.
>>  > curl '
>> https://example.com:443/federate?match%5B%5D=%7Bjob%3D%22jobname%22%7D'
>>  .# can get all the metrics correctly
>>
>>
>> Cheers
>>  
>>
>>
>> On Tuesday, July 19, 2022 at 6:25:00 PM UTC+10 Brian Candler wrote:
>>
>>> Can you show the exact curl command line, with just the hostname 
>>> replaced with "example.com" ?
>>>
>>> Try:
>>>
>>>   - targets:
>>> - another_prom_server:9090
>>>
>>> or
>>>
>>>   - targets:
>>> - another_prom_server:443
>>>
>>> or whatever is appropriate.  (I note you set "scheme: https" - is that 
>>> correct? Is this prometheus running behind a reverse proxy or ingress 
>>> proxy, or configured with web.config to serve TLS?)
>>>
>>> On Tuesday, 19 July 2022 at 06:13:38 UTC+1 yansh...@gmail.com wrote:
>>>
 I am trying to configure the prometheus federation, but the target is 
 not up and the only error I can see is `read: connection reset by peer`

 The scrape_config I've added is as follows:

 - job_name: federate
   scrape_interval: 30s
   scrape_timeout: 15s
   scheme: https
   honor_labels: true
   metrics_path: "/federate"
   params:
 match[]:
 - '{job="jobname"}'
   static_configs:
   - targets:
 - another_prom_server

 But if I use `curl` command from this central prometheus server, it 
 works and can return the metrics correctly. 

 another_prom_server is the one deployed by kube-prometheus-stack helm 
 chart. Not sure what is the issue here? Could anyone help advise, thanks! 

>



Re: [prometheus-users] Re: Https issue when using prometheus federation

2022-07-19 Thread Stuart Clark

On 19/07/2022 14:51, Shi Yan wrote:

Thanks, Brian for helping look into it.

Yes, in our setup, `another_prom_server` is deployed on the k8s 
cluster and it is behind an F5 ingress proxy, which terminates the TLS 
protocol. So we use HTTPS here.
And I've tried to add port 443 explicitly in the targets config, but 
the error is still the same.


msg="Scrape failed" err="Get 
\"https://example.com:443/federate?match%5B%5D=%7Bjob%3D%22jobname%22%7D\": 
read tcp x.x.x.x:58342->y.y.y.y:443: read: connection reset by peer"


While I can manually curl it with either
 > curl https://example.com
 Found

or the one with the exact URL parameters from the error msg.
 > curl 
'https://example.com:443/federate?match%5B%5D=%7Bjob%3D%22jobname%22%7D'

 .# can get all the metrics correctly


How long does it take curl to respond with all the metrics?

Could it be that it takes a while and your load balancer is configured 
with a shorter timeout?


--
Stuart Clark



Re: [prometheus-users] Use remote-write instead of federation

2022-07-19 Thread Stuart Clark

On 19/07/2022 13:24, tejaswini vadlamudi wrote:
@Ben: Makes a point, but getting Thanos or Cortex into the picture 
could be a way forward after some time. For now, do you think it is 
good enough to use remote-write instead of federation?  From a 
performance and resource consumption POV, do you see remote-write as 
the way-forward?


With remote write you could use agent mode, so you don't have to have 
local storage other than for the destination instance.


However, again, it depends on what you are trying to achieve and why you 
have suggested having four instances. Are you wanting to query all four 
instances or only the "global" one? Are you wanting to copy all data to 
the "global" instance or only some metrics? Every data point, or only at 
a lower frequency?


If you are intending to copy all data (both metrics & data points) that 
leans towards remote write as federation works differently. But in that 
case there doesn't seem to be any advantage in having the extra three 
instances at all (unless you are intending on doing local querying, 
alerting or recording rules) - so I'd just have a single instance that 
scrapes all namespaces.


Alternatively if you are needing to have separate instances with local 
storage/querying then I'd probably not look to copy all the data to the 
"global" instance (which just doubles storage and memory usage) and 
either use remote write for a much smaller subset of metrics, federation 
with a slower scrape rate/reduced set of metrics, or as Ben suggested 
something like Thanos (other options exist as well) to do away with the 
fourth instance entirely and distribute the queries to the individual 
instances instead.
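
For the "much smaller subset of metrics" option, remote write can be 
filtered with write_relabel_configs; a minimal sketch (the URL and the 
metric name pattern are placeholders):

remote_write:
- url: https://global-prometheus.example/api/v1/write
  write_relabel_configs:
  - source_labels: [__name__]
    regex: "up|myapp_.*"
    action: keep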


Maybe if you could explain a bit about what the design is hoping to 
achieve it would help us advise better?


--
Stuart Clark



Re: [prometheus-users] Re: Https issue when using prometheus federation

2022-07-19 Thread Julien Pivotto
Could you try adding enable_http2: false and see if there is an improvement?
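
The option sits alongside scheme in the scrape config's HTTP client 
settings; a sketch based on the federation job quoted below, assuming a 
Prometheus version recent enough to have the enable_http2 field:

- job_name: federate
  scheme: https
  enable_http2: false
  metrics_path: "/federate"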

On Tue, 19 Jul 2022 at 15:51, Shi Yan wrote:

> Thanks, Brian for helping look into it.
>
> Yes, in our setup, `another_prom_server` is deployed on the k8s cluster
> and it is behind an F5 ingress proxy, which terminates the TLS protocol. So
> we use HTTPS here.
> And I've tried to add port 443 explicitly in the targets config, but the
> error is still the same.
>
> msg="Scrape failed" err="Get \"
> https://example.com:443/federate?match%5B%5D=%7Bjob%3D%22jobname%22%7D\
> ":
> read tcp x.x.x.x:58342->y.y.y.y:443: read: connection reset by peer"
>
> While I can manually curl it with either
>  > curl https://example.com
>  Found
>
> or the one with the exact URL parameters from the error msg.
>  > curl '
> https://example.com:443/federate?match%5B%5D=%7Bjob%3D%22jobname%22%7D'
>  .# can get all the metrics correctly
>
>
> Cheers
>
>
>
> On Tuesday, July 19, 2022 at 6:25:00 PM UTC+10 Brian Candler wrote:
>
>> Can you show the exact curl command line, with just the hostname replaced
>> with "example.com" ?
>>
>> Try:
>>
>>   - targets:
>> - another_prom_server:9090
>>
>> or
>>
>>   - targets:
>> - another_prom_server:443
>>
>> or whatever is appropriate.  (I note you set "scheme: https" - is that
>> correct? Is this prometheus running behind a reverse proxy or ingress
>> proxy, or configured with web.config to serve TLS?)
>>
>> On Tuesday, 19 July 2022 at 06:13:38 UTC+1 yansh...@gmail.com wrote:
>>
>>> I am trying to configure the prometheus federation, but the target is
>>> not up and the only error I can see is `read: connection reset by peer`
>>>
>>> The scrape_config I've added is as follows:
>>>
>>> - job_name: federate
>>>   scrape_interval: 30s
>>>   scrape_timeout: 15s
>>>   scheme: https
>>>   honor_labels: true
>>>   metrics_path: "/federate"
>>>   params:
>>> match[]:
>>> - '{job="jobname"}'
>>>   static_configs:
>>>   - targets:
>>> - another_prom_server
>>>
>>> But if I use `curl` command from this central prometheus server, it
>>> works and can return the metrics correctly.
>>>
>>> another_prom_server is the one deployed by kube-prometheus-stack helm
>>> chart. Not sure what is the issue here? Could anyone help advise, thanks!
>>>



[prometheus-users] Re: Https issue when using prometheus federation

2022-07-19 Thread Shi Yan
Thanks, Brian for helping look into it.

Yes, in our setup, `another_prom_server` is deployed on the k8s cluster and 
it is behind an F5 ingress proxy, which terminates the TLS protocol. So we 
use HTTPS here.
And I've tried to add port 443 explicitly in the targets config, but the 
error is still the same.

msg="Scrape failed" err="Get 
\"https://example.com:443/federate?match%5B%5D=%7Bjob%3D%22jobname%22%7D\": 
read tcp x.x.x.x:58342->y.y.y.y:443: read: connection reset by peer"

While I can manually curl it with either
 > curl https://example.com
 Found

or the one with the exact URL parameters from the error msg.
 > curl 
'https://example.com:443/federate?match%5B%5D=%7Bjob%3D%22jobname%22%7D'
  # can get all the metrics correctly


Cheers
 


On Tuesday, July 19, 2022 at 6:25:00 PM UTC+10 Brian Candler wrote:

> Can you show the exact curl command line, with just the hostname replaced 
> with "example.com" ?
>
> Try:
>
>   - targets:
> - another_prom_server:9090
>
> or
>
>   - targets:
> - another_prom_server:443
>
> or whatever is appropriate.  (I note you set "scheme: https" - is that 
> correct? Is this prometheus running behind a reverse proxy or ingress 
> proxy, or configured with web.config to serve TLS?)
>
> On Tuesday, 19 July 2022 at 06:13:38 UTC+1 yansh...@gmail.com wrote:
>
>> I am trying to configure the prometheus federation, but the target is not 
>> up and the only error I can see is `read: connection reset by peer`
>>
>> The scrape_config I've added is as follows:
>>
>> - job_name: federate
>>   scrape_interval: 30s
>>   scrape_timeout: 15s
>>   scheme: https
>>   honor_labels: true
>>   metrics_path: "/federate"
>>   params:
>> match[]:
>> - '{job="jobname"}'
>>   static_configs:
>>   - targets:
>> - another_prom_server
>>
>> But if I use `curl` command from this central prometheus server, it works 
>> and can return the metrics correctly. 
>>
>> another_prom_server is the one deployed by kube-prometheus-stack helm 
>> chart. Not sure what is the issue here? Could anyone help advise, thanks! 
>>
>



Re: [prometheus-users] Extracting Group Data String for Alert Grouping

2022-07-19 Thread Brian Bowen
Thanks to both of you. The regex matching with metric_relabel_configs 
worked for my scenario, since ifAlias is well standardized. I had tried 
this before but had misunderstood how the regex matching works. The 
example above pushed us in the right direction.

On Tuesday, July 19, 2022 at 3:18:53 AM UTC-5 Brian Candler wrote:

> Alternatively, I'm not sure about this, but I think you could just add 
> these extra labels in your alerting rules.
>
> Labels added there are templated, and there are various template functions 
> available, including reReplaceAll:
>
> https://prometheus.io/docs/prometheus/latest/configuration/template_reference/
>
> However it would have to be repeated on every alerting rule where you 
> wanted to do this sort of grouping.
>
> On Monday, 18 July 2022 at 21:50:16 UTC+1 sup...@gmail.com wrote:
>
>> If you have your ifAlias well standardized you can use 
>> metric_relabel_configs to extract data.
>>
>> metric_relabel_configs:
>> - source_labels: [ifAlias]
>>   regex: "(.+) - (.+) - (.+)"
>>   replacement: "$1"
>>   target_label: port_description
>> - source_labels: [ifAlias]
>>   regex: "(.+) - (.+) - (.+)"
>>   replacement: "$2"
>>   target_label: port_location
>> - source_labels: [ifAlias]
>>   regex: "(.+) - (.+) - (.+)"
>>   replacement: "$3"
>>   target_label: cable_id
>>
>> This will separate out your ifAlias into the component label parts.
>>
>> On Mon, Jul 18, 2022 at 10:45 PM Brian Bowen  wrote:
>>
>>> Hi all,
>>>
>>> We are attempting to set up alerting with Prometheus and Alertmanager 
>>> using some SNMP data. The basic use case is that we would like to group by 
>>> a substring of label data rather than an entire label. Let's say our 
>>> interfaces have the ifAlias label in the following format:
>>> ifAlias=" - device 1 port 5 to device 2 port 7 - 
>>> " and I want to group alerts only by "device 1 port 5 to device 2 
>>> port 7" (assuming this description is consistent across  both devices), 
>>> leaving the rest of the description and cableID out.
>>>
>>> Is there a way to do this? We have not had success extracting this as a 
>>> separate label through snmp_exporter. I thought potentially we could do 
>>> some regex matching under the group_by rules with Alertmanager, but I 
>>> haven't seen any documentation/examples showing how to do this either.
>>>
>>> Let me know if there are any files I should attach.
>>>
>>



Re: [prometheus-users] Use remote-write instead of federation

2022-07-19 Thread tejaswini vadlamudi
@Ben: That makes a good point, but getting Thanos or Cortex into the 
picture could be a way forward after some time. For now, do you think it 
is good enough to use remote-write instead of federation? From a 
performance and resource consumption POV, do you see remote-write as the 
way forward?

Thanks, Teja

On Monday, July 18, 2022 at 11:02:57 PM UTC+2 sup...@gmail.com wrote:

> Yes, Thanos will eliminate the need for instance-4. At the same time it's 
> more efficient because it doesn't use remote write or federation. It can 
> query data from all your Prometheus instances.
>
> On Mon, Jul 18, 2022 at 10:53 PM tejaswini vadlamudi  
> wrote:
>
>> @Ben: Thanks for the suggestion! I heard that remote-write consumes more 
>> system resources like CPU utilization when compared to the federation. I 
>> can test and cross-check it myself but I would like to hear feedback from 
>> the Prometheus experts.
>> @Stuart: Ideally, it is possible to manage the complete stack with 
>> instance-1 but the current case is about deploying and monitoring multiple 
>> workloads/software owned by different vendors.
>>
>> /Teja
>> On Monday, July 18, 2022 at 8:52:59 PM UTC+2 Stuart Clark wrote:
>>
>>> On 18/07/2022 18:00, tejaswini vadlamudi wrote:
>>>
>>> Hello Stuart,  
>>>
>>> I have the 4 Prometheus instances in the same cluster.   
>>>
>>>- Instance-1, monitoring k8s & cadvisor 
>>>- Instance-2, monitoring workload-1 in namespace-1 
>>>- Instance-3, monitoring workload-2 in namespace-2 
>>>- Instance-4 is the central one collecting metrics from all 3 
>>>instances (for global querying and alerting). not sure if the federation 
>>> is 
>>>a good fit for this sort of deployment pattern. 
>>>
>>> What's the reason for having all the different instances? Are these all 
>>> full instances of Prometheus (with local storage) or using agent mode?
>>>
>>> If you are just going to copy everything to the "central" instance on 
>>> the same cluster, why not just do without the extra three clusters and have 
>>> just the one instance that monitors everything?
>>>
>>> -- 
>>> Stuart Clark
>>>



Re: [prometheus-users] Best way to export status

2022-07-19 Thread Roman Baeriswyl
Great feedback as well, thanks.

I will add both metrics:
idrac_amperage_probe_status{index="1",statusName="other"} 0
idrac_amperage_probe_status{index="1",statusName="unknown"} 0
idrac_amperage_probe_status{index="1",statusName="ok"} 1
idrac_amperage_probe_status{index="1",statusName="nonCriticalUpper"} 0
idrac_amperage_probe_status{index="1",statusName="criticalUpper"} 0
idrac_amperage_probe_status{index="1",statusName="nonRecoverableUpper"} 0
idrac_amperage_probe_status{index="1",statusName="nonCriticalLower"} 0
idrac_amperage_probe_status{index="1",statusName="criticalLower"} 0
idrac_amperage_probe_status{index="1",statusName="nonRecoverableLower"} 0
idrac_amperage_probe_status{index="1",statusName="failed"} 0
idrac_amperage_probe_status{index="2",statusName="other"} 0
idrac_amperage_probe_status{index="2",statusName="unknown"} 0
idrac_amperage_probe_status{index="2",statusName="ok"} 1
idrac_amperage_probe_status{index="2",statusName="nonCriticalUpper"} 0
idrac_amperage_probe_status{index="2",statusName="criticalUpper"} 0
idrac_amperage_probe_status{index="2",statusName="nonRecoverableUpper"} 0
idrac_amperage_probe_status{index="2",statusName="nonCriticalLower"} 0
idrac_amperage_probe_status{index="2",statusName="criticalLower"} 0
idrac_amperage_probe_status{index="2",statusName="nonRecoverableLower"} 0
idrac_amperage_probe_status{index="2",statusName="failed"} 0

idrac_amperage_probe_status_code{index="1"} 3
idrac_amperage_probe_status_code{index="2"} 3

and probably make them configurable (which to show).
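
With the statusName form an alerting rule does not need the numeric 
mapping at all; a sketch (the alert name and the 5m duration are 
placeholders):

groups:
- name: idrac
  rules:
  - alert: IdracAmperageProbeNotOk
    expr: idrac_amperage_probe_status{statusName!="ok"} == 1
    for: 5m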

On Tue, 19 Jul 2022 at 12:18, Stuart Clark <
stuart.cl...@jahingo.com>:

> On 19/07/2022 10:41, Roman Baeriswyl wrote:
> > Why not both:
> >
> >
> idrac_amperage_probe_status{index="1",statusName="other",statusNumber="1"}
> > 0
> >
> idrac_amperage_probe_status{index="1",statusName="unknown",statusNumber="2"}
>
> > 0
> > idrac_amperage_probe_status{index="1",statusName="ok",statusNumber="3"} 1
> >
> idrac_amperage_probe_status{index="1",statusName="nonCriticalUpper",statusNumber="4"}
>
> > 0
> >
> idrac_amperage_probe_status{index="1",statusName="criticalUpper",statusNumber="5"}
>
> > 0
> >
> idrac_amperage_probe_status{index="1",statusName="nonRecoverableUpper",statusNumber="6"}
>
> > 0
> >
> idrac_amperage_probe_status{index="1",statusName="nonCriticalLower",statusNumber="7"}
>
> > 0
> >
> idrac_amperage_probe_status{index="1",statusName="criticalLower",statusNumber="8"}
>
> > 0
> >
> idrac_amperage_probe_status{index="1",statusName="nonRecoverableLower",statusNumber="9"}
>
> > 0
> >
> idrac_amperage_probe_status{index="1",statusName="failed",statusNumber="10"}
>
> > 0
> >
> > This way, one can use the name or the number if that would be easier
> > (for < or > checks).
>
> The downside with numeric statuses is that you need more knowledge to
> use them compared with the label method. I have to know that 7 = unknown
> or 5 = too hot, etc.
>
> That suggestion wouldn't actually help BTW as the statusNumber is a
> label so you could only use regex matches rather than >/<. If you wanted
> that as well you'd need a separate metric
> (idrac_amperage_probe_status_number or something) that has no labels and
> just the 1-10 value.
>
> The value of that purely numeric status metric also depends on what the
> status values actually are. It might be more useful for things which
> "progress" (good, poor, bad, broken) but probably not for statuses which
> are unrelated (network error, disk error, hardware fault, temperature
> error) as you are unlikely to use >/<
>
> --
> Stuart Clark
>
>



Re: [prometheus-users] Best way to export status

2022-07-19 Thread Stuart Clark

On 19/07/2022 10:41, Roman Baeriswyl wrote:

Why not both:

idrac_amperage_probe_status{index="1",statusName="other",statusNumber="1"} 0
idrac_amperage_probe_status{index="1",statusName="unknown",statusNumber="2"} 0
idrac_amperage_probe_status{index="1",statusName="ok",statusNumber="3"} 1
idrac_amperage_probe_status{index="1",statusName="nonCriticalUpper",statusNumber="4"} 0
idrac_amperage_probe_status{index="1",statusName="criticalUpper",statusNumber="5"} 0
idrac_amperage_probe_status{index="1",statusName="nonRecoverableUpper",statusNumber="6"} 0
idrac_amperage_probe_status{index="1",statusName="nonCriticalLower",statusNumber="7"} 0
idrac_amperage_probe_status{index="1",statusName="criticalLower",statusNumber="8"} 0
idrac_amperage_probe_status{index="1",statusName="nonRecoverableLower",statusNumber="9"} 0
idrac_amperage_probe_status{index="1",statusName="failed",statusNumber="10"} 0


This way, one can use the name or the number if that would be easier 
(for < or > checks).


The downside with numeric statuses is that you need more knowledge to 
use them compared with the label method. I have to know that 7 = unknown 
or 5 = too hot, etc.


That suggestion wouldn't actually help BTW as the statusNumber is a 
label so you could only use regex matches rather than >/<. If you wanted 
that as well you'd need a separate metric 
(idrac_amperage_probe_status_number or something) that has no labels and 
just the 1-10 value.


The value of that purely numeric status metric also depends on what the 
status values actually are. It might be more useful for things which 
"progress" (good, poor, bad, broken) but probably not for statuses which 
are unrelated (network error, disk error, hardware fault, temperature 
error), as you are unlikely to use >/< there.
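As a sketch of how the two styles would actually be queried (the separate `idrac_amperage_probe_status_number` metric here is hypothetical, as discussed):

```promql
# Label method: is this probe in any alarming state?
# (PromQL matchers are fully anchored, so "critical.*" does not match "nonCriticalUpper".)
idrac_amperage_probe_status{statusName=~"critical.*|nonRecoverable.*|failed"} == 1

# Hypothetical pure-numeric status metric: range comparisons become possible.
# Here anything above ok (3), though note this misses other (1) and unknown (2).
idrac_amperage_probe_status_number > 3
```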


--
Stuart Clark



Re: [prometheus-users] Best way to export status

2022-07-19 Thread Brian Candler
I don't think you can do numeric comparisons on labels(*). If you want both 
approaches, then you need two sets of metrics: a single metric with a value 
of 3, and another set of metrics giving the 10 booleans.

(*) apart from a regex like `[1-5]`, in which case you might as well use 
`(other|unknown|ok|nonCriticalUpper|criticalUpper)`
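For example, the two equivalent matchers would be (both string matches, so neither gives a true numeric comparison):

```promql
# Regex match on the numeric label
idrac_amperage_probe_status{statusNumber=~"[1-5]"} == 1

# The more readable equivalent on the name label
idrac_amperage_probe_status{statusName=~"other|unknown|ok|nonCriticalUpper|criticalUpper"} == 1
```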

On Tuesday, 19 July 2022 at 10:41:47 UTC+1 r.bae...@gmail.com wrote:

> Why not both:
>
> idrac_amperage_probe_status{index="1",statusName="other",statusNumber="1"} 0
> idrac_amperage_probe_status{index="1",statusName="unknown",statusNumber="2"} 0
> idrac_amperage_probe_status{index="1",statusName="ok",statusNumber="3"} 1
> idrac_amperage_probe_status{index="1",statusName="nonCriticalUpper",statusNumber="4"} 0
> idrac_amperage_probe_status{index="1",statusName="criticalUpper",statusNumber="5"} 0
> idrac_amperage_probe_status{index="1",statusName="nonRecoverableUpper",statusNumber="6"} 0
> idrac_amperage_probe_status{index="1",statusName="nonCriticalLower",statusNumber="7"} 0
> idrac_amperage_probe_status{index="1",statusName="criticalLower",statusNumber="8"} 0
> idrac_amperage_probe_status{index="1",statusName="nonRecoverableLower",statusNumber="9"} 0
> idrac_amperage_probe_status{index="1",statusName="failed",statusNumber="10"} 0
>
> This way, one can use the name or the number if that would be easier (for 
> < or > checks).
>
> On Tue, 19 Jul 2022 at 05:32, Ben Kochie wrote:
>
>> With PromQL, the state label with a boolean value tends to be more 
>> user-friendly.
>>
>> For example, you can do things like `avg_over_time(foo{state="some 
>> state"}[10m])` to detect problems, but maybe ignore one or two state 
>> changes.
>>
>> Similarly, you can be more specific about states with 
>> `changes()`.
>>
>> On Tue, Jul 19, 2022 at 12:21 AM Roman Baeriswyl  
>> wrote:
>>
>>> True, the amount should not be an issue at all.
>>> I wonder what is more convenient for the end user: having 10 states per 
>>> sensor but with their state name as label, or just having one with the 
>>> numerical value (which would allow > and < operations for alerts). I cannot 
>>> decide between those two.
>>>
>>> Regarding the other projects: I've looked through many projects. The first 
>>> one you mention needs to actually run on the Dell server itself, which I do 
>>> not want. The second contains only a few metrics and uses the Redfish API 
>>> (basically JSON, but I think a bit limited, especially for older systems). 
>>> There are also a lot of others, mostly based on prometheus/snmp_exporter, 
>>> but they also lack a lot of metrics. In my first try, I created my own 
>>> snmp_exporter generator (https://github.com/Roemer/idrac-snmp-exporter), 
>>> even with a fully working automatic pipeline. But I find the generator way 
>>> too restrictive.
>>> I am now working on a Node-based exporter with express, prom-client and 
>>> net-snmp, and it seems to work fairly well. I can export what I want, 
>>> exactly how I want. This is the v2 branch, which only exposes one set of 
>>> metrics.
>>>
>>>
>>> On Mon, 18 Jul 2022 at 23:14, Ben Kochie wrote:
>>>
 Let's do the math:

 100 servers * 10 states * 20 sensors = 20,000 metrics

 Worst case, say you have 5000 metrics each for 100 servers, that's 
 still only 500,000 series. This will probably take about 4GiB of memory. It 
 should still fit easily in an 8GiB memory instance.

 A single Prometheus can handle millions of metrics if you capacity plan 
 accordingly.

 Rather than SNMP, have you looked at 
 https://github.com/galexrt/dellhw_exporter? Or maybe 
 https://github.com/mrlhansen/idrac_exporter?

 On Mon, Jul 18, 2022 at 10:50 PM Roman Baeriswyl  
 wrote:

> Thanks for the answer. Well, it is not only fans; there are dozens of 
> other status fields as well (I'm writing an iDRAC SNMP exporter), and that 
> for dozens of servers. Should I try to stick with the StateSet, 
> or should I switch to just exposing the numerical representation?
>
> sup...@gmail.com wrote on Sunday, 17 July 2022 at 10:50:43 UTC+2:
>
>> For things that have state changes you care about, I usually 
>> recommend EnumAsStateSet.
>>
>> The good news is that Prometheus deals with compressing the boolean 
>> values very well. And since all fans have the same set of states, those 
>> values are deduplicated in the index.
>>
>> So while it looks like a lot in the metric output, it stores well in 
>> the TSDB.
>>
>> The question is, how many fans on how many servers are we talking 
>> about?
>>
>> On Sun, Jul 17, 2022 at 6:26 AM Roman Baeriswyl  
>> wrote:
>>
>>> Hey all
>>> I am working on a Dell iDRAC SNMP Exporter and I struggle with 
>>> "Status" fields.
>>> I think there are three main possibilities:
>>>

Re: [prometheus-users] Best way to export status

2022-07-19 Thread Roman Baeriswyl
Why not both:

idrac_amperage_probe_status{index="1",statusName="other",statusNumber="1"} 0
idrac_amperage_probe_status{index="1",statusName="unknown",statusNumber="2"} 0
idrac_amperage_probe_status{index="1",statusName="ok",statusNumber="3"} 1
idrac_amperage_probe_status{index="1",statusName="nonCriticalUpper",statusNumber="4"} 0
idrac_amperage_probe_status{index="1",statusName="criticalUpper",statusNumber="5"} 0
idrac_amperage_probe_status{index="1",statusName="nonRecoverableUpper",statusNumber="6"} 0
idrac_amperage_probe_status{index="1",statusName="nonCriticalLower",statusNumber="7"} 0
idrac_amperage_probe_status{index="1",statusName="criticalLower",statusNumber="8"} 0
idrac_amperage_probe_status{index="1",statusName="nonRecoverableLower",statusNumber="9"} 0
idrac_amperage_probe_status{index="1",statusName="failed",statusNumber="10"} 0

This way, one can use the name or the number if that would be easier (for <
or > checks).

On Tue, 19 Jul 2022 at 05:32, Ben Kochie wrote:

> With PromQL, the state label with a boolean value tends to be more
> user-friendly.
>
> For example, you can do things like `avg_over_time(foo{state="some
> state"}[10m])` to detect problems, but maybe ignore one or two state
> changes.
>
> Similarly, you can be more specific about states with
> `changes()`.
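The `avg_over_time` pattern above could feed an alerting rule along these lines (rule name, group name, and threshold are illustrative only):

```yaml
groups:
  - name: idrac-status
    rules:
      - alert: AmperageProbeNotOk
        # Fires when the probe spent less than 90% of the last 10m in the
        # "ok" state, so one or two brief state changes are ignored.
        expr: avg_over_time(idrac_amperage_probe_status{statusName="ok"}[10m]) < 0.9
        for: 5m
```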
>
> On Tue, Jul 19, 2022 at 12:21 AM Roman Baeriswyl 
> wrote:
>
>> True, the amount should not be an issue at all.
>> I wonder what is more convenient for the end user: having 10 states per
>> sensor but with their state name as label, or just having one with the
>> numerical value (which would allow > and < operations for alerts). I cannot
>> decide between those two.
>>
>> Regarding the other projects: I've looked through many projects. The first
>> one you mention needs to actually run on the Dell server itself, which I do
>> not want. The second contains only a few metrics and uses the Redfish API
>> (basically JSON, but I think a bit limited, especially for older systems).
>> There are also a lot of others, mostly based on prometheus/snmp_exporter,
>> but they also lack a lot of metrics. In my first try, I created my own
>> snmp_exporter generator (https://github.com/Roemer/idrac-snmp-exporter),
>> even with a fully working automatic pipeline. But I find the generator way
>> too restrictive.
>> I am now working on a Node-based exporter with express, prom-client and
>> net-snmp, and it seems to work fairly well. I can export what I want,
>> exactly how I want. This is the v2 branch, which only exposes one set of
>> metrics.
>>
>>
>> On Mon, 18 Jul 2022 at 23:14, Ben Kochie wrote:
>>
>>> Let's do the math:
>>>
>>> 100 servers * 10 states * 20 sensors = 20,000 metrics
>>>
>>> Worst case, say you have 5000 metrics each for 100 servers, that's still
>>> only 500,000 series. This will probably take about 4GiB of memory. It
>>> should still fit easily in an 8GiB memory instance.
>>>
>>> A single Prometheus can handle millions of metrics if you capacity plan
>>> accordingly.
>>>
>>> Rather than SNMP, have you looked at
>>> https://github.com/galexrt/dellhw_exporter? Or maybe
>>> https://github.com/mrlhansen/idrac_exporter?
>>>
>>> On Mon, Jul 18, 2022 at 10:50 PM Roman Baeriswyl 
>>> wrote:
>>>
 Thanks for the answer. Well, it is not only fans; there are dozens of
 other status fields as well (I'm writing an iDRAC SNMP exporter), and that
 for dozens of servers. Should I try to stick with the StateSet,
 or should I switch to just exposing the numerical representation?

 sup...@gmail.com wrote on Sunday, 17 July 2022 at 10:50:43 UTC+2:

> For things that have state changes you care about, I usually recommend
> EnumAsStateSet.
>
> The good news is that Prometheus deals with compressing the boolean
> values very well. And since all fans have the same set of states, those
> values are deduplicated in the index.
>
> So while it looks like a lot in the metric output, it stores well in
> the TSDB.
>
> The question is, how many fans on how many servers are we talking
> about?
>
> On Sun, Jul 17, 2022 at 6:26 AM Roman Baeriswyl 
> wrote:
>
>> Hey all
>> I am working on a Dell iDRAC SNMP Exporter and I struggle with
>> "Status" fields.
>> I think there are three main possibilities:
>>
>> 1. EnumAsStateSet
>> The downside here is that it can really clutter the output. For
>> example, the Dell fans have 10 possible statuses, so each fan has 10 fields
>> where only one is set to "1".
>>
>> 2. EnumAsInfo
>> The downside here is that it does not give a nice time history, and it is
>> probably harder to create alerts.
>>
>> 3. Use the numeric value
>> The downside here is that you need to do the enum lookup in the alert
>> / dashboard.
>>
>> What do you think is in general the best way for such status?
>>
>> Thanks for your input.
>>
>> --
>> 

[prometheus-users] Re: Https issue when using prometheus federation

2022-07-19 Thread Brian Candler
Can you show the exact curl command line, with just the hostname replaced 
with "example.com"?

Try:

  - targets:
- another_prom_server:9090

or

  - targets:
- another_prom_server:443

or whatever is appropriate.  (I note you set "scheme: https" - is that 
correct? Is this Prometheus running behind a reverse proxy or ingress 
proxy, or configured with a web config file to serve TLS?)
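For comparison, a scrape like the one configured below corresponds roughly to this curl (hostname, port, and job name are placeholders; adjust to your setup):

```shell
# -G turns the --data-urlencode parameters into a GET query string,
# matching what Prometheus sends to the /federate endpoint.
curl -G 'https://another_prom_server:9090/federate' \
  --data-urlencode 'match[]={job="jobname"}'
```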

On Tuesday, 19 July 2022 at 06:13:38 UTC+1 yansh...@gmail.com wrote:

> I am trying to configure the prometheus federation, but the target is not 
> up and the only error I can see is `read: connection reset by peer`
>
> The scrape_config I've added is as follows:
>
> - job_name: federate
>   scrape_interval: 30s
>   scrape_timeout: 15s
>   scheme: https
>   honor_labels: true
>   metrics_path: "/federate"
>   params:
> match[]:
> - '{job="jobname"}'
>   static_configs:
>   - targets:
> - another_prom_server
>
> But if I use the `curl` command from this central Prometheus server, it works 
> and can return the metrics correctly. 
>
> another_prom_server is the one deployed by the kube-prometheus-stack Helm 
> chart. Not sure what the issue is here? Could anyone help advise, thanks! 
>



Re: [prometheus-users] Extracting Group Data String for Alert Grouping

2022-07-19 Thread Brian Candler
Alternatively, I'm not sure about this, but I think you could just add 
these extra labels in your alerting rules.

Labels added there are templated, and there are various template functions 
available, including reReplaceAll:
https://prometheus.io/docs/prometheus/latest/configuration/template_reference/

However it would have to be repeated on every alerting rule where you 
wanted to do this sort of grouping.
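A sketch of what that could look like, reusing the `(.+) - (.+) - (.+)` split from the other reply; the alert and label names are made up, and the greedy match assumes exactly two " - " separators in ifAlias:

```yaml
groups:
  - name: interface-alerts
    rules:
      - alert: InterfaceDown
        expr: ifOperStatus == 2
        labels:
          # reReplaceAll(regex, replacement, text): keep only the middle
          # segment of ifAlias as the grouping key, available to group_by.
          link_group: '{{ reReplaceAll "(.+) - (.+) - (.+)" "$2" $labels.ifAlias }}'
```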

On Monday, 18 July 2022 at 21:50:16 UTC+1 sup...@gmail.com wrote:

> If you have your ifAlias well standardized you can use 
> metric_relabel_configs to extract data.
>
> metric_relabel_configs:
> - source_labels: [ifAlias]
>   regex: "(.+) - (.+) - (.+)"
>   replacement: "$1"
>   target_label: port_description
> - source_labels: [ifAlias]
>   regex: "(.+) - (.+) - (.+)"
>   replacement: "$2"
>   target_label: port_location
> - source_labels: [ifAlias]
>   regex: "(.+) - (.+) - (.+)"
>   replacement: "$3"
>   target_label: cable_id
>
> This will separate out your ifAlias into the component label parts.
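As a concrete (made-up) example of the effect, for a series with ifAlias="Uplink to core - rack A4 - CBL-0042" (ifOperStatus is just a stand-in metric name):

```
# before relabelling
ifOperStatus{ifAlias="Uplink to core - rack A4 - CBL-0042"} 1

# after the three metric_relabel_configs rules
ifOperStatus{ifAlias="Uplink to core - rack A4 - CBL-0042",port_description="Uplink to core",port_location="rack A4",cable_id="CBL-0042"} 1
```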
>
> On Mon, Jul 18, 2022 at 10:45 PM Brian Bowen  wrote:
>
>> Hi all,
>>
>> We are attempting to set up alerting with Prometheus and Alertmanager 
>> using some SNMP data. The basic use case is that we would like to group by 
>> a substring of label data rather than an entire label. Let's say our 
>> interfaces have the ifAlias label in the following format:
>> ifAlias=" - device 1 port 5 to device 2 port 7 - 
>> " and I want to group alerts only by "device 1 port 5 to device 2 
>> port 7" (assuming this description is consistent across  both devices), 
>> leaving the rest of the description and cableID out.
>>
>> Is there a way to do this? We have not had success extracting this as a 
>> separate label through snmp_exporter. I thought potentially we could do 
>> some regex matching under the group_by rules with Alertmanager, but I 
>> haven't seen any documentation/examples showing how to do this either.
>>
>> Let me know if there are any files I should attach.
>>
