Re: [prometheus-users] How is Prometheus Remembering this instance in this query?

2021-02-04 Thread Julius Volz
This is an effect of the automatic join (vector matching) that Prometheus
does around binary operators. By default, a binary operator looks for
series with exactly identical label sets on the left and right side of the
operator and then produces an identically labeled output series with that
operation applied. Series that do *not* find an exact correspondence on
the other side are simply dropped from the result. So if one of your
sides is a subset of the other (as in your case), only that subset will
find a label match and make it into the result. There is still a cost,
though: the extra series on the unfiltered side are first selected and then
thrown away by the binary operator matching, so depending on how large that
set would otherwise be, it may or may not be worth it (efficiency-wise) to
add the filter to every selector.
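
To make the matching concrete, here is a small sketch of what happens when
only the left-hand side carries the instance filter (the series and values
below are hypothetical, not taken from this thread):

# Left side: filtered, so it selects a single series.
node_memory_MemTotal{instance="hostname.example.com:9100"}
#   {instance="hostname.example.com:9100", job="node"} => 16e9

# Right side: unfiltered, so it selects one series per scraped instance.
node_memory_MemFree
#   {instance="hostname.example.com:9100", job="node"} => 4e9
#   {instance="other.example.com:9100",    job="node"} => 8e9

# node_memory_MemTotal{instance="hostname.example.com:9100"} - node_memory_MemFree
# Only label sets present on both sides survive, so the result is a single
# series for hostname.example.com:9100; the series for other.example.com:9100
# finds no match on the left and is dropped.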

I visualized this with a somewhat simpler scenario here; see the "Explain"
tab on this query, which shows how two of the three series on the right-hand
side do not find a match on the left-hand side:

https://demo.promlens.com/?l=bqxzm3klwDa

On Thu, Feb 4, 2021 at 7:33 PM Kristopher Kahn  wrote:

> I have this query:
>
> *((node_memory_MemTotal{instance="hostname.example.com:9100"} -
> (node_memory_MemFree + node_memory_Buffers + node_memory_Cached)) /
> node_memory_MemTotal) * 100*
>
> I'm surprised to find that every other metric after:
> *node_memory_MemTotal{instance="hostname.example.com:9100"}*
>
> "knows" to use that instance, such as node_memory_MemFree, and that I
> don't need to declare that instance every time thereafter so that the query
> grows into:
>
> *((node_memory_MemTotal{instance="hostname.example.com:9100"} -
> (node_memory_MemFree{instance="hostname.example.com:9100"} +
> node_memory_Buffers{instance="hostname.example.com:9100"} +
> node_memory_Cached{instance="hostname.example.com:9100"})) /
> node_memory_MemTotal{instance="hostname.example.com:9100"}) * 100*
>
> How does Prometheus know to keep and use that same returned instance for
> the entire query?
>


-- 
Julius Volz
PromLabs - promlabs.com



[prometheus-users] How is Prometheus Remembering this instance in this query?

2021-02-04 Thread Kristopher Kahn
I have this query:

*((node_memory_MemTotal{instance="hostname.example.com:9100"} - 
(node_memory_MemFree + node_memory_Buffers + node_memory_Cached)) / 
node_memory_MemTotal) * 100 *

I'm surprised to find that every other metric after:
*node_memory_MemTotal{instance="hostname.example.com:9100"}*

"knows" to use that instance, such as node_memory_MemFree, and that I don't 
need to declare that instance every time thereafter so that the query grows 
into:

*((node_memory_MemTotal{instance="hostname.example.com:9100"} - 
(node_memory_MemFree{instance="hostname.example.com:9100"} + 
node_memory_Buffers{instance="hostname.example.com:9100"} + 
node_memory_Cached{instance="hostname.example.com:9100"})) / 
node_memory_MemTotal{instance="hostname.example.com:9100"}) * 100* 

How does Prometheus know to keep and use that same returned instance for 
the entire query? 



Re: [prometheus-users] Trouble scraping Custom Python Metric in OpenShift 3.11 + Prometheus

2021-02-04 Thread Jeff Tippey
Thanks Stuart,

I have looked around and I can't find it. I'd be grateful if someone
could take a look at the labels of one of these dropped pods and at my
scrape config, and let me know if you see where it is dropped.

Not working Config

__address__=""
__meta_kubernetes_endpoint_address_target_kind="Pod"
__meta_kubernetes_endpoint_address_target_name="rcax-ios0-br-1-56z6v"
__meta_kubernetes_endpoint_port_name="queuemonitor"
__meta_kubernetes_endpoint_port_protocol="TCP"
__meta_kubernetes_endpoint_ready="true"
__meta_kubernetes_endpoints_name="rcax-ios-0"
__meta_kubernetes_namespace="envb"
__meta_kubernetes_pod_annotation_openshift_io_deployment_config_latest_version="1"
__meta_kubernetes_pod_annotation_openshift_io_deployment_config_name="rcax-ios0-br"
__meta_kubernetes_pod_annotation_openshift_io_deployment_name="rcax-ios0-br-1"
__meta_kubernetes_pod_annotation_openshift_io_scc="restricted"
__meta_kubernetes_pod_container_name="queuemonitor"
__meta_kubernetes_pod_container_port_name=""
__meta_kubernetes_pod_container_port_number="8001"
__meta_kubernetes_pod_container_port_protocol="TCP"
__meta_kubernetes_pod_controller_kind="ReplicationController"
__meta_kubernetes_pod_controller_name="rcax-ios0-br-1"
__meta_kubernetes_pod_host_ip=""
__meta_kubernetes_pod_ip=""
__meta_kubernetes_pod_label_component="rcax-ios0-br"
__meta_kubernetes_pod_label_deployment="rcax-ios0-br-1"
__meta_kubernetes_pod_label_deploymentconfig="rcax-ios0-br"
__meta_kubernetes_pod_label_scac="rcax"
__meta_kubernetes_pod_label_subsystem="rcax-ios0"
__meta_kubernetes_pod_name="rcax-ios0-br-1-56z6v"
__meta_kubernetes_pod_node_name="stb-node002"
__meta_kubernetes_pod_ready="true"
__meta_kubernetes_pod_uid="356fe668-5a73-11eb-bf8a-0050568584bc"
__meta_kubernetes_service_annotation_prometheus_io_port="8001"
__meta_kubernetes_service_annotation_prometheus_io_scheme="http"
__meta_kubernetes_service_annotation_prometheus_io_scrape="true"
__meta_kubernetes_service_label_name="rcax-ios0"
__meta_kubernetes_service_name="rcax-ios-0"
__metrics_path__="/metrics"
__scheme__="http"
job="kubernetes-service-endpoints"


Scrape Config

- job_name: 'kubernetes-service-endpoints'
  tls_config:
    ca_file: /var/run/secrets/kubernetes.io/serviceaccount/ca.crt
    # TODO: this should be per target
    insecure_skip_verify: true
  kubernetes_sd_configs:
  - role: endpoints
  relabel_configs:
  # only scrape infrastructure components
  - source_labels: [__meta_kubernetes_namespace]
    action: keep
    regex: 'default|logging|metrics|kube-.+|openshift|openshift-.+|envb|envc'
  # drop infrastructure components managed by other scrape targets
  - source_labels: [__meta_kubernetes_service_name]
    action: drop
    regex: 'prometheus-node-exporter'
  # only those that have requested scraping
  - source_labels: [__meta_kubernetes_service_annotation_prometheus_io_scrape]
    action: keep
    regex: true
  - source_labels: [__meta_kubernetes_service_annotation_prometheus_io_scheme]
    action: replace
    target_label: __scheme__
    regex: (https?)
  - source_labels: [__meta_kubernetes_service_annotation_prometheus_io_path]
    action: keep
    target_label: __metrics_path__
    regex: (.+)
  - source_labels: [__address__, __meta_kubernetes_service_annotation_prometheus_io_port]
    action: replace
    target_label: __address__
    regex: (.+)(?::\d+);(\d+)
    replacement: $1:$2
  - action: labelmap
    regex: __meta_kubernetes_service_label_(.+)
  - source_labels: [__meta_kubernetes_namespace]
    action: replace
    target_label: kubernetes_namespace
  - source_labels: [__meta_kubernetes_service_name]
    action: replace
    target_label: kubernetes_name
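
One detail worth comparing against the upstream example configuration: the
prometheus.io/path rule above uses action: keep with regex (.+), which would
drop any target whose *service* does not carry that annotation (the service
annotations quoted below only set scrape, scheme and port). That is only an
observation, not a verified diagnosis. For reference, a sketch of the usual
form of that rule as I recall it from the example Kubernetes config (not
checked against the exact revision linked below):

  - source_labels: [__meta_kubernetes_service_annotation_prometheus_io_path]
    action: replace
    target_label: __metrics_path__
    regex: (.+)

With a replace action, targets without the annotation simply keep the default
/metrics path instead of being dropped at that step.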


 
On Monday, January 25, 2021 at 4:35:28 PM UTC-6 Stuart Clark wrote:

> On 25/01/2021 22:15, Jeff Tippey wrote:
>
> Hi, 
>
> I have a custom Python container I wrote that exports metrics on port
> 8001 at "/metrics". The setup used to work in OpenShift 3.6 with
> Prometheus. I'm using the Prometheus configuration from github at the
> prometheus github link.
>
> In my deployment YAMLs for my container I have the annotations:
>
>   annotations:
>     prometheus.io/path: /metrics
>     prometheus.io/port: "8001"
>     prometheus.io/scrape: "true"
>
> and in the service definition I have:
>
>   annotations:
>     prometheus.io/scrape: "true"
>     prometheus.io/scheme: http
>     prometheus.io/port: "8001"
>
> To repeat, this worked with the 3.6 configuration. However, when I open up
> Prometheus, this is not a target. Under Targets, none of my service
> endpoints are showing up. I do see them in t

[prometheus-users] Reg: storage retention not working.

2021-02-04 Thread RAJESH DASARI
Hi,

I am running Prometheus with the option --storage.tsdb.retention.time=24h,
but I see that storage space is not cleared every 24 hours. Data from months
ago is still stored, the disk is getting full, and the Prometheus service is
not starting for this reason.

prometheus[30086]: level=error ts=2021-01-24T04:04:00.908Z
caller=main.go:740 err="opening storage failed: open
/opt/pm/prometheus/wal/00131056: no space left on device"
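
For reference, a sketch of how the retention flags are typically combined on
the command line; the paths and the size cap below are assumptions,
--storage.tsdb.retention.size needs a reasonably recent Prometheus 2.x
release, and this is not a verified fix for the situation above:

prometheus \
  --config.file=/opt/pm/prometheus.yml \
  --storage.tsdb.path=/opt/pm/prometheus \
  --storage.tsdb.retention.time=24h \
  --storage.tsdb.retention.size=50GB

Here --storage.tsdb.retention.time drops blocks older than 24h, while
--storage.tsdb.retention.size additionally caps total on-disk size (the 50GB
value, like the paths, is made up for illustration).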

Could someone please help explain why this can happen and how to avoid it?

This issue is seen on one instance; we run Prometheus on multiple
instances. The Prometheus version used was 0.18.1.

Thanks,
Rajesh Dasari.



[prometheus-users] Prometheus azure SD throttling Azure API limit

2021-02-04 Thread Kirti Ranjan Parida
Hello everyone,

I have a question regarding azure_sd_configs. My job is discovering 1600
targets (refresh_interval=5m), of which only 88 are active targets; the rest
are dropped by Prometheus relabelling. However, we are hitting the
compute/network API limit just because of this, and other applications are
not able to make API calls.

   - Does Prometheus make describe calls to all the instances/NICs that it
   discovers?
   - Is there any other way to restrict the number of API calls, since
   azure_sd_configs does not support filters?

Any other workaround?
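
One knob that does exist and relates to the question above: the discovery
refresh_interval in azure_sd_configs controls how often Prometheus re-queries
the Azure API, so raising it lowers the API call rate at the cost of slower
target updates. A minimal sketch only; the job name, subscription placeholder
and port are assumptions, authentication settings are omitted, and this is
not a verified fix for the throttling:

scrape_configs:
  - job_name: 'azure-vms'                    # hypothetical job name
    azure_sd_configs:
      - subscription_id: <subscription-id>   # placeholder
        refresh_interval: 30m                # default is 5m; a longer interval means fewer Azure API calls
        port: 9100                           # hypothetical exporter port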



[prometheus-users] Re: label_replace - extract string from label_value

2021-02-04 Thread Nabil L.
Hi Matthias,

This is exactly what I am looking for!

 Thanks a lot for your help!

Regards,
Neb

Le mardi 2 février 2021 à 08:52:07 UTC+1, Matthias Rieber a écrit :

> Hello,
>
> On Mon, 1 Feb 2021, Nabil L. wrote:
>
> > Any help ? :)
>
> maybe:
>
> sum without (image) (label_replace(kube_pod_container_info, "release", 
> "$1", "image", ".*:([^:]*)"))
>
> Regards,
> Matthias
>
> >
> > Le lundi 1 février 2021 à 21:10:14 UTC+1, Nabil L. a écrit :
> >
> >> Hi Folks,
> >>
> >> I wonder if someone know how to extract a string from a label_value and
> >> using the the label_replace .
> >>
> >> For example, I have the metric *kube_pod_container_info*, which returns
> >> the following:
> >>
> >> {container="d2conf",
> >> container_id="docker://9e9fdd34fa96abbf309c279a19fcae828646a7b11169292c7da05e3349c9",
> >> image_id="docker-pullable://docker-registry/toto-jx/deploy-ws@sha256:d6c2db22b2b677a7c6017d6f3eeb4055750750349a48b716d6215721ccce1aa8",
> >> instance="kube-state-metrics.kube-system.svc.cluster.local:8080",
> >> job="kube-state-metrics", namespace="toto", pod="d2conf-1",
> >> image="docker-registry/toto/deploy-ws:1.4.2"}
> >>
> >> Using label_replace, the label *image* is replaced by *release*, but what
> >> I also need is to extract the string after the "*:*" character in the
> >> label value (the "1.4.2" at the end of the image value below):
> >>
> >> sum without (image) (label_replace(kube_pod_container_info, "release",
> >> "$1", "image", "(.*)"))
> >>
> >>
> >> {container="d2conf",
> >> container_id="docker://9e9fdd34fa96abbf309c279a19fcae828646a7b11169292c7da05e3349c9",
> >> image_id="docker-pullable://docker-registry/toto-jx/deploy-ws@sha256:d6c2db22b2b677a7c6017d6f3eeb4055750750349a48b716d6215721ccce1aa8",
> >> instance="kube-state-metrics.kube-system.svc.cluster.local:8080",
> >> job="kube-state-metrics", namespace="toto", pod="d2conf-1",
> >> *release*="docker-registry/toto/deploy-ws:1.4.2"}
> >>
> >>
> >> any idea?
> >>
> >> Thanks in advance
> >> Neb.
> >>
> >
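
To spell out what the suggested expression does with the example metric
quoted above (a sketch using only values from this thread; the regex
".*:([^:]*)" captures everything after the last ":"):

# image label on the input series:
#   image="docker-registry/toto/deploy-ws:1.4.2"

sum without (image) (
  label_replace(
    kube_pod_container_info,
    "release", "$1",          # new label and its value (first capture group)
    "image", ".*:([^:]*)"     # source label and anchored regex; $1 captures "1.4.2"
  )
)

# Result: the series keeps its other labels, gains release="1.4.2", and the
# original image label is aggregated away by the "sum without (image)".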



[prometheus-users] Deploy Prometheus on a Docker Swarm Cluster specifically on a worker node

2021-02-04 Thread thomas.b...@gmail.com
Hello,

I want to know whether Prometheus has to be installed on a Docker Swarm
manager, or whether it can be installed on a worker node and still retrieve
Docker Swarm cluster metrics.

I actually encountered the following issue:
monitoring_prometheus.1.ntwl9vdheus9@gb4p822c3 | level=error
ts=2021-02-04T10:09:38.463Z caller=refresh.go:98 component="discovery
manager scrape" discovery=dockerswarm msg="Unable to refresh target groups"
err="error while listing swarm services: Error response from daemon: This
node is not a swarm manager. Worker nodes can't be used to view or modify
cluster state. Please run this command on a manager node or promote the
current node to a manager."

This seems to mean that the worker node cannot query the manager for the
cluster state (which makes sense). Installing such services on a Docker Swarm
manager is not among the Swarm best practices, so is there any possibility,
with an example or some documentation, to use the Docker HTTP API instead of
/var/run/docker.sock?

This is my prometheus.yml configuration:

global:
  scrape_interval: 30s
  scrape_timeout: 30s
  evaluation_interval: 5m
  external_labels:
    monitor: testing-monitoring

scrape_configs:
  # Prometheus
  - job_name: prometheus
    static_configs:
      - targets:
          - prometheus:9090

  # Docker Swarm job
  - job_name: 'testing-dockerswarm'
    dockerswarm_sd_configs:
      - host: unix:///var/run/docker.sock
        role: tasks
    relabel_configs:
      - source_labels: [__address__]
        separator: ':'
        regex: '(.*):(8089)'
        target_label: __address__
        replacement: '${1}:8080'
      - source_labels: [__address__]
        regex: '(.+)\:(9100|8080|9307|9308)'
        action: keep
      - source_labels: [__meta_dockerswarm_network_name]
        regex: 'monitoring_.+'
        action: keep
      - source_labels: [__meta_dockerswarm_task_desired_state]
        regex: 'running'
        action: keep
      - source_labels: [__meta_dockerswarm_node_hostname]
        target_label: node_name
      - source_labels: [__meta_dockerswarm_node_id]
        target_label: node_id
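
For what it's worth, the host field of dockerswarm_sd_configs also accepts a
TCP address, so a Prometheus instance running on a worker could in principle
point service discovery at a manager's Docker API endpoint instead of the
local socket. A minimal sketch only; the manager address, port and TLS file
paths are assumptions, and exposing the Docker API over TCP should be secured
(for example with mutual TLS):

  - job_name: 'testing-dockerswarm'
    dockerswarm_sd_configs:
      - host: tcp://swarm-manager.example.com:2376    # hypothetical manager endpoint
        role: tasks
        tls_config:                                   # assuming the daemon API requires client TLS
          ca_file: /etc/prometheus/docker-ca.pem      # hypothetical paths
          cert_file: /etc/prometheus/docker-cert.pem
          key_file: /etc/prometheus/docker-key.pem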

Thanks in advance for your time.

Kind Regards,
Thomas



Re: [prometheus-users] Prometheus.service startup failed

2021-02-04 Thread Julius Volz
Try "journalctl -u prometheus" to see the full logs of the Prometheus
service to see what the error is.

On Thu, Feb 4, 2021 at 6:32 AM Surya Prakash K 
wrote:

> Team,
>
> Prometheus.service startup failed
>
> version of prometheus: 2.3.2
>
> Can someone help me in this regard.
> [image: Capture.JPG]
>
> Thanks & Regards,
> Surya Prakash K
>


-- 
Julius Volz
PromLabs - promlabs.com



Re: [prometheus-users] How to config Basic Auth with File-based Service Discovery?

2021-02-04 Thread Ben Teitelbaum
> I'd suggest using the same password. What would be the problem with
> that?

Proprietary data providers, different administrative control of upstream
systems. Not realistic to assume one set of HTTP basic auth credentials
across all targets.


> There is much more than just the UI; you could use relabelling to set
> these, so we would need to filter the original labels, the transformed
> ones, etc.
>

That makes sense. Thanks for explaining.

-- ben



[prometheus-users] stream error: stream ID 1; HTTP_1_1_REQUIRED

2021-02-04 Thread E C.
Hello,

I 'm trying to figure what's wrong but don't have enough knowledge to 
understand the problem/solution to this.
While 
running 
localhost/probe?target=https://www.mydomain.com&module=http_2xx&debug=true
with logs

Logs for the probe:
ts=2021-02-04T08:02:31.351967671Z caller=main.go:304 module=http_2xx 
target=https://www.mydomain.com level=info msg="Beginning probe" probe=http 
timeout_seconds=119.5
ts=2021-02-04T08:02:31.352938804Z caller=http.go:342 module=http_2xx 
target=https://www.mydomain.com level=info msg="Resolving target address" 
ip_protocol=ip4
ts=2021-02-04T08:02:31.373379127Z caller=http.go:342 module=http_2xx 
target=https://www.mydomain.com level=info msg="Resolved target address" 
ip= ip-address
ts=2021-02-04T08:02:31.374250705Z caller=client.go:252 module=http_2xx 
target=https://www.mydomain.com level=info msg="Making HTTP request" 
url=https://ip-address host=www.mydomain.com
ts=2021-02-04T08:02:31.873059998Z caller=main.go:119 module=http_2xx 
target=https://www.mydomain.com level=error msg="Error for HTTP request" 
err="Get \"https://ip-address\": stream error: stream ID 1; 
HTTP_1_1_REQUIRED"
ts=2021-02-04T08:02:31.873204403Z caller=main.go:119 module=http_2xx 
target=https://www.mydomain.com level=info msg="Response timings for 
roundtrip" roundtrip=0 start=2021-02-04T08:02:31.375304633Z 
dnsDone=2021-02-04T08:02:31.375304633Z 
connectDone=2021-02-04T08:02:31.411457835Z 
gotConn=2021-02-04T08:02:31.60063595Z responseStart=0001-01-01T00:00:00Z 
end=0001-01-01T00:00:00Z
ts=2021-02-04T08:02:31.873441363Z caller=main.go:304 module=http_2xx 
target=https://www.mydomain.com level=error msg="Probe failed" 
duration_seconds=0.521195622



Metrics that would have been returned:
# HELP probe_dns_lookup_time_seconds Returns the time taken for probe dns 
lookup in seconds
# TYPE probe_dns_lookup_time_seconds gauge
probe_dns_lookup_time_seconds 0.02059956
# HELP probe_duration_seconds Returns how long the probe took to complete 
in seconds
# TYPE probe_duration_seconds gauge
probe_duration_seconds 0.521195622
# HELP probe_failed_due_to_regex Indicates if probe failed due to regex
# TYPE probe_failed_due_to_regex gauge
probe_failed_due_to_regex 0
# HELP probe_http_content_length Length of http content response
# TYPE probe_http_content_length gauge
probe_http_content_length 0
# HELP probe_http_duration_seconds Duration of http request by phase, 
summed over all redirects
# TYPE probe_http_duration_seconds gauge
probe_http_duration_seconds{phase="connect"} 0.036153461
probe_http_duration_seconds{phase="processing"} 0
probe_http_duration_seconds{phase="resolve"} 0.02059956
probe_http_duration_seconds{phase="tls"} 0.22532878
probe_http_duration_seconds{phase="transfer"} 0
# HELP probe_http_redirects The number of redirects
# TYPE probe_http_redirects gauge
probe_http_redirects 0
# HELP probe_http_ssl Indicates if SSL was used for the final redirect
# TYPE probe_http_ssl gauge
probe_http_ssl 0
# HELP probe_http_status_code Response HTTP status code
# TYPE probe_http_status_code gauge
probe_http_status_code 0
# HELP probe_http_uncompressed_body_length Length of uncompressed response 
body
# TYPE probe_http_uncompressed_body_length gauge
probe_http_uncompressed_body_length 0
# HELP probe_http_version Returns the version of HTTP of the probe response
# TYPE probe_http_version gauge
probe_http_version 0
# HELP probe_ip_addr_hash Specifies the hash of IP address. It's useful to 
detect if the IP address changes.
# TYPE probe_ip_addr_hash gauge
probe_ip_addr_hash 9.1643584e+07
# HELP probe_ip_protocol Specifies whether probe ip protocol is IP4 or IP6
# TYPE probe_ip_protocol gauge
probe_ip_protocol 4
# HELP probe_success Displays whether or not the probe was a success
# TYPE probe_success gauge
probe_success 0


Module configuration:
prober: http
http:
  valid_http_versions:
  - HTTP/1.1
  - HTTP/2.0
  preferred_ip_protocol: ip4
  ip_protocol_fallback: true
  method: GET
tcp:
  ip_protocol_fallback: true
icmp:
  ip_protocol_fallback: true
dns:
  ip_protocol_fallback: true

It returns probe_success 0 and an error saying HTTP/1.1 is required, but I
already have HTTP/1.1 listed in valid_http_versions.

Thank you for your time.
