Re: [prometheus-users] fading out sample resolution for samples from longer ago possible?

2023-02-20 Thread Stuart Clark

On 21/02/2023 03:29, Christoph Anton Mitterer wrote:

Hey.

I wondered whether one can do with Prometheus something similar to what 
is possible with systems using RRD (e.g. Ganglia).


Depending on the kind of metrics, like those from the node 
exporter, one may want a very high sample resolution (and thus a short 
scraping interval) for, say, the last 2 days... but the further one 
goes back, the less interesting that data becomes, at least at that 
resolution (ever looked at how much IO a server had 2 years ago, per 15s?).


What one may want, however, is a rough overview of these metrics for 
those older time periods, e.g. in order to see some trends.



For other values, e.g. the total used disk space on a shared 
filesystem or maybe a tape library, one may not need such high 
resolution for the last 2 days, but instead want the data (with low 
sample resolution, e.g. 1 sample per day) going back much longer, like 
the last 10 years.



With Ganglia/RRD one would then simply use multiple RRAs (archives), 
each for a different time span and with a different resolution... and 
RRD would consolidate its samples accordingly.



Can anything like this be done with Prometheus? Or is that completely 
out of scope?



I saw that one can set the retention period, but that seems to affect 
everything.


So even if I have, e.g., my low-resolution tape library total size, 
which I could scrape only every hour or so, it wouldn't really help me: 
in order to keep that data for, say, the last 10 years, I'd need to set 
the retention time accordingly.


But then the high-resolution samples, like those from the node 
exporter, would also be kept that long (at full resolution).


Prometheus itself cannot do downsampling, but other related projects 
such as Cortex & Thanos have such features.


--
Stuart Clark



[prometheus-users] fading out sample resolution for samples from longer ago possible?

2023-02-20 Thread Christoph Anton Mitterer
Hey.

I wondered whether one can do with Prometheus something similar to what is 
possible with systems using RRD (e.g. Ganglia).

Depending on the kind of metrics, like those from the node exporter, 
one may want a very high sample resolution (and thus a short scraping 
interval) for, say, the last 2 days... but the further one goes back, the 
less interesting that data becomes, at least at that resolution (ever 
looked at how much IO a server had 2 years ago, per 15s?).

What one may want, however, is a rough overview of these metrics for those 
older time periods, e.g. in order to see some trends.


For other values, e.g. the total used disk space on a shared filesystem or 
maybe a tape library, one may not need such high resolution for the last 2 
days, but instead want the data (with low sample resolution, e.g. 1 
sample per day) going back much longer, like the last 10 years.


With Ganglia/RRD one would then simply use multiple RRAs (archives), each 
for a different time span and with a different resolution... and RRD would 
consolidate its samples accordingly.


Can anything like this be done with Prometheus? Or is that completely out 
of scope?


I saw that one can set the retention period, but that seems to affect 
everything.

So even if I have, e.g., my low-resolution tape library total size, which I 
could scrape only every hour or so, it wouldn't really help me: in order to 
keep that data for, say, the last 10 years, I'd need to set the retention 
time accordingly.

But then the high-resolution samples, like those from the node exporter, 
would also be kept that long (at full resolution).
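
To illustrate: the differing scrape cadences themselves are easy to express 
per job, it's only the retention that is global. A minimal prometheus.yml 
sketch (job names, targets and ports are made up):

```
scrape_configs:
  # high-resolution metrics, e.g. node exporter
  - job_name: node
    scrape_interval: 15s
    static_configs:
      - targets: ['node1:9100']         # hypothetical target

  # slow-moving capacity metrics, scraped far less often
  # (note: intervals much beyond a few minutes interact badly with
  #  Prometheus' 5-minute staleness handling)
  - job_name: tape_library
    scrape_interval: 5m
    static_configs:
      - targets: ['tape-exporter:9116'] # hypothetical target
```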


Thanks,
Chris.



Re: [prometheus-users] metric_relabel_configs not dropping metrics

2023-02-20 Thread Stuart Clark

On 20/02/2023 23:14, Jihui Yang wrote:
I'm using prometheus-operator. It only allows loading 
additionalScrapeConfigs to append to the end of the config file. The 
other config jobs were added as part of loading prometheus-operator. 
I'm not sure I can change those.


The other jobs are probably from PodMonitor & ServiceMonitor objects, so 
you'd need to adjust those.
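
For a ServiceMonitor that would look roughly like this (names and namespace 
are placeholders); a PodMonitor takes the same metricRelabelings list under 
its podMetricsEndpoints:

```
apiVersion: monitoring.coreos.com/v1
kind: ServiceMonitor
metadata:
  name: example-app        # placeholder
  namespace: monitoring    # placeholder
spec:
  selector:
    matchLabels:
      app: example-app
  endpoints:
    - port: metrics
      # dropped after scraping, before ingestion
      metricRelabelings:
        - sourceLabels: [__name__]
          regex: (response_total|response_latency_ms_count|response_latency_ms_sum)
          action: drop
```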


--
Stuart Clark



Re: [prometheus-users] metric_relabel_configs not dropping metrics

2023-02-20 Thread Jihui Yang
I'm using prometheus-operator. It only allows loading additionalScrapeConfigs 
to append to the end of the config file. The other config jobs were added as 
part of loading prometheus-operator. I'm not sure I can change those.
On Monday, February 20, 2023 at 3:07:25 PM UTC-8 Stuart Clark wrote:

> On 20/02/2023 22:33, Jihui Yang wrote:
> > I think these metrics are being scraped from another job. What I want 
> > is to drop any scraped metrics whose names match the regex I provided.
> Then you need to add the relabel config to that other job.
>
> -- 
> Stuart Clark
>
>



Re: [prometheus-users] metric_relabel_configs not dropping metrics

2023-02-20 Thread Stuart Clark

On 20/02/2023 22:33, Jihui Yang wrote:
I think these metrics are being scraped from another job. What I want 
is to drop any scraped metrics whose names match the regex I provided.

Then you need to add the relabel config to that other job.

--
Stuart Clark



Re: [prometheus-users] metric_relabel_configs not dropping metrics

2023-02-20 Thread Jihui Yang
I think these metrics are being scraped from another job. What I want is to
drop any scraped metrics whose names match the regex I provided.

On Mon, Feb 20, 2023, 2:16 PM Stuart Clark  wrote:

> On 20/02/2023 19:10, Jihui Yang wrote:
> > Hi, so I added this section to match all namespaces:
> > ```
> > kubernetes_sd_configs:
> >   - role: endpoints
> >     kubeconfig_file: ""
> >     follow_redirects: true
> >     namespaces:
> >       names:
> >         - example1
> >         - example2
> >         - example3
> > ```
> > as well as
> > ```
> > authorization:
> >   type: Bearer
> >   credentials_file: /var/run/secrets/kubernetes.io/serviceaccount/token
> > ```
> > I turned on debug logging, and I'm getting
> > ```
> > ts=2023-02-20T19:08:30.169Z caller=scrape.go:1292 level=debug
> > component="scrape manager" scrape_pool=drop_response_metrics
> > target=http://10.10.188.252:25672/metrics msg="Scrape failed" err="Get
> > \"http://10.10.188.252:25672/metrics\": EOF"
> > ts=2023-02-20T19:08:30.465Z caller=scrape.go:1292 level=debug
> > component="scrape manager" scrape_pool=drop_response_metrics
> > target=http://10.10.152.96:10043/metrics msg="Scrape failed"
> > err="server returned HTTP status 500 Internal Server Error"
> > ts=2023-02-20T19:08:30.510Z caller=scrape.go:1292 level=debug
> > component="scrape manager" scrape_pool=drop_response_metrics
> > target=http://10.10.241.97:9100/metrics msg="Scrape failed"
> > err="server returned HTTP status 400 Bad Request"
> > ```
> >
> > The metrics are still not dropped
>
> I'm not really following exactly what your config is.
>
> Those errors suggest that at least some of the scrapes are failing.
>
> When you say "the metrics are still not dropped" are these metrics that
> are being scraped in this job?
>
> --
> Stuart Clark
>
>



Re: [prometheus-users] metric_relabel_configs not dropping metrics

2023-02-20 Thread Stuart Clark

On 20/02/2023 19:10, Jihui Yang wrote:

Hi, so I added this section to match all namespaces:
```
kubernetes_sd_configs:
  - role: endpoints
    kubeconfig_file: ""
    follow_redirects: true
    namespaces:
      names:
        - example1
        - example2
        - example3
```
as well as
```
authorization:
     type: Bearer
     credentials_file: /var/run/secrets/kubernetes.io/serviceaccount/token
```
I turned on debug logging, and I'm getting
```
ts=2023-02-20T19:08:30.169Z caller=scrape.go:1292 level=debug 
component="scrape manager" scrape_pool=drop_response_metrics 
target=http://10.10.188.252:25672/metrics msg="Scrape failed" err="Get 
\"http://10.10.188.252:25672/metrics\": EOF"
ts=2023-02-20T19:08:30.465Z caller=scrape.go:1292 level=debug 
component="scrape manager" scrape_pool=drop_response_metrics 
target=http://10.10.152.96:10043/metrics msg="Scrape failed" 
err="server returned HTTP status 500 Internal Server Error"
ts=2023-02-20T19:08:30.510Z caller=scrape.go:1292 level=debug 
component="scrape manager" scrape_pool=drop_response_metrics 
target=http://10.10.241.97:9100/metrics msg="Scrape failed" 
err="server returned HTTP status 400 Bad Request"

```

The metrics are still not dropped


I'm not really following exactly what your config is.

Those errors suggest that at least some of the scrapes are failing.

When you say "the metrics are still not dropped" are these metrics that 
are being scraped in this job?
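
A quick way to check which jobs those series actually come from is a query 
like this (same regex as your drop rule):

```
count by (job) ({__name__=~"response_total|response_latency_ms_count|response_latency_ms_sum"})
```

Any job that shows up there other than drop_response_metrics needs its own 
metric_relabel_configs (or metricRelabelings on the corresponding 
PodMonitor/ServiceMonitor).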


--
Stuart Clark



Re: [prometheus-users] metric_relabel_configs not dropping metrics

2023-02-20 Thread Jihui Yang
Hi, so I added this section to match all namespaces:
```
kubernetes_sd_configs:
  - role: endpoints
    kubeconfig_file: ""
    follow_redirects: true
    namespaces:
      names:
        - example1
        - example2
        - example3
```
as well as 
```
authorization: 
 type: Bearer 
 credentials_file: /var/run/secrets/kubernetes.io/serviceaccount/token
```
I turned on debug logging, and I'm getting
```
ts=2023-02-20T19:08:30.169Z caller=scrape.go:1292 level=debug 
component="scrape manager" scrape_pool=drop_response_metrics 
target=http://10.10.188.252:25672/metrics msg="Scrape failed" err="Get 
\"http://10.10.188.252:25672/metrics\": EOF"
ts=2023-02-20T19:08:30.465Z caller=scrape.go:1292 level=debug 
component="scrape manager" scrape_pool=drop_response_metrics 
target=http://10.10.152.96:10043/metrics msg="Scrape failed" err="server 
returned HTTP status 500 Internal Server Error"
ts=2023-02-20T19:08:30.510Z caller=scrape.go:1292 level=debug 
component="scrape manager" scrape_pool=drop_response_metrics 
target=http://10.10.241.97:9100/metrics msg="Scrape failed" err="server 
returned HTTP status 400 Bad Request"
```

The metrics are still not dropped
On Monday, February 20, 2023 at 7:14:23 AM UTC-8 Stuart Clark wrote:

> On 17/02/2023 22:02, Jihui Yang wrote:
>
> I'm using prometheus-operator's additionalScrapeConfigs to add metric 
> drop rules. Example:
>
> ```
> - job_name: drop_response_metrics 
>   honor_timestamps: true 
>   scrape_interval: 30s 
>   scrape_timeout: 10s 
>   metrics_path: /metrics 
>   scheme: http 
>   follow_redirects: true 
>   metric_relabel_configs: 
>   - source_labels: [__name__]
>     separator: ;
>     regex: (response_total|response_latency_ms_count|response_latency_ms_sum)
>     replacement: $1
>     action: drop
> ```
>
> The config is successfully loaded into Prometheus and I can view it at 
> the `/config` endpoint. But for some reason I can still see the metrics. 
> Can you let me know what to do?
>
> Is that the full config? I'm not seeing a Service Discovery section (e.g. 
> Kubernetes or file based) to tell Prometheus where to scrape from.
>
> -- 
> Stuart Clark
>
>



Re: [prometheus-users] metric_relabel_configs not dropping metrics

2023-02-20 Thread Stuart Clark

On 17/02/2023 22:02, Jihui Yang wrote:
I'm using prometheus-operator's additionalScrapeConfigs to add metric 
drop rules. Example:


```
- job_name: drop_response_metrics
  honor_timestamps: true
  scrape_interval: 30s
  scrape_timeout: 10s
  metrics_path: /metrics
  scheme: http
  follow_redirects: true
  metric_relabel_configs:
  - source_labels: [__name__]
    separator: ;
    regex: (response_total|response_latency_ms_count|response_latency_ms_sum)
    replacement: $1
    action: drop
```

The config is successfully loaded into Prometheus and I can view it at 
the `/config` endpoint. But for some reason I can still see the metrics. 
Can you let me know what to do?


Is that the full config? I'm not seeing a Service Discovery section 
(e.g. Kubernetes or file based) to tell Prometheus where to scrape from.
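
For reference, a complete job would need something along these lines (the 
kubernetes_sd_configs role and namespace here are purely illustrative):

```
- job_name: drop_response_metrics
  scrape_interval: 30s
  metrics_path: /metrics
  kubernetes_sd_configs:
    - role: endpoints
      namespaces:
        names:
          - example1          # placeholder namespace
  metric_relabel_configs:
    - source_labels: [__name__]
      regex: (response_total|response_latency_ms_count|response_latency_ms_sum)
      action: drop
```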


--
Stuart Clark



[prometheus-users] MySQL scraper for shared-instance database

2023-02-20 Thread Alexandra Hilzinger
Hi All

We have shared instances, i.e. one DB host with databases from different 
projects on it.
In our current setup we are only able to collect metrics for the whole 
instance with Prometheus.
However, we need to break the data down to the individual DBs.
Is there an option with a scraper or similar?
I haven't really found anything on the internet.

I am thankful for any advice. 

Kind Regards 
Alex



[prometheus-users] metric_relabel_configs not dropping metrics

2023-02-20 Thread Jihui Yang
I'm using prometheus-operator's additionalScrapeConfigs to add metric drop 
rules. Example:

```
- job_name: drop_response_metrics 
  honor_timestamps: true 
  scrape_interval: 30s 
  scrape_timeout: 10s 
  metrics_path: /metrics 
  scheme: http 
  follow_redirects: true 
  metric_relabel_configs: 
  - source_labels: [__name__]
    separator: ;
    regex: (response_total|response_latency_ms_count|response_latency_ms_sum)
    replacement: $1
    action: drop
```

The config is successfully loaded into Prometheus and I can view it at the 
`/config` endpoint. But for some reason I can still see the metrics. Can 
you let me know what to do?
