Re: [prometheus-users] Backfilling data into Prometheus

2022-04-17 Thread Stuart Clark

On 12/04/2022 23:06, John Grieb wrote:
I am backfilling a month's worth (March 1st to 31st, 2022) of Zabbix 
trend data (hourly avg values) for a single metric (gauge) with a 
single label (Hostname). There are 746 datapoints in my OpenMetrics 
file which I'm converting to TSDB format using the command:


promtool tsdb create-blocks-from openmetrics 30030360463_history.txt

When I move the data into the Prometheus storage directory, the first 
15 days and 17 hours of data are removed for some reason. Can anyone 
tell me why and what I have to do to keep all the data?


What have you set your Prometheus retention period to? By default it is 
15 days.
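
If the goal is to keep the whole month, the retention can be raised when 
starting Prometheus; a minimal example (90d is just an illustrative value, 
not a recommendation):

prometheus --config.file=prometheus.yml --storage.tsdb.retention.time=90d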


--
Stuart Clark



[prometheus-users] Migrate old time series to new time series in prometheus database

2022-04-17 Thread Amin Borjian
Hi. 

We are looking to rename some of our metrics (metric X -> Y). However, 
we do not want to lose the old data that has already been collected and stored 
by Prometheus in the TSDB; the history of the metric is important to us.

We are looking at the following approach:
1) Change the metric name, so that from now on it is stored under the new 
name (a configuration sketch for this step is shown below).
2) After the above change, rename the old metric in the Prometheus database 
as well.
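
For step 1, one common way to store the metric under a new name from now on 
is a metric_relabel_configs rule applied at scrape time; a minimal sketch, 
assuming hypothetical names old_metric_x and new_metric_y and a hypothetical 
target:

scrape_configs:
  - job_name: example
    static_configs:
      - targets: ['localhost:9100']   # hypothetical target
    metric_relabel_configs:
      # Rewrite the metric name before the samples are stored
      - source_labels: [__name__]
        regex: old_metric_x
        target_label: __name__
        replacement: new_metric_y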

But we could not find a way to do the second part. Is there really no way to 
do this in Prometheus? If not, what should we do in our scenario?

Thank you for your help.



[prometheus-users] Prometheus container restarts while fetching last 24hrs metrics

2022-04-17 Thread nbada...@gmail.com
Hi guys,

I have been using Prometheus for a few years with almost no issues, but 
recently I am seeing performance problems when running queries from a 
Grafana dashboard.

The Prometheus container handles 12 hours of metrics without any issues, 
but if I extend the range to 24 hours of metrics it gets restarted with the 
following info (sorry for the long trail...).

Please see the few manually run queries at the end of the logs for 
reference.

level=info ts=2022-04-13T05:15:17.798Z caller=main.go:851 
fs_type=EXT4_SUPER_MAGIC
level=info ts=2022-04-13T05:15:17.798Z caller=main.go:854 msg="TSDB started"
level=info ts=2022-04-13T05:15:17.798Z caller=main.go:981 msg="Loading 
configuration file" filename=/etc/prometheus/prometheus.yml
level=info ts=2022-04-13T05:15:17.804Z caller=main.go:1012 msg="Completed 
loading of configuration file" filename=/etc/prometheus/prometheus.yml 
totalDuration=5.955772ms remote_storage=1.608µs web_handler=397ns 
query_engine=1.065µs scrape=569.244µs scrape_sd=637.764µs notify=24.31µs 
notify_sd=18.627µs rules=1.984671ms
level=info ts=2022-04-13T05:15:17.804Z caller=main.go:796 msg="Server is 
ready to receive web requests."
level=warn ts=2022-04-13T06:12:38.025Z caller=main.go:378 
deprecation_notice="'storage.tsdb.retention' flag is deprecated use 
'storage.tsdb.retention.time' instead."
level=info ts=2022-04-13T06:12:38.025Z caller=main.go:443 msg="Starting 
Prometheus" version="(version=2.28.1, branch=HEAD, 
revision=b0944590a1c9a6b35dc5a696869f75f422b107a1)"
level=info ts=2022-04-13T06:12:38.025Z caller=main.go:448 
build_context="(go=go1.16.5, user=x, date=20210701-15:20:10)"
level=info ts=2022-04-13T06:12:38.025Z caller=main.go:449 
host_details="(Linux 4.14.262-200.489.amzn2.x86_64 #1 SMP Fri Feb 4 
20:34:30 UTC 2022 x86_64  (none))"
level=info ts=2022-04-13T06:12:38.025Z caller=main.go:450 
fd_limits="(soft=32768, hard=65536)"
level=info ts=2022-04-13T06:12:38.025Z caller=main.go:451 
vm_limits="(soft=unlimited, hard=unlimited)"
level=info ts=2022-04-13T06:12:38.026Z caller=query_logger.go:79 component=activeQueryTracker msg="These queries didn't finish in prometheus' last run:" queries="[{\"query\":\"count(proftpd_cpu_usage{process=~\\\".*proftpd.*\\\",job=\\\"sftp_top_prod\\\",app!=\\\"-c\\\",app!=\\\"\\u003cdefunct\\u003e\\\",app!=\\\"connected:\\\"}) by (app) \\u003e 5\",\"timestamp_sec\":1649830310},{\"query\":\"count(proftpd_cpu_usage{job=\\\"sftp_top_prod\\\",process=~\\\".*proftpd.*\\\",clientip!=\\\"-nd\\\", app!=\\\"\\u003cdefunct\\u003e\\\"}) by (clientip,app) \\u003e 5\",\"timestamp_sec\":1649830316},{\"query\":\"count(proftpd_cpu_usage{process=~\\\".*proftpd.*\\\",job=\\\"sftp_top_prod\\\"})\",\"timestamp_sec\":1649830310},{\"query\":\"count(proftpd_cpu_usage{process=~\\\".*proftpd.*\\\",job=\\\"sftp_top_prod\\\"}) by (instance)\",\"timestamp_sec\":1649830310},{\"query\":\"count(proftpd_cpu_usage{job=\\\"sftp_top_prod\\\", process=~\\\".*proftpd.*\\\"}) by (clientip) \\u003e 5\",\"timestamp_sec\":1649830311}]"
level=info ts=2022-04-13T06:12:38.028Z caller=web.go:541 component=web 
msg="Start listening for connections" address=0.0.0.0:9090
level=info ts=2022-04-13T06:12:38.029Z caller=main.go:824 msg="Starting 
TSDB ..."
level=info ts=2022-04-13T06:12:38.031Z caller=tls_config.go:191 
component=web msg="TLS is disabled." http2=false
level=info ts=2022-04-13T06:12:38.031Z caller=repair.go:57 component=tsdb 
msg="Found healthy block" mint=164170800 maxt=164229120 
ulid=01FSGDEHH869s5PYJ3HQZ8GEVNx5
level=info ts=2022-04-13T06:12:38.032Z caller=repair.go:57 component=tsdb 
msg="Found healthy block" mint=164229120 maxt=164287440 
ulid=01FT1SMMQZNXxAAEJS6NPDD32Sx3
.
level=warn ts=2022-04-13T06:12:38.048Z caller=db.go:676 component=tsdb 
msg="A TSDB lockfile from a previous execution already existed. It was 
replaced" file=/prometheus/lock
level=info ts=2022-04-13T06:12:40.121Z caller=head.go:780 component=tsdb 
msg="Replaying on-disk memory mappable chunks if any"
level=info ts=2022-04-13T06:12:43.525Z caller=head.go:794 component=tsdb 
msg="On-disk memory mappable chunks replay completed" duration=3.4045956s
level=info ts=2022-04-13T06:12:43.526Z caller=head.go:800 component=tsdb 
msg="Replaying WAL, this may take a while"
level=info ts=2022-04-13T06:12:48.512Z caller=head.go:826 component=tsdb 
msg="WAL checkpoint loaded"
level=info ts=2022-04-13T06:12:49.978Z caller=head.go:854 component=tsdb 
msg="WAL segment loaded" segment=133826 maxSegment=133854

level=info ts=2022-04-13T06:13:33.511Z caller=head.go:860 component=tsdb 
msg="WAL replay completed" checkpoint_replay_duration=4.986518678s 
wal_replay_duration=44.999189293s total_replay_duration=53.390369759s
level=info ts=2022-04-13T06:13:35.315Z caller=main.go:851 
fs_type=EXT4_SUPER_MAGIC
level=info ts=2022-04-13T06:13:35.315Z caller=main.go:854 msg="TSDB started"
level=info ts=2022-04-13T06:13:35.315Z caller=main.go:981 msg="Loading 

[prometheus-users] Backfilling data into Prometheus

2022-04-17 Thread John Grieb
I am backfilling a month's worth (March 1st to 31st, 2022) of Zabbix trend 
data (hourly avg values) for a single metric (gauge) with a single label 
(Hostname). There are 746 datapoints in my OpenMetrics file which I'm 
converting to TSDB format using the command:

promtool tsdb create-blocks-from openmetrics 30030360463_history.txt
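
For reference, the OpenMetrics input has to end with a # EOF marker, and an 
output directory can be passed as a second argument (it defaults to ./data). 
A minimal sketch of the file, with an assumed metric name, host label value, 
and timestamps:

# TYPE zabbix_trend_avg gauge
zabbix_trend_avg{Hostname="host-01"} 42.5 1646092800
zabbix_trend_avg{Hostname="host-01"} 43.1 1646096400
# EOF

promtool tsdb create-blocks-from openmetrics 30030360463_history.txt ./data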

When I move the data into the Prometheus storage directory, the first 15 days 
and 17 hours of data are removed for some reason. Can anyone tell me why 
and what I have to do to keep all the data?

Regards,

John



Re: [prometheus-users] Forced to use the Pushgateway as a workaround?

2022-04-17 Thread Matthias Rampke
If you can, deploy (a) Prometheus into the cluster itself. The easiest way
to manage that is using the Prometheus operator, but if that is not
possible, you can configure it directly using relabeling, as in this
example[0].
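
For illustration, a trimmed sketch of the kind of pod-discovery configuration 
in that example (the annotation-based opt-in shown here is just one common 
pattern, not necessarily what your setup needs):

scrape_configs:
  - job_name: 'kubernetes-pods'
    kubernetes_sd_configs:
      - role: pod
    relabel_configs:
      # Only scrape pods that opt in via the prometheus.io/scrape annotation
      - source_labels: [__meta_kubernetes_pod_annotation_prometheus_io_scrape]
        action: keep
        regex: true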

This Prometheus can scrape the various targets. You have a few options from
there:

You can use this directly, reaching it through the load balancer, or
through a Grafana deployed to the same cluster.

Or use remote write to push to another Prometheus or other metric store. In
this case you can run the in-cluster Prometheus in the pared down agent
mode. This also works if you run e.g. one Prometheus per namespace.
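
A minimal sketch of that variant (the receiving endpoint URL is hypothetical): 
the in-cluster Prometheus gets a remote_write section in its config and is 
started with the agent feature flag.

remote_write:
  - url: http://central-prometheus.example.com/api/v1/write

prometheus --config.file=prometheus.yml --enable-feature=agent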

On the more complex but full featured end, you can use Thanos to tie
multiple servers in multiple clusters together with a long term store.

What is appropriate for you depends on the size of your setup, what you 
want to monitor, and the restrictions that your admins impose. I hope this 
gives you some pointers to discuss with them!

Best,
Matthias




[0]:
https://github.com/prometheus/prometheus/blob/main/documentation/examples/prometheus-kubernetes.yml

On Fri, Apr 15, 2022, 17:19 a...@binoklo.com  wrote:

> Hello,
>
> I'd like to use Prometheus to monitor my (wrapped) k8s services, but
> unfortunately the admin won't let me connect directly to individual pods; I
> can only access them via load balancing.
>
> In this case, I guess I have to use the Pushgateway. It seems to be
> working; however, time series for old pods persist (the "instance" label is a
> random string). I am thinking of creating a program to periodically delete
> them from the Pushgateway.
>
> This is not an ideal situation but I guess this is the best I can do. Or
> am I missing something?
>
> Thanks,
>
> --
> Adriano
>
