[prometheus-users] How to push metrics to the Prometheus api/v1/write endpoint with curl
Here is a set of correct metrics:

    cat metrics.prom
    # HELP http_requests_total The total number of HTTP requests.
    # TYPE http_requests_total counter
    http_requests_total{method="post",code="200"} 1027 1395066363000
    http_requests_total{method="post",code="400"} 3 1395066363000

    cat metrics.prom | promtool check metrics

Then it is supposed to be compressed with snappy, as the documentation says:

    The read and write protocols both use a snappy-compressed protocol buffer encoding over HTTP.

So:

    snzip metrics.prom

and then:

    curl --header "Content-Type: application/openmetrics-text" \
         --header "Content-Encoding: snappy" \
         --request POST \
         --data-binary "@metrics.prom.sz" \
         "http://localhost:9090/api/v1/write"

but unfortunately the result is:

    snappy: corrupt input

Why is it corrupt? snzip -d metrics.prom.sz decompresses it perfectly well.
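Two things go wrong here, and the error message only surfaces the first. Prometheus' remote-write receiver expects raw snappy *block* encoding, while snzip writes a framed container format by default, hence "snappy: corrupt input". But even with the right snappy encoding, the body must be a protobuf-encoded WriteRequest, not OpenMetrics text, so the text file would still be rejected. Below is a minimal Go sketch of a working push, assuming the github.com/prometheus/prometheus/prompb and github.com/golang/snappy packages, and that the receiver is enabled on the server (e.g. --enable-feature=remote-write-receiver, depending on the Prometheus version):

    package main

    import (
        "bytes"
        "log"
        "net/http"
        "time"

        "github.com/golang/snappy"
        "github.com/prometheus/prometheus/prompb"
    )

    func main() {
        // The same sample as in metrics.prom, expressed as a WriteRequest.
        // Remote write carries the metric name as the __name__ label;
        // timestamps are milliseconds since the epoch.
        req := &prompb.WriteRequest{
            Timeseries: []prompb.TimeSeries{{
                Labels: []prompb.Label{
                    {Name: "__name__", Value: "http_requests_total"},
                    {Name: "method", Value: "post"},
                    {Name: "code", Value: "200"},
                },
                Samples: []prompb.Sample{
                    {Value: 1027, Timestamp: time.Now().UnixMilli()},
                },
            }},
        }

        data, err := req.Marshal() // protobuf, not text
        if err != nil {
            log.Fatal(err)
        }
        compressed := snappy.Encode(nil, data) // block format, not framed

        httpReq, err := http.NewRequest(http.MethodPost,
            "http://localhost:9090/api/v1/write", bytes.NewReader(compressed))
        if err != nil {
            log.Fatal(err)
        }
        httpReq.Header.Set("Content-Type", "application/x-protobuf")
        httpReq.Header.Set("Content-Encoding", "snappy")
        httpReq.Header.Set("X-Prometheus-Remote-Write-Version", "0.1.0")

        resp, err := http.DefaultClient.Do(httpReq)
        if err != nil {
            log.Fatal(err)
        }
        defer resp.Body.Close()
        log.Println("status:", resp.Status)
    }

Note also that samples with timestamps far in the past (like 1395066363000, i.e. 2014) would be rejected as out of bounds even with correct encoding, which is why the sketch uses the current time.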
[prometheus-users] Cannot get Thanos data from object storage older than Jun 2021
Even though there is data going back to 01-10-2020 (e.g. 1h-downsampled, but also 5 min):

| 01FABEP0XV9GS77EGYAR97CMDH | 01-10-2020 00:00:00 | 15-10-2020 00:00:00 | 336h0m0s | - | 48,880 | 863,190 | 174,406 | 5 | false | monitor=production,pod=webbackuppod1 | 1h0m0s | compactor |
| 01FBF0C3BQMA4ST2WX2CF03DEG | 15-10-2020 00:00:00 | 29-10-2020 00:00:00 | 336h0m0s | - | 48,224 | 915,661 | 181,497 | 5 | false | monitor=production,pod=webbackuppod1 | 1h0m0s | compactor |
| 01FD014XH4T8F472CW4842TZP5 | 29-10-2020 00:00:00 | 12-11-2020 00:00:00 | 336h0m0s | - | 45,198 | 1,033,391 | 179,508 | 5 | false | monitor=production,pod=webbackuppod1 | 1h0m0s | compactor |
| 01FD1FSRNZVWMP3YCXZZYZ2SD2 | 12-11-2020 00:00:00 | 26-11-2020 00:00:00 | 336h0m0s | - | 50,516 | 1,051,331 | 181,850 | 5 | false | monitor=production,pod=webbackuppod1 | 1h0m0s | compactor |
| 01FD2J95QQG3NWJMQZ1RAZ6Q61 | 26-11-2020 00:00:00 | 10-12-2020 00:00:00 | 336h0m0s | - | 49,333 | 1,053,514 | 181,927 | 5 | false | monitor=production,pod=webbackuppod1 | 1h0m0s | compactor |
| 01FD39X6CQ6H6PMWHN9WBCD3HY | 10-12-2020 00:00:00 | 24-12-2020 00:00:00 | 336h0m0s | - | 47,272 | 1,102,488 | 187,982 | 5 | false | monitor=production,pod=webbackuppod1 | 1h0m0s | compactor |
| 01FD3RKJ51501ZPHATDXBY6NR2 | 24-12-2020 00:00:00 | 07-01-2021 00:00:00 | 336h0m0s | - | 47,251 | 1,115,376 | 188,117 | 5 | false | monitor=production,pod=webbackuppod1 | 1h0m0s | compactor |
| 01FD4HD8K24JF5Q5BRVTW80HEG | 07-01-2021 00:00:00 | 21-01-2021 00:00:00 | 336h0m0s | - | 57,552 | 1,048,922 | 148,963 | 5 | false | monitor=production,pod=webbackuppod1 | 1h0m0s | compactor |
| 01FD5R66WZC0VYKZVAH5RQDVXA | 21-01-2021 00:00:00 | 04-02-2021 00:00:00 | 335h59m59.955s | - | 50,163 | 1,058,576 | 189,189 | 5 | false | monitor=production,pod=webbackuppod1 | 1h0m0s | compactor |
| 01FD5Z68F2B4EV6RE3GE4NT4WR | 04-02-2021 00:00:00 | 18-02-2021 00:00:00 | 335h59m59.955s | - | 64,254 | 1,068,063 | 213,889 | 5 | false | monitor=production,pod=webbackuppod1 | 1h0m0s | compactor |
| 01FD6RF3B26J69PR9YM76JYZ7H | 18-02-2021 00:00:00 | 04-03-2021 00:00:00 | 335h59m59.955s | - | 46,733 | 1,062,276 | 179,469 | 5 | false | monitor=production,pod=webbackuppod1 | 1h0m0s | compactor |
| 01FD77NSZQ179TT9C5HRQKBJM2 | 04-03-2021 00:00:00 | 18-03-2021 00:00:00 | 335h59m59.955s | - | 45,255 | 1,089,270 | 178,929 | 5 | false | monitor=production,pod=webbackuppod1 | 1h0m0s | compactor |
| 01FD7YP2NG2FK2VWNDXVPR3XAH | 18-03-2021 00:00:00 | 01-04-2021 00:00:00 | 335h59m59.955s | - | 52,437 | 1,101,873 | 177,873 | 5 | false | monitor=production,pod=webbackuppod1 | 1h0m0s | compactor |
| 01FD8BXQ4H28R9P6J3C671AV4A | 01-04-2021 00:00:00 | 15-04-2021 00:00:00 | 335h59m59.955s | - | 45,076 | 1,080,213 | 178,812 | 5 | false | monitor=production,pod=webbackuppod1 | 1h0m0s | compactor |
| 01FD8HM0S0ZYE6C7W2C8TCDNDE | 15-04-2021 00:00:00 | 29-04-2021 00:00:00 | 335h59m59.955s | - | 45,452 | 1,095,988 | 179,438 | 5 | false | monitor=production,pod=webbackuppod1 | 1h0m0s | compactor |
| 01FD8S8TNQQRGSBYM920TYTM4R | 29-04-2021 00:00:00 | 13-05-2021 00:00:00 | 335h59m59.955s | - | 50,074 | 1,112,143 | 180,461 | 5 | false | monitor=production,pod=webbackuppod1 | 1h0m0s | compactor |
| 01FD8Z08ANVGKBS5MDY4S4YXGE | 13-05-2021 00:00:00 | 27-05-2021 00:00:00 | 335h59m59.954s | - | 56,630 | 1,108,817 | 187,615 | 5 | false | monitor=production,pod=webbackuppod1 | 1h0m0s | compactor |
| 01FD94BM998DD2DMNXP78R2J10 | 27-05-2021 00:00:00 | 10-06-2021 00:00:00 | 335h59m59.954s | - | 50,917 | 1,112,315 | 183,544 | 5 | false | monitor=production,pod=webbackuppod1 | 1h0m0s | compactor |
| 01FM366WZ8QDKK1EJ8GWCGJM6G | 10-0
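One thing worth checking, given that only downsampled blocks remain for the old ranges: Thanos Query serves downsampled data only when the query's max source resolution allows it (or when the querier runs with --query.auto-downsampling), so a default raw-resolution query over a range whose raw blocks have been deleted can come back empty. A quick way to test this hypothesis is to query the old range while explicitly permitting 1h-downsampled data; a small Go sketch, assuming a Thanos Querier reachable on its default HTTP port 10902:

    package main

    import (
        "fmt"
        "io"
        "log"
        "net/http"
        "net/url"
    )

    func main() {
        params := url.Values{}
        params.Set("query", "up") // hypothetical query; substitute one of your series
        params.Set("start", "2020-10-01T00:00:00Z")
        params.Set("end", "2020-10-15T00:00:00Z")
        params.Set("step", "1h")
        // Thanos-specific parameter: allow blocks downsampled to 1h.
        params.Set("max_source_resolution", "1h")

        resp, err := http.Get("http://localhost:10902/api/v1/query_range?" + params.Encode())
        if err != nil {
            log.Fatal(err)
        }
        defer resp.Body.Close()
        body, _ := io.ReadAll(resp.Body)
        fmt.Println(string(body))
    }

If this returns data while the same query without max_source_resolution does not, the blocks themselves are fine and it is only the resolution selection (or the "Max source resolution" dropdown in the query UI) hiding them.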
[prometheus-users] Re: Alertmanager route match on mute_time_interval
That's a wonderful find, but it is still quite difficult to do proper routing when you need to put, e.g., an on-call schedule for three people into Alertmanager. It gets slightly more complex when it is not a binary choice...

On Wednesday, June 30, 2021 at 4:02:21 AM UTC+2 benri...@gmail.com wrote:

> Glad you managed to resolve your issue xkilian!
>
> Just for some context, the mute_time_intervals mechanism is entirely separate from routes and route matching. All it does is 'silence' the route from sending out alerts in the specified periods.
>
> The pattern you've discovered of using multiple receivers with the continue option is, I believe, the simplest way to achieve what you wanted.
>
> Cheers,
> Ben
>
> On Wednesday, June 30, 2021 at 5:42:01 AM UTC+10 xkilian wrote:
>
>> Ok, I will answer myself.
>>
>> The way to do this is with continue: it permits firing to more than one receiver. I had mistakenly thought that continue would simply continue without firing at the first match. Alertmanager is deceptively powerful. The only missing piece is a bit of UI to manage the list of pager numbers for the on-call rotation that can now be defined in the Alertmanager (offhoursgroup).
>>
>>     # Select receiver based on time of day
>>     - match:
>>         severity: 5
>>       receiver: daygroup
>>       mute_time_intervals: [offhours]
>>       continue: true
>>     - match:
>>         severity: 5
>>       receiver: offhoursgroup
>>       mute_time_intervals: [workhours]
>>
>> Then I just need to manage/automate who is in the offhoursgroup based on the rotation schedule.
>>
>> This would work for my purposes.
>>
>> On Tuesday, June 29, 2021 at 14:14:42 UTC-4, xkilian wrote:
>>
>>> Can a route use a mute_time_interval as a route matcher criterion?
>>>
>>>     routes:
>>>       # Select receiver based on time of day
>>>       - match:
>>>           severity: 5
>>>         receiver: daygroup
>>>         routes:
>>>           - match:
>>>               mute_time_interval: offhours
>>>             receiver: offhoursgroup
>>>
>>> (Not sure the indentation is perfect)
>>>
>>> Thank you,
>>>
>>> xkilian
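For reference, a minimal sketch of the complete configuration this pattern implies. The receiver and interval names (daygroup, offhoursgroup, offhours, workhours) come from the thread; the actual time ranges are invented for illustration, and with Alertmanager 0.22 the named intervals live under a top-level mute_time_intervals: key (renamed time_intervals: in later releases):

    route:
      receiver: daygroup
      routes:
        # Select receiver based on time of day; continue: true lets
        # evaluation fall through to the second route as well.
        - match:
            severity: 5
          receiver: daygroup
          mute_time_intervals: [offhours]
          continue: true
        - match:
            severity: 5
          receiver: offhoursgroup
          mute_time_intervals: [workhours]

    mute_time_intervals:
      - name: workhours
        time_intervals:
          - weekdays: ['monday:friday']
            times:
              - start_time: '08:00'
                end_time: '18:00'
      - name: offhours
        time_intervals:
          - weekdays: ['monday:friday']
            times:
              - start_time: '18:00'
                end_time: '24:00'
              - start_time: '00:00'
                end_time: '08:00'
          - weekdays: ['saturday', 'sunday']

    receivers:
      - name: daygroup
      - name: offhoursgroup

Both routes match severity: 5, and because the two mute intervals are complementary, exactly one of the receivers fires at any given time.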
[prometheus-users] postgres ssl cert monitoring
Hi,

Is there any way to get metrics out of PostgreSQL SSL certs? AFAIK the blackbox exporter cannot do it, because of the way Postgres does its StartTLS-style negotiation.

/Ihor
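The negotiation itself is simple enough to reimplement if a custom check is acceptable: the client opens a TCP connection, sends an 8-byte SSLRequest message (int32 length 8, then the request code 80877103), waits for a single 'S' byte from the server, and then runs an ordinary TLS handshake on the same connection. A minimal Go sketch of reading the certificate expiry this way; the host name is hypothetical:

    package main

    import (
        "crypto/tls"
        "encoding/binary"
        "fmt"
        "log"
        "net"
        "time"
    )

    func main() {
        addr := "db.example.com:5432" // hypothetical Postgres host

        conn, err := net.DialTimeout("tcp", addr, 5*time.Second)
        if err != nil {
            log.Fatal(err)
        }
        defer conn.Close()

        // PostgreSQL SSLRequest: int32 message length (8) followed by
        // the request code 80877103, both in network byte order.
        req := make([]byte, 8)
        binary.BigEndian.PutUint32(req[0:4], 8)
        binary.BigEndian.PutUint32(req[4:8], 80877103)
        if _, err := conn.Write(req); err != nil {
            log.Fatal(err)
        }

        // The server answers with a single byte: 'S' (SSL ok) or 'N'.
        resp := make([]byte, 1)
        if _, err := conn.Read(resp); err != nil {
            log.Fatal(err)
        }
        if resp[0] != 'S' {
            log.Fatal("server does not support SSL")
        }

        // From here on it is a plain TLS handshake. Skipping verification
        // keeps the sketch self-contained; a real check should verify.
        tlsConn := tls.Client(conn, &tls.Config{InsecureSkipVerify: true})
        if err := tlsConn.Handshake(); err != nil {
            log.Fatal(err)
        }
        cert := tlsConn.ConnectionState().PeerCertificates[0]
        fmt.Printf("certificate for %q expires %s (in %s)\n",
            cert.Subject.CommonName, cert.NotAfter,
            time.Until(cert.NotAfter).Round(time.Hour))
    }

The NotAfter value could then be exposed as a gauge (seconds since epoch), mirroring what the blackbox exporter's probe_ssl_earliest_cert_expiry metric reports for protocols it does support.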