Re: [prometheus-users] Prometheus metrics where change interval and scrape interval are quite different

2023-01-18 Thread Mark Selby
Thanks very much for taking the time out to reply. Indeed this is an 
example of having a hammer and seeing everything as a nail. I do need a 
system to deal with event data and I will probably go with a Postgres 
solution. Luckily I have other needs for Postgres so this is not as 
heavyweight as it would be just for this use.

On Wednesday, January 18, 2023 at 6:01:30 AM UTC-8 juliu...@promlabs.com 
wrote:

> Hi Mark,
>
> That is indeed not directly possible with PromQL (though you could pull 
> the data out of course), since Prometheus and PromQL are very decidedly 
> about metrics and not about tracking individual events. So you'll either 
> need an event processing system for this, or formulate the problem in a 
> different way so that it works better with metrics. What is it that you 
> want to do based on the data in the end (e.g. alert on some condition)? 
> Maybe there's a better, Prometheus-compatible pattern that we can suggest.
>
> Also, given your current data, is it actually possible for two runs of the 
> same job to produce the same sample value, so you wouldn't even be able to 
> distinguish them anyway?
>
> Regards,
> Julius
>
> On Wed, Jan 18, 2023 at 2:00 PM Mark Selby  wrote:
>
>> I am struggling with a PromQL issue involving a metric that changes less 
>> frequently than the scrape interval. I am trying to use Prometheus as a 
>> pseudo event tracker and am hoping to get some advice on how best to 
>> accomplish my goal.
>>
>> I have a random job that runs at different intervals depending on the 
>> situation. Some instances of the job run every five minutes and some run 
>> only once an hour or once a day. The job creates a node_exporter textfile 
>> snippet that gets scraped on a 30-second interval.
>>
>> Below is an example of a metric that changes only every five minutes while 
>> being scraped every 30 seconds. In this scenario all the points with the 
>> same value are from the same job run. I really only care about one of those.
>>
>> I have no way to know what the interval is between sets for all my 
>> different jobs. All I know is that when the value changes, a new set is in 
>> play.
>>
>> What I want to do is "reduce" my dataset to deal with only distinct 
>> values. I want to collapse these 27 entries into 3 by taking either the 
>> first or last value of each "set".
>>
>> I cannot find a PromQL function/operator that does what I want. Maybe I 
>> need to use recording rules?
>>
>> Any and all help is greatly appreciated.
>>
>> metric_name{instance="hostname.example.net", job="external/generic", 
>> mode="pull", name="snafu"}
>>
>> 9973997301 @1673997343.774  9973997301 @1673997373.764  9973997301 @1673997403.764
>> 9973997301 @1673997433.764  9973997301 @1673997463.764  9973997301 @1673997493.764
>> 9973997301 @1673997523.764  9973997301 @1673997553.764  9973997301 @1673997583.764
>>
>> 9973997601 @1673997613.764  9973997601 @1673997643.764  9973997601 @1673997673.764
>> 9973997601 @1673997703.774  9973997601 @1673997733.764  9973997601 @1673997763.764
>> 9973997601 @1673997793.764  9973997601 @1673997823.764  9973997601 @1673997853.863
>>
>> 9973997901 @1673997913.764  9973997901 @1673997943.767  9973997901 @1673997973.764
>> 9973997901 @1673998003.764  9973997901 @1673998033.764  9973997901 @1673998063.764
>> 9973997901 @1673998093.764  9973997901 @1673998123.764  9973997901 @1673998153.764

[prometheus-users] Prometheus metrics where change interval and scrape interval are quite different

2023-01-18 Thread Mark Selby
I am struggling with a PromQL issue involving a metric that changes less 
frequently than the scrape interval. I am trying to use Prometheus as a 
pseudo event tracker and am hoping to get some advice on how best to 
accomplish my goal.

I have a random job that runs at different intervals depending on the 
situation. Some instances of the job run every five minutes and some run 
only once an hour or once a day. The job creates a node_exporter textfile 
snippet that gets scraped on a 30-second interval.
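
For context, a minimal sketch of what such a textfile snippet might contain, in 
the node_exporter textfile collector exposition format. The file path and HELP 
text here are assumptions; the instance and job labels seen in the output below 
are attached by Prometheus at scrape time rather than written in the file:

    # /var/lib/node_exporter/textfile/generic_job.prom -- path is an assumption;
    # the file must live in the directory passed to --collector.textfile.directory
    # HELP metric_name Value produced by the most recent run of the job.
    # TYPE metric_name gauge
    metric_name{mode="pull",name="snafu"} 9973997301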

Below is an example of a metric that changes only every five minutes while 
being scraped every 30 seconds. In this scenario all the points with the same 
value are from the same job run. I really only care about one of those.

I have no way to know what the interval is between sets for all my different 
jobs. All I know is that when the value changes, a new set is in play.

What I want to do is "reduce" my dataset to deal with only distinct values. 
I want to collapse these 27 entries into 3 by taking either the first or 
last value of each "set".

I cannot find a PromQL function/operator that does what I want. Maybe I 
need to use recording rules?
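
One PromQL building block that gets part of the way there, assuming consecutive 
runs never report the same value, is changes(), which counts how many times a 
series' value changed within the range. It yields a count of sets rather than 
one representative sample per set, but it may be enough if the end goal is 
"how many runs happened" (query sketch):

    # Approximate number of distinct job runs in the last hour
    # (changes() counts transitions, so add 1 for the first set in the window)
    changes(metric_name{name="snafu"}[1h]) + 1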

Any and all help is greatly appreciated.

metric_name{instance="hostname.example.net", job="external/generic", 
mode="pull", name="snafu"}

9973997301 @1673997343.774  9973997301 @1673997373.764  9973997301 @1673997403.764
9973997301 @1673997433.764  9973997301 @1673997463.764  9973997301 @1673997493.764
9973997301 @1673997523.764  9973997301 @1673997553.764  9973997301 @1673997583.764

9973997601 @1673997613.764  9973997601 @1673997643.764  9973997601 @1673997673.764
9973997601 @1673997703.774  9973997601 @1673997733.764  9973997601 @1673997763.764
9973997601 @1673997793.764  9973997601 @1673997823.764  9973997601 @1673997853.863

9973997901 @1673997913.764  9973997901 @1673997943.767  9973997901 @1673997973.764
9973997901 @1673998003.764  9973997901 @1673998033.764  9973997901 @1673998063.764
9973997901 @1673998093.764  9973997901 @1673998123.764  9973997901 @1673998153.764

I have tried many of the PromQL functions/operators to try and reduce my 
sets. The count_values() operator is the closest I have come, but that works 
only with instant vectors, not range vectors.
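
For reference, count_values() aggregates an instant vector only: at a single 
evaluation timestamp it groups the matching series by their current sample 
value, so it cannot look back across a range. A hedged sketch (the output 
label name "run_value" is arbitrary):

    # One output series per distinct current value, labelled run_value="...";
    # its sample is how many input series carry that value at evaluation time.
    count_values("run_value", metric_name)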



[prometheus-users] Time series with change interval much greater than scrape interval

2023-01-18 Thread Mark Selby
I am struggling with a PromQL issue involving a metric that changes less
frequently than the scrape interval. I am trying to use Prometheus as a
pseudo event tracker and am hoping to get some advice on how best to
accomplish my goal.

I have a random job that runs at different intervals depending on the
situation. Some instances of the job run every five minutes and some run
only once an hour or once a day. The job creates a node_exporter
textfile snippet that gets scraped on a 30-second interval.

Below is an example of a metric that changes only every five minutes while
being scraped every 30 seconds. In this scenario all the points with the
same value are from the same job run. I really only care about one of those.

I have no way to know what the interval is between sets for all my
different jobs. All I know is that when the value changes, a new set is
in play.

What I want to do is "reduce" my dataset to deal with only distinct
values. I want to collapse the 27 entries below into 3 by taking either
the first or last value of each "set".

I cannot find a PromQL function/operator that does what I want. Maybe I
need to use recording rules?
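
If recording rules end up being the route, a minimal sketch could look like the
following (group name, evaluation interval, and output metric name are all
assumptions). The caveat is that an interval of 5m only matches the 5-minute
jobs; it thins the data to one sample per evaluation but does not by itself
pick exactly one point per distinct value:

    # rules file referenced from rule_files: in prometheus.yml
    groups:
      - name: generic-job-dedup          # group name is an assumption
        interval: 5m                     # only right for the 5-minute jobs
        rules:
          - record: job:metric_name:last
            expr: last_over_time(metric_name{name="snafu"}[5m])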

Any and all help is greatly appreciated.

metric_name{instance="hostname.example.net", job="external/generic", 
mode="pull", name="snafu"}

9973997301 @1673997343.774  9973997301 @1673997373.764  9973997301 @1673997403.764
9973997301 @1673997433.764  9973997301 @1673997463.764  9973997301 @1673997493.764
9973997301 @1673997523.764  9973997301 @1673997553.764  9973997301 @1673997583.764

9973997601 @1673997613.764  9973997601 @1673997643.764  9973997601 @1673997673.764
9973997601 @1673997703.774  9973997601 @1673997733.764  9973997601 @1673997763.764
9973997601 @1673997793.764  9973997601 @1673997823.764  9973997601 @1673997853.863

9973997901 @1673997913.764  9973997901 @1673997943.767  9973997901 @1673997973.764
9973997901 @1673998003.764  9973997901 @1673998033.764  9973997901 @1673998063.764
9973997901 @1673998093.764  9973997901 @1673998123.764  9973997901 @1673998153.764
