>
> On Wednesday, 3 May 2023 at 21:51:53 UTC+1 Brian Candler wrote:
>
>> Is it a CounterVec you're using?
>>
>> If so, I think that v.With(labels...) should be sufficient to initialise
>> it.
>>
>> On Wednesday, 3 May 2023 at 20:52:57 UTC+1 Johny wrote:
> You'll have to initialise the counters explicitly. If the first time the
> client library knows about the counter is when you increment it, then
> clearly it won't be able to export it until then.
>
> On Wednesday, 3 May 2023 at 20:19:27 UTC+1 Johny wrote:
>
>> I need to sum tw
I need to sum two separate counter metrics capturing request failures to
compute the ratio of failed requests for an alerting signal. The code
initializing and setting these counters sits in separate modules, preventing
reuse of a single counter.
The problem is when one of the counters is never
Also, all the problems are in the DBs (backend Prometheus), not the front end.
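One way to combine the two failure counters at query time is to sum their
rates and divide by the overall request rate. The metric names below are
placeholders, not from this thread; `or vector(0)` is a common guard for the
case where one counter has never been initialized and so exports nothing:

```promql
(
    (sum(rate(module_a_failures_total[5m])) or vector(0))
  +
    (sum(rate(module_b_failures_total[5m])) or vector(0))
)
/
sum(rate(requests_total[5m]))
```

Without the `or vector(0)` guard, a missing counter makes the whole sum
return no data rather than treating the missing side as zero.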
On Wednesday, April 5, 2023 at 4:50:39 PM UTC-4 Johny wrote:
> Prometheus version is 2.39.1
>
> There are many users and some legacy clients that add friction to changing
> queries across the board.
> D
ue"*} from your queries? After all,
> you're already thinking about removing this label at ingestion time, and if
> you do that, you won't be able to filter on it anyway.
>
> On Wednesday, 5 April 2023 at 18:50:02 UTC+1 Johny wrote:
>
>> The count of time series/metric fo
>> my_series{l2=".."}
>> my_series{l1="..", l2=".."}
>> should perform almost identically, as they will select the same subset of
>> timeseries.
>>
>> On Wednesday, 5 April 2023 at 17:42:33 UTC+1 Johny wrote:
>>
>>> There is a perf
_read subsection, or elsewhere?
thanks,
Johny
--
You received this message because you are subscribed to the Google Groups
"Prometheus Users" group.
To unsubscribe from this group and stop receiving emails from it, send an email
to prometheus-users+unsubscr...@googlegroups.com.
On Monday, 3 April 2023 at 15:14:49 UTC+1 Johny wrote:
>
>> In terms of samples fetched from the DB which is a cost limiting factor
>> in our set up, what is the overhead of rate() compared to increase(). Based
>> on my reading so far, rate() requires all data points within th
Johny
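On the samples-fetched question: rate() and increase() read exactly the same
samples for a given range selector, so neither is cheaper in data fetched
from the backend; increase() is essentially rate() scaled by the range
width in seconds. A sketch (metric name is a placeholder):

```promql
increase(http_requests_total[5m])
# is equivalent (including extrapolation behaviour) to:
rate(http_requests_total[5m]) * 300
```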
experimentally, I'm able to read data from downstream Prometheus instances
with 3-4 levels between them. For example, consider this for simplicity
PromA -remote-read-> PromB -remote-read-> ExternalStorage
I am able to get data points in PromA that are fetched remotely from
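A chained setup like the one above can be sketched in PromA's configuration,
where each hop simply declares the next Prometheus as a remote_read
endpoint (the URL below is a placeholder):

```yaml
remote_read:
  - url: http://promb.example:9090/api/v1/read
    read_recent: true
```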
I have a peculiar use case where a large number of time series are
published with a shared label set, which causes overhead at query time when
the filter includes the shared label value, e.g.
my_metric1{l1="fixed",l2="..",l3=".."}
my_metric2{l1="fixed",l2="..",l3=".."}
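If the shared label carries no information (l1 is always "fixed" in the
example above), it can be dropped at ingestion time with
metric_relabel_configs on the scrape job; a sketch assuming the label name
from the example:

```yaml
metric_relabel_configs:
  - action: labeldrop
    regex: l1
```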
On Sun, Aug 7, 2022 at 9:46 PM Ben Kochie wrote:
>
>> Right, but more basic, how do you get this information from the
>> application right now? Are you reading logs? Does it emit statsd data?
>>
>> You're saying what, but not how.
>>
>> On Sun, Aug 7, 2022 a
Prometheus?
On Sunday, August 7, 2022 at 2:18:42 PM UTC-4 Stuart Clark wrote:
> On 07/08/2022 18:14, Johny wrote:
> > Gauge contains most recent values of a metric, sampled every 1 min or
> > so, and exported by a user application, e.g. some latency sampled at 1
> > minute
data to create these histograms?
>
> On Sun, Aug 7, 2022 at 9:23 AM Johny wrote:
>
>> We are migrating telemetry backend from legacy database to Prometheus and
>> require estimating percentiles on gauge metrics published by user
>> applications. Estimating percentiles on a gaug
We are migrating our telemetry backend from a legacy database to Prometheus
and need to estimate percentiles on gauge metrics published by user
applications. Estimating percentiles on a gauge metric in Prometheus is not
feasible, and for a number of reasons, client applications will be difficult
to
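The usual Prometheus-native alternative is to have applications export a
histogram instead of a raw gauge and estimate percentiles server-side; a
sketch with a hypothetical latency histogram:

```promql
histogram_quantile(0.95, sum by (le) (rate(request_latency_seconds_bucket[5m])))
```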
wrote:
> If you're producing metrics into the future, I would recommend using the
> remote write receiver model, rather than the scrape model. It handles this
> kind of use case much more easily.
>
> On Tue, Jun 28, 2022 at 6:28 PM Johny wrote:
>
>> we are int
we are integrating a legacy system, that allows storing metrics with
future timestamps, into our centralized prometheus instance. The use case
here is that the forecast series is generated from external machine
learning models and used for capacity planning, anomaly detection and
stress
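The remote-write-receiver route suggested above can be sketched as follows:
start Prometheus with the receiver enabled (the flag exists in recent
versions) and push samples, future timestamps included, straight to the
write endpoint:

```
prometheus --web.enable-remote-write-receiver
# samples are then POSTed (snappy-compressed protobuf, remote-write format) to:
#   http://<prometheus>:9090/api/v1/write
```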
thanks!
On Wednesday, June 1, 2022 at 10:48:12 AM UTC-4 Julien Pivotto wrote:
> If follow_redirect is true, it should work. This is the default value.
>
> Le mer. 1 juin 2022, 16:35, Johny a écrit :
>
>> We use a custom remote read adapter for reading time series data fr
We use a custom remote read adapter for reading time series data from our
storage backend. In process of transitioning to different backend, I want
to temporarily redirect selected remote read requests (filter on date) to
the new remote read adapter. Does Prometheus remote read support
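As far as I know, remote_read cannot route on date ranges directly, but it
can route on label matchers via required_matchers; a sketch that sends only
selectors carrying a chosen label to the new adapter (URLs and the label
are placeholders):

```yaml
remote_read:
  - url: http://old-adapter.example/read
  - url: http://new-adapter.example/read
    required_matchers:
      backend: "new"
```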
When time series label values are regularly substituted for new ones, the
old series becomes inactive, receiving no new data points. Keeping the
ingestion rate constant, what is the performance impact of a high churn rate
on ingestion, querying, compaction, and other operations?
For example,
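One way to observe churn on a running server is to watch series creation
against the active head series count; both of these are standard TSDB
metrics exposed by Prometheus itself:

```promql
rate(prometheus_tsdb_head_series_created_total[5m])
prometheus_tsdb_head_series
```

A sustained creation rate that is high relative to the active series count
indicates heavy churn.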
This label is set to 'true' on all master-eligible nodes and it gets the
metrics for just the master nodes, not for other nodes.
On Sunday, September 12, 2021 at 8:50:01 PM UTC-4
san...@showmethesource.org wrote:
>
>
> On 10.09.21 21:01, Johny wrote:
> > I am planning to run
I am setting up monitoring for Elasticsearch using Prometheus. I am
running into one hurdle and need some help making a design decision.
I am planning to run the metric exporter on all master-eligible nodes in
Elasticsearch (3) for resiliency, and also because masters have low
That's odd. I confirmed remote read is returning future data points to
Prometheus, but I can't see the data points in the Prom UI. The series is
truncated to 'now'.
On Monday, May 10, 2021 at 7:44:20 AM UTC-4 Julien Pivotto wrote:
> On 07 May 09:42, Johny wrote:
> > Hi,
> > Could
Hi,
Could you please look into this? I just want to understand PromQL behavior
with future timestamps.
On Monday, May 3, 2021 at 1:47:52 AM UTC-4 Johny wrote:
> Prometheus remote read -
> https://prometheus.io/docs/prometheus/latest/configuration/configuration/#remote_read
> (
On Sunday, May 2, 2021 at 5:01:44 PM UTC-4 Julien Pivotto wrote:
> On 02 May 12:57, Johny wrote:
> > I am reading time series from a remote backend via Open Telemetry remote
> > read API. It works for most cases except future time series (data points
> in
> > future). I can see the da
I am reading time series from a remote backend via Open Telemetry remote
read API. It works for most cases except future time series (data points in
future). I can see the data points (+1 or 2 years from now) are being
returned from remote backend but Prometheus is not rendering them to users.
Question on Cortex (I did not find a separate forum).
There is a basic helm chart for cortex deployment on kubernetes here
- https://github.com/cortexproject/cortex-helm-chart
However, there is little to no documentation on the template variables, and
it's difficult to configure the deployment
Thanks.
On Wednesday, February 17, 2021 at 4:00:59 PM UTC-5 Stuart Clark wrote:
> On 17/02/2021 20:20, Johny wrote:
> > Thanks. This is helpful.
> >
> > From performance standpoint, is there a difference in having a 1
> > metric with 100x cardinality vs 10 m
Thanks. This is helpful.
From a performance standpoint, is there a difference between having 1 metric
with 100x cardinality and 10 metrics with 10x cardinality?
On Wednesday, February 17, 2021 at 2:18:06 PM UTC-5 Stuart Clark wrote:
> On 17/02/2021 18:48, Johny wrote:
> > Is t
Is there a limit on the number of time series per metric (cardinality)? My
metric has cardinality of 100,000 with a tag for each process in my
infrastructure. I was wondering if this causes performance or other issues.
I couldn't find official guidance on this.
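There is no hard per-metric series limit in Prometheus; memory and query
cost simply grow with active series. A quick way to measure a metric's
cardinality (metric name is a placeholder; the second query scans every
series, so it is expensive on large servers):

```promql
count(my_metric)
# or top offenders across the whole server:
topk(10, count by (__name__) ({__name__=~".+"}))
```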
I am using 2.17.2. Is 'read_recent' supposed to work for such cases?
Basically I want to disable reading from the local DB; all time series should
be fetched from the remote backend.
On Thursday, February 11, 2021 at 2:41:05 PM UTC-5 Julien Pivotto wrote:
> On 11 Feb 10:37, Johny wrote:
>
in the
remote read store that's causing duplicates in my case.
thanks
Johny
On Sunday, January 31, 2021 at 2:38:41 PM UTC-5 juliu...@promlabs.com wrote:
> On Sun, Jan 31, 2021 at 7:08 PM Johny wrote:
>
>> Please note that I am doing a remote read to fetch metrics in my rules.
>> Per my metrics, there are no issues with consistency in remote reads.
>
>
> Oh, that's inte
Please note that I am doing a remote read to fetch metrics in my rules. Per
my metrics, there are no issues with consistency in remote reads.
On Sunday, January 31, 2021 at 1:04:08 PM UTC-5 Johny wrote:
> Unfortunately, I am still seeing the issue even after removing all the
> filters!
>
causing this behavior?
On Friday, January 29, 2021 at 5:04:10 PM UTC-5 Julien Pivotto wrote:
>
> Johny,
>
> Do you have new lines in your label value?
> Can you try:
>
> label4=~"(?s:.+)"
>
> Are you using the latest release?
>
> Regards,
>
>
clearly have label4 values. Why is
this failing then?
On Friday, January 29, 2021 at 3:17:29 PM UTC-5 Johny wrote:
> I'm having a consistent problem wherein some recorded metrics are missing
> randomly.
>
> The underlying expression evaluates to around 50-100 time series. When I
> filter t
I'm having a consistent problem wherein some recorded metrics are missing
randomly.
The underlying expression evaluates to around 50-100 time series. When I
filter the expression to produce a smaller number of time series (5-10),
the recorded metrics are available and I don't see the issue.
Yes, sorry, I meant labels, e.g. metric_host="193.44".
If you're talking about labels, then you could:
- modify the exporter to work in the way you want it to (for example, add a
new label saying what data centre it is running in)
> It's a third-party exporter and it's not feasible to modify it.
I have exporters for some components, such as Redis, that give IP addresses
in values. I need to be able to map IP addresses to actual host names for my
query and alert conditions, e.g. to verify that the Redis master and slave
are in different data centers.
How can I fetch metric in prometheus to map a
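A common pattern for this kind of mapping is joining against a separately
scraped 'info' metric (value 1) that carries the hostname as a label; the
metric and label names below are hypothetical:

```promql
redis_master_link_status
  * on (instance) group_left (hostname)
    host_info{job="host-metadata"}
```

Multiplying by the value-1 info metric preserves the left-hand value while
group_left copies the hostname label onto the result.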
> before they are deleted.
>
> You probably just need to wait for the maxTime on the oldest block to
> expire. Look in the meta.json in the TSDB block directories.
>
> On Sun, Sep 13, 2020 at 3:35 AM Johny wrote:
>
>> I am reducing data retention from 20 days to 10 days in
I am reducing data retention from 20 days to 10 days on my Prometheus nodes
(v2.17). When I change storage.tsdb.retention.time to 10d and restart
my instances, data older than 10 days is not deleted. Is there a
command to force cleanup?
In general, what is best practice to delete
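For forced cleanup there is the TSDB admin API, which must be enabled
explicitly; it deletes matching series in a time range and then compacts
the tombstones away (the matcher and end time below are illustrative):

```
# start Prometheus with: --web.enable-admin-api
curl -X POST 'http://localhost:9090/api/v1/admin/tsdb/delete_series?match[]={__name__=~".+"}&end=2020-09-03T00:00:00Z'
curl -X POST 'http://localhost:9090/api/v1/admin/tsdb/clean_tombstones'
```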
I have two Prometheus instances scraping the same target for resiliency.
I recently found this is causing an issue with counters. At times, the API
instance (which remote-reads all Prom stores) shows a decrease in counter
values. I think it's because of a race condition between the two instances:
if one instance
I cannot change the configuration of the PD-specific AM as it's owned by a
different team. My objective is just to send PD alerts (severity=pager) to it.
The default AM owned by us gets all alerts.
Also, if I drop the PD alert with a relabel config, won't it get dropped
entirely and not routed to any
I have a requirement to route only PagerDuty alerts to a specific
Alertmanager mesh due to a proxy setup. All other alerts are sent to the
default Alertmanager. Is it possible to do this conditional routing in the
Prometheus alerting configuration?
Alternatively, is it possible to
I just realized the issue is with counter decreasing at times and losing
monotonicity. Thanks.
There is one data point in the series where rate/irate/increase gives an
unexpected spike. The underlying series has 2 points with a small increase.
timestamp   series value   (irate = irate(series[1m]))
12:15:00    300,000
12:14:58    400,000
12:14:50
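A decrease like 400,000 to 300,000 is interpreted by rate/irate/increase as
a counter reset: the functions assume the counter restarted at zero and
climbed back to 300,000 within two seconds, which produces the spike. Resets
can be confirmed directly:

```promql
resets(series[1h])   # > 0 wherever the counter decreased inside the window
```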
In this case, I am doing a remote read.
I have an alert condition based on the count of metric values > 0 in the
last 10 minutes being below a certain threshold:
count_over_time((mymetric{..} > 0)[10m:1m]) > x
The problem with this query is that it evaluates every minute over the last
10 minutes and returns a total of 5 even if there was a single data point in
the DB, because of
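The re-counting happens because each 1m subquery step re-evaluates the
instant selector, which keeps returning the last sample until it goes stale
(5 minutes by default). If the intent is to count raw samples, a plain range
selector avoids that, at the cost of losing the > 0 filter (which over raw
samples would need a recording rule):

```promql
count_over_time((mymetric > 0)[10m:1m])  # counts one raw point once per step while it is still fresh
count_over_time(mymetric[10m])           # counts actual stored samples in the 10m window
```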
a changes over whatever time period you are viewing.
>
> On 24 June 2020 16:14:03 BST, Johny wrote:
>>
>> Thank you. That makes sense.
>>
>> More of a Grafana question: is there a way to plot actual data points in a
>> graph (not a table)? I want users to s
Thanks. But how do you explain the 5 minute batches?
On Wednesday, June 24, 2020 at 6:20:53 AM UTC-4, Brian Candler wrote:
>
> I can explain it. query_range runs an instant query across the range, at
> the interval you specify. The value at time T is whatever the most recent
> value was
The HTTP API called by Grafana is showing data that is hard to explain.
The step size sent from Grafana is 15 seconds.
I only have two points in the underlying time series data:
10:01:14  5.5
10:31:30  10.4
When I query a time range (query_range) inclusive of these points, I get
two 5-minute batches
I am actually using version 2.17.2 now. I would appreciate it if you could
inform me of any such issues in this version.
On Saturday, June 20, 2020 at 12:08:44 PM UTC-4, Johny wrote:
>
> Thanks everyone. Upgrading to Prometheus 2.18.2 fixes the issue.
>
>
> On Saturday, June 20, 2020 at 11
Thanks everyone. Upgrading to Prometheus 2.18.2 fixes the issue.
On Saturday, June 20, 2020 at 11:42:37 AM UTC-4, Christian Hoffmann wrote:
>
> On 6/20/20 5:31 PM, Johny wrote:
> > If it is non-compliant endpoint, the problem should appear in both
> > versions, isn't it? It
for "my_metric" from a 2.18 Prometheus, I don't see
>> this problem.
>>
>> Would it be feasible to share a minimal /metrics endpoint example that
>> reproduces this behavior in 2.18?
>>
>> On Sat, Jun 20, 2020 at 3:55 AM Johny >
>> wrote:
>>
>
The new version is 2.18.1.
On Saturday, June 20, 2020 at 4:48:35 AM UTC-4, Brian Candler wrote:
>
> Which exact version? Have you tried 2.18.2? There were some bugs fixed
> between 2.18.0 and 2.18.2.
>
I recently upgraded Prometheus from version 2.4 to 2.18. It seems there is
some incompatibility in the API. When I query a metric, I get the same value
across all time series in 2.18.
e.g.
query: my_metric
2.4 API:
Element   Value