Hi Nina,

If you run multiple HA replicas of Prometheus and one of them becomes
unavailable for some reason, queries sent to that broken replica will
indeed fail. To avoid this, you can either load-balance between the
replicas (with dead-backend detection), or use something like Thanos
(https://thanos.io/) to aggregate over multiple HA replicas and
intelligently merge / deduplicate their data, even if one of the replicas
is dead.
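
For illustration, here is a minimal sketch of a Thanos Querier container
spec fronting two replicas. It assumes each Prometheus replica runs a
Thanos sidecar exposing the Store API; the service names, ports, and
image tag are placeholders:

  containers:
  - name: thanos-query
    image: quay.io/thanos/thanos:v0.20.2
    args:
    - query
    - --http-address=0.0.0.0:10902
    - --store=prometheus-sidecar-0:10901   # Store API of replica 0
    - --store=prometheus-sidecar-1:10901   # Store API of replica 1
    - --query.replica-label=replica        # label distinguishing HA replicas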

Regarding data consistency: two HA replicas do not talk to each other
(there is no clustering) and just independently scrape the same targets,
but at slightly different phase offsets, so they will never contain 100%
the same data, just conceptually the same. Thus, if you naively
load-balance between two HA replicas without any further logic, you will
see your graphs (e.g. in Grafana) jump around a tiny bit, depending on
which replica your queries currently hit through the load balancer and
when exactly that replica scraped a given target. But other than that,
you shouldn't really care: both replicas are "correct", so to speak.

For autoscaling on Kubernetes, take a look at the Prometheus Adapter (
https://github.com/kubernetes-sigs/prometheus-adapter), which you can use
together with the Horizontal Pod Autoscaler to autoscale based on
Prometheus metrics.
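
As a rough sketch of how that ties together (the metric name and target
value here are hypothetical and depend on the adapter's rule
configuration):

  apiVersion: autoscaling/v2beta2
  kind: HorizontalPodAutoscaler
  metadata:
    name: my-app
  spec:
    scaleTargetRef:
      apiVersion: apps/v1
      kind: Deployment
      name: my-app
    minReplicas: 2
    maxReplicas: 10
    metrics:
    - type: Pods
      pods:
        metric:
          name: http_requests_per_second   # exposed via an adapter rule
        target:
          type: AverageValue
          averageValue: "100"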

Regards,
Julius

On Fri, Jun 4, 2021 at 9:25 AM nina guo <ninaguo0...@gmail.com> wrote:

> Thank you very much.
> If I deploy multiple Prometheus Pods and mount a separate volume to each
> Pod:
> 1. If one of the k8s nodes goes down, is there a chance that a query
> lands on the Pod on the crashed node and therefore fails?
> 2. If multiple Pods are running in the k8s cluster, is there any data
> inconsistency issue? (They scrape the same targets.)
>
> On Friday, June 4, 2021 at 1:40:05 AM UTC+8 juliu...@promlabs.com wrote:
>
>> Hi Nina,
>>
>> No, by default the Prometheus Operator uses an emptyDir for the
>> Prometheus storage, whose contents are lost when the Pod is rescheduled.
>>
>> This explains how to add persistent volumes:
>> https://github.com/prometheus-operator/prometheus-operator/blob/master/Documentation/user-guides/storage.md
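>>
>> A minimal sketch of what that looks like in the Prometheus custom
>> resource, per that guide (the storage class name and size are
>> placeholders):
>>
>>   apiVersion: monitoring.coreos.com/v1
>>   kind: Prometheus
>>   metadata:
>>     name: example
>>   spec:
>>     storage:
>>       volumeClaimTemplate:
>>         spec:
>>           storageClassName: standard
>>           resources:
>>             requests:
>>               storage: 40Gi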
>>
>> Regards,
>> Julius
>>
>> On Thu, Jun 3, 2021 at 9:08 AM nina guo <ninag...@gmail.com> wrote:
>>
>>> Hi,
>>>
>>> If I use the Prometheus Operator to install Prometheus in a k8s
>>> cluster, will the data PV be created automatically or not?
>>>
>>
>>
>> --
>> Julius Volz
>> PromLabs - promlabs.com
>>


-- 
Julius Volz
PromLabs - promlabs.com
