Thank you.
Currently I deployed with separate YAML files, without the Operator or Helm. 
Is that probably not a good way?
I also have a question: once the pods are up, how do I access Prometheus?
The following is the Service YAML file. I tried to access it at 
nodeIP:30000, but it failed.

apiVersion: v1
kind: Service
metadata:
  name: prometheus-service
spec:
  selector:
    app: prometheus-server
  type: NodePort
  ports:
  - port: 8080
    targetPort: 9090
    nodePort: 30000
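
When a NodePort does not answer, the usual suspects are a selector that matches no pods, a firewall blocking the node port, or the wrong port being forwarded. A debugging sketch (assuming the Deployment really labels its pods `app: prometheus-server`, as the selector above expects):

```shell
# Confirm the service exists and shows the 8080:30000/TCP mapping
kubectl get svc prometheus-service -o wide

# An empty ENDPOINTS column here means the selector matches no pods,
# so the NodePort has nothing to forward to
kubectl get endpoints prometheus-service

# Check the pod labels against the service selector
kubectl get pods -l app=prometheus-server --show-labels

# Bypass the NodePort entirely: forward local port 9090 to the
# service's port 8080 (which targets container port 9090)
kubectl port-forward svc/prometheus-service 9090:8080
# then open http://localhost:9090 in a browser
```

If port-forwarding works but nodeIP:30000 does not, the problem is outside the cluster (node firewall or security-group rules), not the Service itself.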

On Friday, April 2, 2021 at 5:15:16 PM UTC+8 sup...@gmail.com wrote:

> You typically only need 2 Prometheus pods for this to work.
>
> The node_exporter is a DaemonSet, it runs on all your nodes.
>
> The alertmanager should also run on 2-3 pods.
>
> This is all managed by the Prometheus Operator.
>
> I recommend reading the documentation for kube-prometheus to get started:
>
> https://github.com/prometheus-operator/kube-prometheus
>
> Or you can deploy it with Helm using the kube-prometheus-stack chart.
>
>
> https://github.com/prometheus-community/helm-charts/tree/main/charts/kube-prometheus-stack
>
> On Fri, Apr 2, 2021 at 10:39 AM nina guo <ninag...@gmail.com> wrote:
>
>> Because I would like to realize high availability. If one is done, the 
>> other can take over.
>>
>> On Friday, April 2, 2021 at 4:35:50 PM UTC+8 sup...@gmail.com wrote:
>>
>>> Why do you think you need a Prometheus cluster? What problem are you 
>>> trying to solve?
>>>
>>> On Fri, Apr 2, 2021 at 10:31 AM nina guo <ninag...@gmail.com> wrote:
>>>
>>>> I'm new on Prometheus : )
>>>> Here comes the following question:
>>>>
>>>> - To deploy a Prometheus cluster, is it better to use the Prometheus 
>>>> Operator? Currently I used separate yaml files (prometheus, alertmanager, 
>>>> node exporter).
>>>> - There is a cluster with 3 nodes:
>>>>     prometheus on 3 pods separately
>>>>     node exporter on 3 pods separately
>>>>     alertmanager on 1 pod
>>>>   Not very sure if the above solution can meet the purpose of high 
>>>>   availability.
>>>> - Regarding the backend storage, which one is better, NFS or block 
>>>>   storage?
>>>>
>>>> Many thanks for your help. 
>>>>
>>>> -- 
>>>> You received this message because you are subscribed to the Google 
>>>> Groups "Prometheus Users" group.
>>>> To unsubscribe from this group and stop receiving emails from it, send 
>>>> an email to prometheus-use...@googlegroups.com.
>>>> To view this discussion on the web visit 
>>>> https://groups.google.com/d/msgid/prometheus-users/077bc05d-43b0-4265-bf2e-60da10d67c12n%40googlegroups.com
>>>>  
>>>> <https://groups.google.com/d/msgid/prometheus-users/077bc05d-43b0-4265-bf2e-60da10d67c12n%40googlegroups.com?utm_medium=email&utm_source=footer>
>>>> .
>>>>

