Re: [ClusterLabs] [EXTERNE] Re: Centreon HA Cluster - VIP issue

2023-09-15 Thread Adil BOUAZZAOUI
Hi Ken,

Any update please?

The idea is clear; I just need some more information about this two-cluster
setup:

1. Arbitrator:
1.1. Only one arbitrator is needed for everything: should I use the quorum
device described in the official Centreon documentation, or should I use the
booth ticket manager instead?
1.2. Is fencing configured separately, or is it configured during the booth
ticket manager installation?

2. Floating IP:
2.1. It doesn't hurt if both floating IPs are running at the same time, right?

3. Failover:
3.1. How do we update the DNS to point to the appropriate IP?
3.2. We're running our own DNS servers, so how do we configure a booth ticket
for just the DNS resource?

4. MariaDB replication:
4.1. How can the Centreon MariaDB database replicate between the 2 clusters?

5. Centreon:
5.1. Will this setup (2 clusters, 2 floating IPs, 1 booth ticket manager) work
for our Centreon project?



Regards
Adil Bouazzaoui


Adil BOUAZZAOUI
Infrastructure & Technology Engineer
GSM: +212 703 165 758
E-mail: adil.bouazza...@tmandis.ma


-Original Message-
From: Adil BOUAZZAOUI
Sent: Friday, September 8, 2023 5:15 PM
To: Ken Gaillot; Adil Bouazzaoui
Cc: Cluster Labs - All topics related to open-source clustering welcomed

Subject: RE: [EXTERNE] Re: [ClusterLabs] Centreon HA Cluster - VIP issue

Hi Ken,

Thank you for the update and the clarification.
The idea is clear; I just need some more information about this two-cluster
setup:

1. Arbitrator:
1.1. Only one arbitrator is needed for everything: should I use the quorum
device described in the official Centreon documentation, or should I use the
booth ticket manager instead?
1.2. Is fencing configured separately, or is it configured during the booth
ticket manager installation?

2. Floating IP:
2.1. It doesn't hurt if both floating IPs are running at the same time, right?

3. Failover:
3.1. How do we update the DNS to point to the appropriate IP?
3.2. We're running our own DNS servers, so how do we configure a booth ticket
for just the DNS resource?

4. MariaDB replication:
4.1. How can the Centreon MariaDB database replicate between the 2 clusters?

5. Centreon:
5.1. Will this setup (2 clusters, 2 floating IPs, 1 booth ticket manager) work
for our Centreon project?



Regards
Adil Bouazzaoui


Adil BOUAZZAOUI
Infrastructure & Technology Engineer
GSM: +212 703 165 758
E-mail: adil.bouazza...@tmandis.ma


-Original Message-
From: Ken Gaillot [mailto:kgail...@redhat.com]
Sent: Tuesday, September 5, 2023 10:00 PM
To: Adil Bouazzaoui
Cc: Cluster Labs - All topics related to open-source clustering welcomed; Adil BOUAZZAOUI
Subject: [EXTERNE] Re: [ClusterLabs] Centreon HA Cluster - VIP issue

On Tue, 2023-09-05 at 21:13 +0100, Adil Bouazzaoui wrote:
> Hi Ken,
> 
> thank you a big time for the feedback; much appreciated.
> 
> I suppose we go with a new Scenario 3: Setup 2 Clusters across 
> different DCs connected by booth; so could you please clarify below 
> points to me so i can understand better and start working on the
> architecture:
> 
> 1- in case of separate clusters connected by booth: should each 
> cluster have a quorum device for the Master/slave elections?

Hi,

Only one arbitrator is needed for everything.

Since each cluster in this case has two nodes, Corosync will use the "two_node" 
configuration to determine quorum. When first starting the cluster, both nodes 
must come up before quorum is obtained. After that, only one node is required
to keep quorum -- which means that fencing is essential to prevent split-brain.
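
As an illustration only (not taken from this thread), the corresponding quorum
section in each two-node cluster's corosync.conf would look roughly like this:

    quorum {
        provider: corosync_votequorum
        two_node: 1
        # two_node implicitly enables wait_for_all, which is why both nodes
        # must be seen once before the cluster first gains quorum.
    }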

> 2- separate floating IPs at each cluster: please check the attached 
> diagram and let me know if this is exactly what you mean?

Yes, that looks good

> 3- To fail over, you update the DNS to point to the appropriate IP:
> can you suggest any guide to work on so we can have the DNS updated 
> automatically?

Unfortunately I don't know of any. If your DNS provider offers an API of some 
kind, you can write a resource agent that uses it. If you're running your own 
DNS servers, the agent has to update the zone files appropriately and reload.
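
As a rough sketch of what the core of such an agent could do -- here using a
dynamic update via nsupdate rather than editing zone files directly; the name,
key file, and addresses below are made-up placeholders:

    #!/bin/sh
    # Hypothetical snippet: repoint the monitoring name at this site's VIP.
    # Assumes the DNS server accepts TSIG-signed dynamic updates (key file below).
    NAME="centreon.example.com."
    NEW_IP="172.30.9.240"
    TTL=60
    printf 'server ns1.example.com\nupdate delete %s A\nupdate add %s %d A %s\nsend\n' \
        "$NAME" "$NAME" "$TTL" "$NEW_IP" | nsupdate -k /etc/centreon-ha/dns-update.key

A real OCF resource agent would wrap this in start/stop/monitor actions and
return the proper OCF exit codes.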

Depending on what your services are, it might be sufficient to use a booth 
ticket for just the DNS resource, and let everything else stay running all the 
time. For example it doesn't hurt anything for both sites' floating IPs to stay 
up.
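
In that case only the DNS-updating resource is tied to the ticket; anything
without a ticket constraint keeps running at both sites. A sketch with pcs
(the resource and ticket names are hypothetical):

    # On each cluster: only the DNS resource follows the booth ticket.
    pcs constraint ticket add centreon-ticket dns-update loss-policy=stop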

> Regards
> Adil Bouazzaoui
> 
> On Tue, Sep 5, 2023 at 16:48, Ken Gaillot wrote:
> > Hi,
> > 
> > The scenario you describe is still a challenging one for HA.
> > 
> > A single cluster requires low latency and reliable communication. A 
> > cluster within a single data center or spanning data centers on the 
> > same campus can be reliable (and appears to be what Centreon has in 
> > mind), but it sounds like you're looking for geographical 
> > redundancy.
> > 
> > A single clu

[ClusterLabs] Centreon 2 Node HA cluster

2023-09-14 Thread Adil Bouazzaoui
Hi Jan,

Any update please?

Sent from my Huawei phone

 Original message 
From: Adil Bouazzaoui
Date: Mon, Sep 4, 2023, 21:28
To: users@clusterlabs.org, jfrie...@redhat.com
Cc: Adil BOUAZZAOUI
Subject: Re: Users Digest, Vol 104, Issue 5

Hi Jan,

To add more information: we deployed a Centreon 2 Node HA Cluster (Master in
DC 1 & Slave in DC 2); the quorum device, which is responsible for handling
split-brain, is in DC 1 too, and the poller, which is responsible for
monitoring, is in DC 1 too. The problem is that a VIP address is required
(attached to the Master node; in case of failover it is moved to the Slave)
and we don't know what VIP we should use. We also don't know the right setup
for our current scenario so that if DC 1 goes down, the Slave in DC 2 becomes
the Master; that's why we don't know where to place the quorum device and the
poller.

I hope to get some ideas so we can set up this cluster correctly.
Thanks in advance.

Adil Bouazzaoui
IT Infrastructure engineer
adil.bouazza...@tmandis.ma
adilb...@gmail.com

On Mon, Sep 4, 2023 at 15:24, <users-requ...@clusterlabs.org> wrote:
Send Users mailing list submissions to
        users@clusterlabs.org

To subscribe or unsubscribe via the World Wide Web, visit
        https://lists.clusterlabs.org/mailman/listinfo/users
or, via email, send a message with subject or body 'help' to
        users-requ...@clusterlabs.org

You can reach the person managing the list at
        users-ow...@clusterlabs.org

When replying, please edit your Subject line so it is more specific
than "Re: Contents of Users digest..."


Today's Topics:

   1. Re: issue during Pacemaker failover testing (Klaus Wenninger)
   2. Re: issue during Pacemaker failover testing (Klaus Wenninger)
   3. Re: issue during Pacemaker failover testing (David Dolan)
   4. Re: Centreon HA Cluster - VIP issue (Jan Friesse)


--

Message: 1
Date: Mon, 4 Sep 2023 14:15:52 +0200
From: Klaus Wenninger <kwenn...@redhat.com>
To: Cluster Labs - All topics related to open-source clustering
        welcomed <users@clusterlabs.org>
Cc: David Dolan <daithido...@gmail.com>
Subject: Re: [ClusterLabs] issue during Pacemaker failover testing
Message-ID:
        wody...@mail.gmail.com>
Content-Type: text/plain; charset="utf-8"

On Mon, Sep 4, 2023 at 1:44 PM Andrei Borzenkov <arvidj...@gmail.com> wrote:

> On Mon, Sep 4, 2023 at 2:25 PM Klaus Wenninger <kwenn...@redhat.com>
> wrote:
> >
> >
> > Or go for qdevice with LMS where I would expect it to be able to really
> > go down to
> > a single node left - any of the 2 last ones - as there is still qdevice.
> > Sry for the confusion btw.
> >
>
> According to documentation, "LMS is also incompatible with quorum
> devices, if last_man_standing is specified in corosync.conf then the
> quorum device will be disabled".
>

That is why I said qdevice with LMS - but it was probably not explicit
enough: I meant the qdevice algorithm, not the corosync flag.
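
For reference, that algorithm is selected on the qdevice side, in the device
section of corosync.conf -- a minimal sketch only, with the qnetd host as a
placeholder:

    quorum {
        provider: corosync_votequorum
        device {
            model: net
            net {
                host: qnetd.example.com
                algorithm: lms
            }
        }
    }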

Klaus

> ___
> Manage your subscription:
> https://lists.clusterlabs.org/mailman/listinfo/users
>
> ClusterLabs home: https://www.clusterlabs.org/
>

--

Message: 2
Date: Mon, 4 Sep 2023 14:32:39 +0200
From: Klaus Wenninger <kwenn...@redhat.com>
To: Cluster Labs - All topics related to open-source clustering
        welcomed <users@clusterlabs.org>
Cc: David Dolan <daithido...@gmail.com>
Subject: Re: [ClusterLabs] issue during Pacemaker failover testing
Message-ID:
        <CALrDAo0V8BXp4AjWCobKeAE6PimvGG2xME6iA+ohxshesx9...@mail.gmail.com>
Content-Type: text/plain; charset="utf-8"

On Mon, Sep 4, 2023 at 1:50 PM Andrei Borzenkov <arvidj...@gmail.com> wrote:

> On Mon, Sep 4, 2023 at 2:18 PM Klaus Wenninger <kwenn...@redhat.com>
> wrote:
> >
> >
> >
> > On Mon, Sep 4, 2023 at 12:45 PM David Dolan <daithido...@gmail.com>
> wrote:
> >>
> >> Hi Klaus,
> >>
> >> With default quorum options I've performed the following on my 3 node
> cluster
> >>
> >> Bring down cluster services on one node - the running services migrate
> to another node
> >> Wait 3 minutes
> >> Bring down cluster services on one of the two remaining nodes - the
> surviving node in the cluster is then fenced
> >>
> >> Instead of the surviving node being fenced, I hoped that the services
> would mig

Re: [ClusterLabs] [EXTERNE] Re: Centreon HA Cluster - VIP issue

2023-09-09 Thread Adil BOUAZZAOUI
Hi Ken,

Thank you for the update and the clarification.
The idea is clear; I just need some more information about this two-cluster
setup:

1. Arbitrator:
1.1. Only one arbitrator is needed for everything: should I use the quorum
device described in the official Centreon documentation, or should I use the
booth ticket manager instead?
1.2. Is fencing configured separately, or is it configured during the booth
ticket manager installation?

2. Floating IP:
2.1. It doesn't hurt if both floating IPs are running at the same time, right?

3. Failover:
3.1. How do we update the DNS to point to the appropriate IP?
3.2. We're running our own DNS servers, so how do we configure a booth ticket
for just the DNS resource?

4. MariaDB replication:
4.1. How can the Centreon MariaDB database replicate between the 2 clusters?

5. Centreon:
5.1. Will this setup (2 clusters, 2 floating IPs, 1 booth ticket manager) work
for our Centreon project?



Regards
Adil Bouazzaoui


Adil BOUAZZAOUI
Infrastructure & Technology Engineer
GSM: +212 703 165 758
E-mail: adil.bouazza...@tmandis.ma


-Original Message-
From: Ken Gaillot [mailto:kgail...@redhat.com]
Sent: Tuesday, September 5, 2023 10:00 PM
To: Adil Bouazzaoui
Cc: Cluster Labs - All topics related to open-source clustering welcomed; Adil BOUAZZAOUI
Subject: [EXTERNE] Re: [ClusterLabs] Centreon HA Cluster - VIP issue

On Tue, 2023-09-05 at 21:13 +0100, Adil Bouazzaoui wrote:
> Hi Ken,
> 
> thank you a big time for the feedback; much appreciated.
> 
> I suppose we go with a new Scenario 3: Setup 2 Clusters across 
> different DCs connected by booth; so could you please clarify below 
> points to me so i can understand better and start working on the
> architecture:
> 
> 1- in case of separate clusters connected by booth: should each 
> cluster have a quorum device for the Master/slave elections?

Hi,

Only one arbitrator is needed for everything.

Since each cluster in this case has two nodes, Corosync will use the "two_node" 
configuration to determine quorum. When first starting the cluster, both nodes 
must come up before quorum is obtained. After that, only one node is required
to keep quorum -- which means that fencing is essential to prevent split-brain.

> 2- separate floating IPs at each cluster: please check the attached 
> diagram and let me know if this is exactly what you mean?

Yes, that looks good

> 3- To fail over, you update the DNS to point to the appropriate IP:
> can you suggest any guide to work on so we can have the DNS updated 
> automatically?

Unfortunately I don't know of any. If your DNS provider offers an API of some 
kind, you can write a resource agent that uses it. If you're running your own 
DNS servers, the agent has to update the zone files appropriately and reload.

Depending on what your services are, it might be sufficient to use a booth 
ticket for just the DNS resource, and let everything else stay running all the 
time. For example it doesn't hurt anything for both sites' floating IPs to stay 
up.

> Regards
> Adil Bouazzaoui
> 
> On Tue, Sep 5, 2023 at 16:48, Ken Gaillot wrote:
> > Hi,
> > 
> > The scenario you describe is still a challenging one for HA.
> > 
> > A single cluster requires low latency and reliable communication. A 
> > cluster within a single data center or spanning data centers on the 
> > same campus can be reliable (and appears to be what Centreon has in 
> > mind), but it sounds like you're looking for geographical 
> > redundancy.
> > 
> > A single cluster isn't appropriate for that. Instead, separate 
> > clusters connected by booth would be preferable. Each cluster would 
> > have its own nodes and fencing. Booth tickets would control which 
> > cluster could run resources.
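
As a sketch of what that can look like with pcs (all addresses and the ticket
name below are placeholders, not taken from this thread):

    # Create the booth configuration once, then copy it to the other site and
    # to the arbitrator:
    pcs booth setup sites 172.30.9.240 172.30.10.240 arbitrators 203.0.113.10
    pcs booth ticket add centreon-ticket

    # On each two-node cluster, run booth under Pacemaker together with that
    # site's address (use 172.30.10.240 at the second site):
    pcs booth create ip 172.30.9.240

    # Grant the ticket to whichever site should be active first:
    pcs booth ticket grant centreon-ticket

Resources that must only run at the active site are then tied to the ticket
with ticket constraints, as in the example earlier in this digest.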
> > 
> > Whatever design you use, it is pointless to put a quorum tie- 
> > breaker at one of the data centers. If that data center becomes 
> > unreachable, the other one can't recover resources. The tie-breaker 
> > (qdevice for a single cluster or a booth arbitrator for multiple 
> > clusters) can be very lightweight, so it can run in a public cloud 
> > for example, if a third site is not available.
> > 
> > The IP issue is separate. For that, you will need separate floating 
> > IPs at each cluster, on that cluster's network. To fail over, you 
> > update the DNS to point to the appropriate IP. That is a tricky 
> > problem without a universal automated solution. Some people update 
> > the DNS manually after being alerted of a failover. You could write 
> > a custom resource agent to update the DNS automatically. Either way 
> > you'll need low TTLs on the relevant records.
> > 
> > On Sun, 2023-09-03 at 11:59 +, Adil BOUAZZAOUI wrote:
> > > Hello,
>

Re: [ClusterLabs] [EXTERNE] Re: Users Digest, Vol 104, Issue 5

2023-09-08 Thread Adil BOUAZZAOUI
Hi Jan,

Any update please?


Regards
Adil Bouazzaoui

Adil BOUAZZAOUI
Infrastructure & Technology Engineer
GSM: +212 703 165 758
E-mail: adil.bouazza...@tmandis.ma


From: Adil BOUAZZAOUI
Sent: Tuesday, September 5, 2023 9:03 AM
To: Klaus Wenninger; Cluster Labs - All topics related to open-source clustering welcomed
Cc: jfrie...@redhat.com
Subject: RE: [EXTERNE] Re: [ClusterLabs] Users Digest, Vol 104, Issue 5

Hi Jan,

This is the correct reply:

To add more information: we deployed a Centreon 2 Node HA Cluster (Master in
DC 1 & Slave in DC 2); the quorum device, which is responsible for handling
split-brain, is in DC 1 too, and the poller, which is responsible for
monitoring, is in DC 1 too. The problem is that a VIP address is required
(attached to the Master node; in case of failover it is moved to the Slave)
and we don't know what VIP we should use. We also don't know the right setup
for our current scenario so that if DC 1 goes down, the Slave in DC 2 becomes
the Master; that's why we don't know where to place the quorum device and the
poller.

I hope to get some ideas so we can set up this cluster correctly.
Thanks in advance.



Regards
Adil Bouazzaoui

Adil BOUAZZAOUI
Infrastructure & Technology Engineer
GSM: +212 703 165 758
E-mail: adil.bouazza...@tmandis.ma


From: Klaus Wenninger [mailto:kwenn...@redhat.com]
Sent: Tuesday, September 5, 2023 7:28 AM
To: Cluster Labs - All topics related to open-source clustering welcomed <users@clusterlabs.org>
Cc: jfrie...@redhat.com; Adil BOUAZZAOUI <adil.bouazza...@tmandis.ma>
Subject: [EXTERNE] Re: [ClusterLabs] Users Digest, Vol 104, Issue 5

Down below you replied to 2 threads. I think the latter is the one you intended 
to ... very confusing ...
Sry for adding more spam - was hesitant - but I think there is a chance it 
removes some confusion ...

Klaus

On Mon, Sep 4, 2023 at 10:29 PM Adil Bouazzaoui <adilb...@gmail.com> wrote:
Hi Jan,

To add more information: we deployed a Centreon 2 Node HA Cluster (Master in
DC 1 & Slave in DC 2); the quorum device, which is responsible for handling
split-brain, is in DC 1 too, and the poller, which is responsible for
monitoring, is in DC 1 too. The problem is that a VIP address is required
(attached to the Master node; in case of failover it is moved to the Slave)
and we don't know what VIP we should use. We also don't know the right setup
for our current scenario so that if DC 1 goes down, the Slave in DC 2 becomes
the Master; that's why we don't know where to place the quorum device and the
poller.

I hope to get some ideas so we can set up this cluster correctly.
Thanks in advance.

Adil Bouazzaoui
IT Infrastructure engineer
adil.bouazza...@tmandis.ma
adilb...@gmail.com

On Mon, Sep 4, 2023 at 15:24, <users-requ...@clusterlabs.org> wrote:
Send Users mailing list submissions to
users@clusterlabs.org

To subscribe or unsubscribe via the World Wide Web, visit
https://lists.clusterlabs.org/mailman/listinfo/users
or, via email, send a message with subject or body 'help' to
users-requ...@clusterlabs.org

You can reach the person managing the list at
users-ow...@clusterlabs.org

When replying, please edit your Subject line so it is more specific
than "Re: Contents of Users digest..."


Today's Topics:

   1. Re: issue during Pacemaker failover testing (Klaus Wenninger)
   2. Re: issue during Pacemaker failover testing (Klaus Wenninger)
   3. Re: issue during Pacemaker failover testing (David Dolan)
   4. Re: Centreon HA Cluster - VIP issue (Jan Friesse)


--

Message: 1
Date: Mon, 4 Sep 2023 14:15:52 +0200
From: Klaus Wenninger <kwenn...@redhat.com>
To: Cluster Labs - All topics related to open-source clustering
welcomed <users@clusterlabs.org>
Cc: David Dolan <daithido...@gmail.com>
Subject: Re: [ClusterLabs] issue during Pacemaker failover testing
Message-ID:
        wody...@mail.gmail.com>
Content-Type: text/plain; charset="utf-8"

On Mon, Sep 4, 2023 at 1:44 PM Andrei Borzenkov <arvidj...@gmail.com> wrote:

> On Mon, Sep 4, 2023 at 2:25 PM Klaus Wenninger <kwenn...@redhat.com>
> wrote:
> >
> >
> > Or go for qdevice with LMS where I would expect it to be able to really
> > go down to
> > a single node left - any of the 2 last ones - as there is still qdevice.
> > Sry for the confus

Re: [ClusterLabs] Centreon HA Cluster - VIP issue

2023-09-05 Thread Adil Bouazzaoui
Hi Ken,

Thank you so much for the feedback; much appreciated.

I suppose we go with a new *Scenario 3*: set up 2 clusters across different
DCs connected by booth; could you please clarify the points below so I can
understand better and start working on the architecture:

1- In the case of separate clusters connected by booth: should each cluster
have a quorum device for the Master/Slave elections?
2- Separate floating IPs at each cluster: please check the attached diagram
and let me know if this is exactly what you mean.
3- "To fail over, you update the DNS to point to the appropriate IP": can you
suggest any guide to work from so we can have the DNS updated automatically?

Regards
Adil Bouazzaoui

On Tue, Sep 5, 2023 at 16:48, Ken Gaillot wrote:

> Hi,
>
> The scenario you describe is still a challenging one for HA.
>
> A single cluster requires low latency and reliable communication. A
> cluster within a single data center or spanning data centers on the
> same campus can be reliable (and appears to be what Centreon has in
> mind), but it sounds like you're looking for geographical redundancy.
>
> A single cluster isn't appropriate for that. Instead, separate clusters
> connected by booth would be preferable. Each cluster would have its own
> nodes and fencing. Booth tickets would control which cluster could run
> resources.
>
> Whatever design you use, it is pointless to put a quorum tie-breaker at
> one of the data centers. If that data center becomes unreachable, the
> other one can't recover resources. The tie-breaker (qdevice for a
> single cluster or a booth arbitrator for multiple clusters) can be very
> lightweight, so it can run in a public cloud for example, if a third
> site is not available.
>
> The IP issue is separate. For that, you will need separate floating IPs
> at each cluster, on that cluster's network. To fail over, you update
> the DNS to point to the appropriate IP. That is a tricky problem
> without a universal automated solution. Some people update the DNS
> manually after being alerted of a failover. You could write a custom
> resource agent to update the DNS automatically. Either way you'll need
> low TTLs on the relevant records.
>
> On Sun, 2023-09-03 at 11:59 +, Adil BOUAZZAOUI wrote:
> > Hello,
> >
> > My name is Adil, I’m working for Tman company, we are testing the
> > Centreon HA cluster to monitor our infrastructure for 13 companies,
> > for now we are using the 100 IT license to test the platform, once
> > everything is working fine then we can purchase a license suitable
> > for our case.
> >
> > We're stuck at scenario 2: setting up Centreon HA Cluster with Master
> > & Slave on a different datacenters.
> > For scenario 1: setting up the Cluster with Master & Slave and VIP
> > address on the same network (VLAN) it is working fine.
> >
> > Scenario 1: Cluster on Same network (same DC) ==> works fine
> > Master in DC 1 VLAN 1: 172.30.9.230 /24
> > Slave in DC 1 VLAN 1: 172.30.9.231 /24
> > VIP in DC 1 VLAN 1: 172.30.9.240/24
> > Quorum in DC 1 LAN: 192.168.253.230/24
> > Poller in DC 1 LAN: 192.168.253.231/24
> >
> > Scenario 2: Cluster on different networks (2 separate DCs connected
> > with VPN) ==> still not working
> > Master in DC 1 VLAN 1: 172.30.9.230 /24
> > Slave in DC 2 VLAN 2: 172.30.10.230 /24
> > VIP: example 102.84.30.XXX. We used a public static IP from our
> > internet service provider, we thought that using a IP from a site
> > network won't work, if the site goes down then the VIP won't be
> > reachable!
> > Quorum: 192.168.253.230/24
> > Poller: 192.168.253.231/24
> >
> >
> > Our goal is to have Master & Slave nodes on different sites, so when
> > Site A goes down, we keep monitoring with the slave.
> > The problem is that we don't know how to set up the VIP address? Nor
> > what kind of VIP address will work? or how can the VIP address work
> > in this scenario? or is there anything else that can replace the VIP
> > address to make things work.
> > Also, can we use a backup poller? so if the poller 1 on Site A goes
> > down, then the poller 2 on Site B can take the lead?
> >
> > we looked everywhere (The watch, youtube, Reddit, Github...), and we
> > still couldn't get a workaround!
> >
> > the guide we used to deploy the 2 Nodes Cluster:
> >
> https://docs.centreon.com/docs/installation/installation-of-centreon-ha/overview/
> >
> > attached the 2 DCs architecture example, and also most of the
> > required screenshots/config.
> >
> >
> > We appreciate your support.
> > Thank you in advance.
> >

Re: [ClusterLabs] [EXTERNE] Re: Users Digest, Vol 104, Issue 5

2023-09-05 Thread Adil BOUAZZAOUI
Hi Jan,

This is the correct reply:

To add more information: we deployed a Centreon 2 Node HA Cluster (Master in
DC 1 & Slave in DC 2); the quorum device, which is responsible for handling
split-brain, is in DC 1 too, and the poller, which is responsible for
monitoring, is in DC 1 too. The problem is that a VIP address is required
(attached to the Master node; in case of failover it is moved to the Slave)
and we don't know what VIP we should use. We also don't know the right setup
for our current scenario so that if DC 1 goes down, the Slave in DC 2 becomes
the Master; that's why we don't know where to place the quorum device and the
poller.

I hope to get some ideas so we can set up this cluster correctly.
Thanks in advance.



Regards
Adil Bouazzaoui

Adil BOUAZZAOUI
Infrastructure & Technology Engineer
GSM: +212 703 165 758
E-mail: adil.bouazza...@tmandis.ma


From: Klaus Wenninger [mailto:kwenn...@redhat.com]
Sent: Tuesday, September 5, 2023 7:28 AM
To: Cluster Labs - All topics related to open-source clustering welcomed
Cc: jfrie...@redhat.com; Adil BOUAZZAOUI
Subject: [EXTERNE] Re: [ClusterLabs] Users Digest, Vol 104, Issue 5

Down below you replied to 2 threads. I think the latter is the one you intended 
to ... very confusing ...
Sry for adding more spam - was hesitant - but I think there is a chance it 
removes some confusion ...

Klaus

On Mon, Sep 4, 2023 at 10:29 PM Adil Bouazzaoui <adilb...@gmail.com> wrote:
Hi Jan,

To add more information: we deployed a Centreon 2 Node HA Cluster (Master in
DC 1 & Slave in DC 2); the quorum device, which is responsible for handling
split-brain, is in DC 1 too, and the poller, which is responsible for
monitoring, is in DC 1 too. The problem is that a VIP address is required
(attached to the Master node; in case of failover it is moved to the Slave)
and we don't know what VIP we should use. We also don't know the right setup
for our current scenario so that if DC 1 goes down, the Slave in DC 2 becomes
the Master; that's why we don't know where to place the quorum device and the
poller.

I hope to get some ideas so we can set up this cluster correctly.
Thanks in advance.

Adil Bouazzaoui
IT Infrastructure engineer
adil.bouazza...@tmandis.ma
adilb...@gmail.com

On Mon, Sep 4, 2023 at 15:24, <users-requ...@clusterlabs.org> wrote:
Send Users mailing list submissions to
users@clusterlabs.org

To subscribe or unsubscribe via the World Wide Web, visit
https://lists.clusterlabs.org/mailman/listinfo/users
or, via email, send a message with subject or body 'help' to
users-requ...@clusterlabs.org

You can reach the person managing the list at
users-ow...@clusterlabs.org

When replying, please edit your Subject line so it is more specific
than "Re: Contents of Users digest..."


Today's Topics:

   1. Re: issue during Pacemaker failover testing (Klaus Wenninger)
   2. Re: issue during Pacemaker failover testing (Klaus Wenninger)
   3. Re: issue during Pacemaker failover testing (David Dolan)
   4. Re: Centreon HA Cluster - VIP issue (Jan Friesse)


--

Message: 1
Date: Mon, 4 Sep 2023 14:15:52 +0200
From: Klaus Wenninger <kwenn...@redhat.com>
To: Cluster Labs - All topics related to open-source clustering
welcomed <users@clusterlabs.org>
Cc: David Dolan <daithido...@gmail.com>
Subject: Re: [ClusterLabs] issue during Pacemaker failover testing
Message-ID:
        wody...@mail.gmail.com>
Content-Type: text/plain; charset="utf-8"

On Mon, Sep 4, 2023 at 1:44 PM Andrei Borzenkov <arvidj...@gmail.com> wrote:

> On Mon, Sep 4, 2023 at 2:25 PM Klaus Wenninger <kwenn...@redhat.com>
> wrote:
> >
> >
> > Or go for qdevice with LMS where I would expect it to be able to really
> > go down to
> > a single node left - any of the 2 last ones - as there is still qdevice.
> > Sry for the confusion btw.
> >
>
> According to documentation, "LMS is also incompatible with quorum
> devices, if last_man_standing is specified in corosync.conf then the
> quorum device will be disabled".
>

That is why I said qdevice with LMS - but it was probably not explicit
enough without telling that I meant the qdevice algorithm and not
the corosync flag.

Klaus

> ___
> Manage your subscription:
> https://lists.clusterlabs.org/mailman/listinfo/users
>
> ClusterLabs home: https://www.clusterlabs.org/
>

Re: [ClusterLabs] Users Digest, Vol 104, Issue 5

2023-09-04 Thread Adil Bouazzaoui
Hi Jan,

To add more information: we deployed a Centreon 2 Node HA Cluster (Master in
DC 1 & Slave in DC 2); the quorum device, which is responsible for handling
split-brain, is in DC 1 too, and the poller, which is responsible for
monitoring, is in DC 1 too. The problem is that a VIP address is required
(attached to the Master node; in case of failover it is moved to the Slave)
and we don't know what VIP we should use. We also don't know the right setup
for our current scenario so that if DC 1 goes down, the Slave in DC 2 becomes
the Master; that's why we don't know where to place the quorum device and the
poller.

I hope to get some ideas so we can set up this cluster correctly.
Thanks in advance.

Adil Bouazzaoui
IT Infrastructure engineer
adil.bouazza...@tmandis.ma
adilb...@gmail.com

On Mon, Sep 4, 2023 at 15:24, wrote:

> Send Users mailing list submissions to
> users@clusterlabs.org
>
> To subscribe or unsubscribe via the World Wide Web, visit
> https://lists.clusterlabs.org/mailman/listinfo/users
> or, via email, send a message with subject or body 'help' to
> users-requ...@clusterlabs.org
>
> You can reach the person managing the list at
> users-ow...@clusterlabs.org
>
> When replying, please edit your Subject line so it is more specific
> than "Re: Contents of Users digest..."
>
>
> Today's Topics:
>
>1. Re: issue during Pacemaker failover testing (Klaus Wenninger)
>2. Re: issue during Pacemaker failover testing (Klaus Wenninger)
>3. Re: issue during Pacemaker failover testing (David Dolan)
>4. Re: Centreon HA Cluster - VIP issue (Jan Friesse)
>
>
> --
>
> Message: 1
> Date: Mon, 4 Sep 2023 14:15:52 +0200
> From: Klaus Wenninger 
> To: Cluster Labs - All topics related to open-source clustering
> welcomed 
> Cc: David Dolan 
> Subject: Re: [ClusterLabs] issue during Pacemaker failover testing
> Message-ID:
>  wody...@mail.gmail.com>
> Content-Type: text/plain; charset="utf-8"
>
> On Mon, Sep 4, 2023 at 1:44 PM Andrei Borzenkov 
> wrote:
>
> > On Mon, Sep 4, 2023 at 2:25 PM Klaus Wenninger 
> > wrote:
> > >
> > >
> > > Or go for qdevice with LMS where I would expect it to be able to really
> > > go down to
> > > a single node left - any of the 2 last ones - as there is still
> > > qdevice.
> > > Sry for the confusion btw.
> > >
> >
> > According to documentation, "LMS is also incompatible with quorum
> > devices, if last_man_standing is specified in corosync.conf then the
> > quorum device will be disabled".
> >
>
> That is why I said qdevice with LMS - but it was probably not explicit
> enough without telling that I meant the qdevice algorithm and not
> the corosync flag.
>
> Klaus
>
> > ___
> > Manage your subscription:
> > https://lists.clusterlabs.org/mailman/listinfo/users
> >
> > ClusterLabs home: https://www.clusterlabs.org/
> >
>
> --
>
> Message: 2
> Date: Mon, 4 Sep 2023 14:32:39 +0200
> From: Klaus Wenninger 
> To: Cluster Labs - All topics related to open-source clustering
> welcomed 
> Cc: David Dolan 
> Subject: Re: [ClusterLabs] issue during Pacemaker failover testing
> Message-ID:
> <
> calrdao0v8bxp4ajwcobkeae6pimvgg2xme6ia+ohxshesx9...@mail.gmail.com>
> Content-Type: text/plain; charset="utf-8"
>
> On Mon, Sep 4, 2023 at 1:50 PM Andrei Borzenkov 
> wrote:
>
> > On Mon, Sep 4, 2023 at 2:18 PM Klaus Wenninger 
> > wrote:
> > >
> > >
> > >
> > > On Mon, Sep 4, 2023 at 12:45 PM David Dolan 
> > wrote:
> > >>
> > >> Hi Klaus,
> > >>
> > >> With default quorum options I've performed the following on my 3 node
> > cluster
> > >>
> > >> Bring down cluster services on one node - the running services migrate
> > to another node
> > >> Wait 3 minutes
> > >> Bring down cluster services on one of the two remaining nodes - the
> > surviving node in the cluster is then fenced
> > >>
> > >> Instead of the surviving node being fenced, I hoped that the services
> > would migrate and run on that remaining node.
> > >>
> > >> Just looking for confirmation that my understanding is ok and if I'm

[ClusterLabs] Centreon HA Cluster - VIP issue

2023-09-02 Thread Adil Bouazzaoui
 Hello,

My name is Adil; I work for Tman company. We are testing the Centreon HA
cluster to monitor our infrastructure for 13 companies. For now we are
using the 100 IT licence to test the platform; once everything is working
fine, we can purchase a licence suitable for our case.

We're stuck at *scenario 2*: setting up the Centreon HA Cluster with Master &
Slave in different datacenters.
For *scenario 1*, setting up the cluster with Master & Slave and the VIP
address on the same network (VLAN), it is working fine.

*Scenario 1: Cluster on Same network (same DC) ==> works fine*
Master in DC 1 VLAN 1: 172.30.15.10 /24
Slave in DC 1 VLAN 1: 172.30.15.20 /24
VIP in DC 1 VLAN 1: 172.30.15.30/24
Quorum in DC 1 LAN: 192.168.1.10/24
Poller in DC 1 LAN: 192.168.1.20/24

*Scenario 2: Cluster on different networks (2 separate DCs connected with
VPN) ==> still not working*
Master in DC 1 VLAN 1: 172.30.15.10 /24
Slave in DC 2 VLAN 2: 172.30.50.10 /24
VIP: example 102.84.30.XXX. We used a public static IP from our internet
service provider; we thought that using an IP from a site network wouldn't
work, since if that site goes down the VIP won't be reachable.
Quorum: 192.168.1.10/24
Poller: 192.168.1.20/24

Our *goal* is to have the Master & Slave nodes on different sites, so that
when Site A goes down, we keep monitoring with the Slave.
The problem is that we don't know how to set up the VIP address, nor what
kind of VIP address will work, nor how the VIP address can work in this
scenario -- or whether there is anything else that can replace the VIP
address to make things work.
Also, can we use a backup poller, so that if poller 1 on Site A goes down,
poller 2 on Site B can take the lead?

We looked everywhere (The Watch, YouTube, Reddit, GitHub...), and we still
couldn't find a workaround!

The guide we used to deploy the 2-node cluster:
https://docs.centreon.com/docs/installation/installation-of-centreon-ha/overview/

Attached is the 2-DC architecture example.

We appreciate your support.
Thank you in advance.


Adil Bouazzaoui
IT Infrastructure Engineer
TMAN
adil.bouazza...@tmandis.ma
adilb...@gmail.com
+212 656 29 2020
___
Manage your subscription:
https://lists.clusterlabs.org/mailman/listinfo/users

ClusterLabs home: https://www.clusterlabs.org/


[ClusterLabs] Asking about Clusterlabs cross DC cluster

2023-08-21 Thread Adil Bouazzaoui
 Hello,

My name is Adil; I got your email from the ClusterLabs site.
I'm wondering if I can set up a cluster with Corosync/Pacemaker for a cross-DC
cluster?

For example:
Node 1 (Master) in VLAN 1: 172.30.100.10 /24
Node 2 (slave) in VLAN 2: 172.30.200.10 /24

Note: I deployed the Centreon HA Cluster with Corosync/Pacemaker on the same
VLAN and it's working fine.
My idea is to move the Slave node to another site (VLAN 2).

Thank you in advance


-- 


*Adil Bouazzaoui*
___
Manage your subscription:
https://lists.clusterlabs.org/mailman/listinfo/users

ClusterLabs home: https://www.clusterlabs.org/