Re: [ClusterLabs] Cloned resource is restarted on all nodes if one node fails

2021-08-10 Thread Andreas Janning
Hi All,

I have just tried assigning equal location scores to both nodes and it does indeed fix my problem. I think Andrei Borzenkov's explanation is spot on; that is what is happening in the cluster. I think when initially setting up the cluster, I used different scores to define a "main" and
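For illustration only, the equal-score fix described here might look like the following pcs commands, assuming a hypothetical clone named apache-clone and the node names used later in this thread:

    # give both nodes the same preference so neither acts as "main"
    pcs constraint location apache-clone prefers pacemaker-test-1=100
    pcs constraint location apache-clone prefers pacemaker-test-2=100

With equal scores there is no preferred node to fail back to, so the failure of one instance no longer triggers the re-allocation described in the explanation referenced above.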

Re: [ClusterLabs] Cloned resource is restarted on all nodes if one node fails

2021-08-09 Thread Andrei Borzenkov
On 09.08.2021 22:57, Reid Wahl wrote:
> On Mon, Aug 9, 2021 at 6:19 AM Andrei Borzenkov wrote:
>> On 09.08.2021 16:00, Andreas Janning wrote:
>>> Hi,
>>>
>>> yes, by "service" I meant the apache-clone resource.
>>>
>>> Maybe I can give a more stripped down and detailed example:
>>>
>>> *Given

Re: [ClusterLabs] Cloned resource is restarted on all nodes if one node fails

2021-08-09 Thread Reid Wahl
On Mon, Aug 9, 2021 at 6:19 AM Andrei Borzenkov wrote:
> On 09.08.2021 16:00, Andreas Janning wrote:
>> Hi,
>>
>> yes, by "service" I meant the apache-clone resource.
>>
>> Maybe I can give a more stripped down and detailed example:
>>
>> *Given the following configuration:*
>>

Re: [ClusterLabs] Cloned resource is restarted on all nodes if one node fails

2021-08-09 Thread Strahil Nikolov via Users
> <nvpair name="statusurl" value="http://localhost/server-status"/>

Can you show the apache config for the status page? It must be accessible only from localhost (127.0.0.1) and should not be reachable from the other nodes.

Best Regards,
Strahil Nikolov
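A status-page configuration of the kind asked about here might look like this sketch for Apache 2.4 (the file path and directives are illustrative, not taken from the thread):

    # /etc/httpd/conf.d/server-status.conf (illustrative sketch)
    ExtendedStatus On
    <Location "/server-status">
        SetHandler server-status
        # answer only requests from the local machine, so the
        # status page is not reachable from the other nodes
        Require local
    </Location>

The monitor of the apache resource agent polls statusurl, so if that URL can be answered by anything other than the local instance, a failure on one node can be misattributed to another, which appears to be the concern behind this question.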

Re: [ClusterLabs] Cloned resource is restarted on all nodes if one node fails

2021-08-09 Thread Andrei Borzenkov
On 09.08.2021 16:00, Andreas Janning wrote:
> Hi,
>
> yes, by "service" I meant the apache-clone resource.
>
> Maybe I can give a more stripped down and detailed example:
>
> *Given the following configuration:*
> [root@pacemaker-test-1 cluster]# pcs cluster cib --config
>

Re: [ClusterLabs] Cloned resource is restarted on all nodes if one node fails

2021-08-09 Thread Andreas Janning
Hi,

yes, by "service" I meant the apache-clone resource.

Maybe I can give a more stripped down and detailed example:

*Given the following configuration:*
[root@pacemaker-test-1 cluster]# pcs cluster cib --config
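The configuration itself is truncated in the archive. As a rough reconstruction only (resource names, parameters, and scores are hypothetical, not the poster's actual config), a setup of the shape discussed in this thread could be created like this:

    # cloned apache resource monitored via its status URL
    pcs resource create apache ocf:heartbeat:apache \
        statusurl="http://localhost/server-status" \
        op monitor interval=10s
    pcs resource clone apache
    # unequal scores: pacemaker-test-1 is preferred as the "main" node
    pcs constraint location apache-clone prefers pacemaker-test-1=100
    pcs constraint location apache-clone prefers pacemaker-test-2=50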

Re: [ClusterLabs] Cloned resource is restarted on all nodes if one node fails

2021-08-09 Thread Andrei Borzenkov
On Mon, Aug 9, 2021 at 3:07 PM Andreas Janning wrote:
>
> Hi,
>
> I have just tried your suggestion by adding
> <nvpair name="interleave" value="true"/>
> to the clone configuration.
> Unfortunately, the behavior stays the same. The service is still restarted on
> the passive node when

Re: [ClusterLabs] Cloned resource is restarted on all nodes if one node fails

2021-08-09 Thread Andreas Janning
Hi,

I have just tried your suggestion by adding
<nvpair name="interleave" value="true"/>
to the clone configuration.
Unfortunately, the behavior stays the same. The service is still restarted on the passive node when crashing it on the active node.

Regards

Andreas

On Mon, 9 Aug 2021 at 13:45, Vladislav

Re: [ClusterLabs] Cloned resource is restarted on all nodes if one node fails

2021-08-09 Thread Vladislav Bogdanov
Hi.

I'd suggest setting your clone meta attribute 'interleave' to 'true'.

Best,
Vladislav

On August 9, 2021 1:43:16 PM Andreas Janning wrote:

Hi all,

we recently experienced an outage in our pacemaker cluster and I would like to understand how we can configure the cluster to avoid this
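For reference, the suggested meta attribute can be set with pcs roughly like this (assuming the clone is named apache-clone):

    # make each clone instance depend only on the peer instance
    # on the same node, not on every instance of the other clone
    pcs resource meta apache-clone interleave=true

Note that interleave only changes how ordering and colocation constraints between clones are evaluated, which is consistent with the later report in this thread that it did not change the behavior here.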

Re: [ClusterLabs] Cloned resource is restarted on all nodes if one node fails

2021-08-09 Thread Strahil Nikolov via Users
I've set up something similar: a VIP that runs everywhere, using globally-unique=true (where the cluster controls which node is passive and which is active). This allows the VIP to be everywhere while only one node answers the requests, and the web server was running everywhere with config and
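A globally-unique VIP clone of the kind described here is typically built on ocf:heartbeat:IPaddr2; a minimal sketch with made-up addresses and names:

    # one IP cloned across both nodes; globally-unique makes each
    # clone instance distinct, and IPaddr2 then load-shares the address
    pcs resource create vip ocf:heartbeat:IPaddr2 \
        ip=192.0.2.10 cidr_netmask=24 \
        clone globally-unique=true clone-max=2 clone-node-max=2

With clone-node-max=2 the cluster may place both instances on one node, so the address exists everywhere but a single node answers the requests, as described above.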

[ClusterLabs] Cloned resource is restarted on all nodes if one node fails

2021-08-09 Thread Andreas Janning
Hi all,

we recently experienced an outage in our pacemaker cluster and I would like to understand how we can configure the cluster to avoid this problem in the future.

First our basic setup:
- CentOS 7
- Pacemaker 1.1.23
- Corosync 2.4.5
- Resource-Agents 4.1.1

Our cluster is composed of
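When chasing this kind of unexpected restart, the allocation scores that drive placement decisions can be inspected with crm_simulate (a general debugging step, not something taken from this thread):

    # show cluster actions and allocation scores from the live CIB
    crm_simulate --live-check --show-scores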