Re: [ClusterLabs] -INFINITY location constraint not honored?

2019-10-18 Thread Andrei Borzenkov
On 18.10.2019 12:43, Raffaele Pantaleoni wrote:
> 
> On 18/10/2019 10:21, Andrei Borzenkov wrote:
>> According to it, you have a symmetric cluster (and apparently made a typo
>> while trying to change it)
>>
>> <nvpair ... name="symmetric-cluster" value="true"/>
>> <nvpair ... name="symmectric-cluster" value="true"/>
>>
> That's correct. I tried both approaches: opt-in, explicitly allowing
> nodes to be bound to resources, and then the opt-out strategy,
> denying resources to be bound to some nodes. Both approaches failed,
> though.

According to the CIB you provided, all nodes except one - SRVDRSW01 - are in
standby. In other words, there is really no choice of where to run any resource.

So Pacemaker's behavior looks at least logical - it attempts to make sure
resources are started. After all, that is its primary goal.

Whether this is intentional I do not know. As already mentioned in this
discussion, the whole ordering and (co-)location area could certainly
benefit from better documentation.
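
(Editorial aside, not from the thread itself: a hedged sketch of the commands
that would normally apply here, using the same tools shown elsewhere in this
thread; spelling of the option name matters:

  # set the cluster-wide option with the correct spelling
  crm_attribute --name symmetric-cluster --update false
  # or, equivalently, via pcs
  pcs property set symmetric-cluster=false

  # bring standby nodes back online so placement has a real choice
  pcs node unstandby SRVDRSW02 SRVDRSW03   # older pcs: pcs cluster unstandby <node>

The node names are taken from the status output quoted below; adjust as needed.)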

> My fault? It could be, obviously. But I simply followed the
> directions found in Clusters from Scratch and Configuration Explained.
> The main issue is: pcs constraint shows an apparently correct
> constrained configuration, while crm_simulate, on the other hand,
> doesn't agree with those constraints. :D
>> On Fri, Oct 18, 2019 at 10:29 AM Raffaele Pantaleoni
>>  wrote:
>> >
>> > On 17/10/2019 18:08, Ken Gaillot wrote:
>> >> This does sound odd, possibly a bug. Can you provide the output of
>> "pcs
>> >> cluster cib" when one of the unexpected results is happening? (Strip
>> >> out any passwords or other sensitive information, and you can
>> e-mail it
>> >> to me privately if you don't want it on the list.)
>> >>
>> > There's no problem at all in sharing the information. I'm working on a
>> > sandbox to test Pacemaker and then use it on a production setup to
>> achieve
>> > high availability and fault tolerance, so I'm working on dummy machines
>> > inside our internal network.
>> > (meanwhile I added three more nodes, but the behaviour has not changed)
>> >
>> > Thanks
>> >> On Thu, 2019-10-17 at 16:55 +0200, Raffaele Pantaleoni wrote:
>> >>> On 17/10/2019 16:47, Raffaele Pantaleoni wrote:
>>  On 17/10/2019 14:51, Jan Pokorný wrote:
>> > On 17/10/19 08:22 +0200, Raffaele Pantaleoni wrote:
>> >> I'm rather new to Pacemaker, I'm performing early tests on a
>> >> set of
>> >> three virtual machines.
>> >>
>> >> I am configuring the cluster in the following way:
>> >>
>> >> 3 nodes configured
>> >> 4 resources configured
>> >>
>> >> Online: [ SRVDRSW01 SRVDRSW02 SRVDRSW03 ]
>> >>
>> >> ClusterIP  (ocf::heartbeat:IPaddr2):   Started
>> >> SRVDRSW01
>> >> CouchIP    (ocf::heartbeat:IPaddr2):   Started
>> >> SRVDRSW03
>> >> FrontEnd   (ocf::heartbeat:nginx): Started SRVDRSW01
>> >> ITATESTSERVER-DIP  (ocf::nodejs:pm2):  Started
>> >> SRVDRSW01
>> >>
>> >> with the following constraints:
>> >>
>> >> Location Constraints:
>> >>  Resource: ClusterIP
>> >>    Enabled on: SRVRDSW01 (score:200)
>> >>    Enabled on: SRVRDSW02 (score:100)
>> >>  Resource: CouchIP
>> >>    Enabled on: SRVRDSW02 (score:10)
>> >>    Enabled on: SRVRDSW03 (score:100)
>> >>    Disabled on: SRVRDSW01 (score:-INFINITY)
>> >>  Resource: FrontEnd
>> >>    Enabled on: SRVRDSW01 (score:200)
>> >>    Enabled on: SRVRDSW02 (score:100)
>> >>  Resource: ITATESTSERVER-DIP
>> >>    Enabled on: SRVRDSW01 (score:200)
>> >>    Enabled on: SRVRDSW02 (score:100)
>> >> Ordering Constraints:
>> >>  start ClusterIP then start FrontEnd (kind:Mandatory)
>> >>  start ClusterIP then start ITATESTSERVER-DIP
>> >> (kind:Mandatory)
>> >> Colocation Constraints:
>> >>  FrontEnd with ClusterIP (score:INFINITY)
>> >>  FrontEnd with ITATESTSERVER-DIP (score:INFINITY)
>> >>
>> >> I've configured the cluster with an opt in strategy using the
>> >> following
>> >> commands:
>> >>
>> >> crm_attribute --name symmectric-cluster --update false
>> >>
>> >> pcs resource create ClusterIP ocf:heartbeat:IPaddr2
>> >> ip=172.16.10.126
>> >> cidr_netmask=16 op monitor interval=30s
>> >> pcs resource create CouchIP ocf:heartbeat:IPaddr2
>> >> ip=172.16.10.128
>> >> cidr_netmask=16 op monitor interval=30s
>> >> pcs resource create FrontEnd ocf:heartbeat:nginx
>> >> configfile=/etc/nginx/nginx.conf
>> >> pcs resource create ITATESTSERVER-DIP ocf:nodejs:pm2
>> >> user=ITATESTSERVER
>> >> --force
>> >>
>> >> pcs constraint colocation add FrontEnd with ClusterIP INFINITY
>> >> pcs constraint colocation add FrontEnd with ITATESTSERVER-DIP
>> >> INFINITY
>> >>
>> >> pcs constraint order ClusterIP then FrontEnd
>> >> pcs constraint order ClusterIP 

[ClusterLabs] Pacemaker 2.0.3-rc1 now available

2019-10-18 Thread Ken Gaillot
Hi all,

I am happy to announce that source code for the first release candidate
for Pacemaker version 2.0.3 is now available at:

https://github.com/ClusterLabs/pacemaker/releases/tag/Pacemaker-2.0.3-rc1

Highlights previously discussed on this list include a dynamic cluster
recheck interval (you don't have to care about cluster-recheck-interval 
for failure-timeout or most rules now), new Pacemaker Remote options
for security hardening (listen address and TLS priorities), and Year
2038 compatibility.
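
(As a hedged illustration, not taken from the release notes themselves: with
the dynamic recheck interval, a per-resource failure-timeout such as the
following should now expire on time without tuning cluster-recheck-interval;
the resource name is hypothetical.

  pcs resource update my-resource meta failure-timeout=60s
)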

Also, crm_mon now supports the new --output-as/--output-to options, and
has some tweaks to the text and HTML output that will hopefully make it
easier to read.
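
(For example, a hedged sketch of the new options; the output path is
illustrative only:

  crm_mon --output-as=html --output-to=/var/www/html/cluster.html
)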

A couple of changes that haven't been mentioned yet:

* A new fence-reaction cluster option controls whether the local node
will stop pacemaker or panic the local host if notified of its own
fencing. This generally happens with fabric fencing (e.g. fence_scsi)
when the host and networking are still functional. The default, "stop",
is the previous behavior. The new option of "panic" makes more sense
for a node that's been fenced, so it may become the default in a future
release, but we are not doing so at this time for backward
compatibility. Therefore, if you prefer the "stop" behavior (for
example, to avoid losing logs when fenced), it is recommended to
specify it explicitly.

* We discovered that the ocf:pacemaker:pingd agent, a legacy alias for
ocf:pacemaker:ping, has actually been broken since 1.1.3 (!). Rather
than fix it, we are formally deprecating it, and will remove it in a
future release.
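
(Two hedged examples, not part of the announcement itself: the new
fence-reaction option described above and the non-deprecated ping agent might
be configured roughly as follows; host_list and the resource name are
illustrative only.

  # pin the pre-2.0.3 behavior explicitly
  pcs property set fence-reaction=stop

  # use ocf:pacemaker:ping rather than the deprecated pingd alias
  pcs resource create ping-check ocf:pacemaker:ping host_list="192.168.1.1" \
      op monitor interval=10s
)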

As usual, there were many bug fixes and log message improvements as
well. For more details about changes in this release, please see the
change log:

https://github.com/ClusterLabs/pacemaker/blob/2.0/ChangeLog

Everyone is encouraged to download, compile and test the new release.
We do many regression tests and simulations, but we can't cover all
possible use cases, so your feedback is important and appreciated.

Many thanks to all contributors of source code to this release,
including Aleksei Burlakov, Chris Lumens, Gao,Yan, Hideo Yamauchi, Jan
Pokorný, John Eckersberg, Kazunori INOUE, Ken Gaillot, Klaus Wenninger,
Konstantin Kharlamov, Munenari, Roger Zhou, S. Schuberth, Tomas
Jelinek, and Yuusuke Iida.

1.1.22-rc1, with selected backports from this release, will also be
released soon.
-- 
Ken Gaillot 

___
Manage your subscription:
https://lists.clusterlabs.org/mailman/listinfo/users

ClusterLabs home: https://www.clusterlabs.org/

Re: [ClusterLabs] -INFINITY location constraint not honored?

2019-10-18 Thread Jan Pokorný
On 18/10/19 17:59 +0200, Jan Pokorný wrote:
> On 18/10/19 11:21 +0300, Andrei Borzenkov wrote:
>> According to it, you have a symmetric cluster (and apparently made
>> a typo while trying to change it)
>> 
>> <nvpair ... name="symmetric-cluster" value="true"/>
>> <nvpair ... name="symmectric-cluster" value="true"/>
> 
> Great spot, demonstrating how multi-faceted any question/case can
> become.  We shall rather make something about it, this is _not_
> sustainable.

(e.g. make a comment about it so as to do something about it...
happy Friday :-)

-- 
Jan (Poki)



Re: [ClusterLabs] -INFINITY location constraint not honored?

2019-10-18 Thread Jan Pokorný
On 18/10/19 11:21 +0300, Andrei Borzenkov wrote:
> According to it, you have a symmetric cluster (and apparently made
> a typo while trying to change it)
> 
> <nvpair ... name="symmetric-cluster" value="true"/>
> <nvpair ... name="symmectric-cluster" value="true"/>

Great spot, demonstrating how multi-faceted any question/case can
become.  We shall rather make something about it, this is _not_
sustainable.

-- 
Jan (Poki)



Re: [ClusterLabs] Apache doesn't start under corosync with systemd

2019-10-18 Thread Reynolds, John F - San Mateo, CA - Contractor
With respect, I've given up on the ocf:heartbeat:apache module.

I've set up my Apache resource with:

# systemctl disable apache2
# crm configure primitive ncoa_apache systemd:apache2
# crm configure modgroup grp_ncoa add ncoa_apache

# crm configure show ncoa_apache
primitive ncoa_apache systemd:apache2
#

Apache doesn't start until the cluster is up. When the cluster starts,
Apache starts up on the active node. The webserver migrates with the cluster
when I move it from one node to another. That's really all I want.

Tell me why this is a bad idea.

What options or other configurations should I add to the primitive?  Please 
give the command syntax.
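
(Editorial sketch only, not an answer from the thread: a monitor operation is
the most common addition to such a primitive. In the crmsh syntax used above
it might look roughly like this; the interval and timeout values are
placeholders to be adjusted:

  crm configure primitive ncoa_apache systemd:apache2 \
      op monitor interval=30s timeout=100s
)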

John Reynolds




Re: [ClusterLabs] -INFINITY location constraint not honored?

2019-10-18 Thread Raffaele Pantaleoni


On 18/10/2019 10:21, Andrei Borzenkov wrote:

According to it, you have a symmetric cluster (and apparently made a typo
while trying to change it)

<nvpair ... name="symmetric-cluster" value="true"/>
<nvpair ... name="symmectric-cluster" value="true"/>


That's correct. I tried both approaches: opt-in, explicitly allowing nodes to be
bound to resources, and then the opt-out strategy, denying resources to be
bound to some nodes. Both approaches failed, though. My fault? It could be,
obviously. But I simply followed the directions found in Clusters from Scratch
and Configuration Explained.
The main issue is: pcs constraint shows an apparently correct constrained
configuration, while crm_simulate, on the other hand, doesn't agree with those
constraints. :D

On Fri, Oct 18, 2019 at 10:29 AM Raffaele Pantaleoni
 wrote:
>
> On 17/10/2019 18:08, Ken Gaillot wrote:
>> This does sound odd, possibly a bug. Can you provide the output of "pcs
>> cluster cib" when one of the unexpected results is happening? (Strip
>> out any passwords or other sensitive information, and you can e-mail it
>> to me privately if you don't want it on the list.)
>>
> There's no problem at all in sharing the information. I'm working on a
> sandbox to test Pacemaker and then use it on a production setup to achieve
> high availability and fault tolerance, so I'm working on dummy machines
> inside our internal network.
> (meanwhile I added three more nodes, but the behaviour has not changed)
>
> Thanks
>> On Thu, 2019-10-17 at 16:55 +0200, Raffaele Pantaleoni wrote:
>>> On 17/10/2019 16:47, Raffaele Pantaleoni wrote:
 On 17/10/2019 14:51, Jan Pokorný wrote:
> On 17/10/19 08:22 +0200, Raffaele Pantaleoni wrote:
>> I'm rather new to Pacemaker, I'm performing early tests on a
>> set of
>> three virtual machines.
>>
>> I am configuring the cluster in the following way:
>>
>> 3 nodes configured
>> 4 resources configured
>>
>> Online: [ SRVDRSW01 SRVDRSW02 SRVDRSW03 ]
>>
>> ClusterIP  (ocf::heartbeat:IPaddr2):   Started
>> SRVDRSW01
>> CouchIP    (ocf::heartbeat:IPaddr2):   Started
>> SRVDRSW03
>> FrontEnd   (ocf::heartbeat:nginx): Started SRVDRSW01
>> ITATESTSERVER-DIP  (ocf::nodejs:pm2):  Started
>> SRVDRSW01
>>
>> with the following constraints:
>>
>> Location Constraints:
>>  Resource: ClusterIP
>>    Enabled on: SRVRDSW01 (score:200)
>>    Enabled on: SRVRDSW02 (score:100)
>>  Resource: CouchIP
>>    Enabled on: SRVRDSW02 (score:10)
>>    Enabled on: SRVRDSW03 (score:100)
>>    Disabled on: SRVRDSW01 (score:-INFINITY)
>>  Resource: FrontEnd
>>    Enabled on: SRVRDSW01 (score:200)
>>    Enabled on: SRVRDSW02 (score:100)
>>  Resource: ITATESTSERVER-DIP
>>    Enabled on: SRVRDSW01 (score:200)
>>    Enabled on: SRVRDSW02 (score:100)
>> Ordering Constraints:
>>  start ClusterIP then start FrontEnd (kind:Mandatory)
>>  start ClusterIP then start ITATESTSERVER-DIP
>> (kind:Mandatory)
>> Colocation Constraints:
>>  FrontEnd with ClusterIP (score:INFINITY)
>>  FrontEnd with ITATESTSERVER-DIP (score:INFINITY)
>>
>> I've configured the cluster with an opt in strategy using the
>> following
>> commands:
>>
>> crm_attribute --name symmectric-cluster --update false
>>
>> pcs resource create ClusterIP ocf:heartbeat:IPaddr2
>> ip=172.16.10.126
>> cidr_netmask=16 op monitor interval=30s
>> pcs resource create CouchIP ocf:heartbeat:IPaddr2
>> ip=172.16.10.128
>> cidr_netmask=16 op monitor interval=30s
>> pcs resource create FrontEnd ocf:heartbeat:nginx
>> configfile=/etc/nginx/nginx.conf
>> pcs resource create ITATESTSERVER-DIP ocf:nodejs:pm2
>> user=ITATESTSERVER
>> --force
>>
>> pcs constraint colocation add FrontEnd with ClusterIP INFINITY
>> pcs constraint colocation add FrontEnd with ITATESTSERVER-DIP
>> INFINITY
>>
>> pcs constraint order ClusterIP then FrontEnd
>> pcs constraint order ClusterIP then ITATESTSERVER-DIP
>>
>> pcs constraint location ClusterIP prefers SRVRDSW01=200
>> pcs constraint location ClusterIP prefers SRVRDSW02=100
>>
>> pcs constraint location CouchIP prefers SRVRDSW02=10
>> pcs constraint location CouchIP prefers SRVRDSW03=100
>>
>> pcs constraint location FrontEnd prefers SRVRDSW01=200
>> pcs constraint location FrontEnd prefers SRVRDSW02=100
>>
>> pcs constraint location ITATESTSERVER-DIP prefers SRVRDSW01=200
>> pcs constraint location ITATESTSERVER-DIP prefers SRVRDSW02=100
>>
>> Everything seems to be ok, but when I put VMs 02 and 03 in
>> standby I'd expect
>> CouchIP not to be assigned to VM 01 because of the constraint.
>>
>> The IPaddr2 resource gets assigned to vm 01 no matter what.
>>
>> Node SRVDRSW02: standby
>> Node SRVDRSW03: 

Re: [ClusterLabs] -INFINITY location constraint not honored?

2019-10-18 Thread Andrei Borzenkov
According to it, you have a symmetric cluster (and apparently made a typo
while trying to change it)

<nvpair ... name="symmetric-cluster" value="true"/>
<nvpair ... name="symmectric-cluster" value="true"/>

On Fri, Oct 18, 2019 at 10:29 AM Raffaele Pantaleoni
 wrote:
>
> On 17/10/2019 18:08, Ken Gaillot wrote:
> > This does sound odd, possibly a bug. Can you provide the output of "pcs
> > cluster cib" when one of the unexpected results is happening? (Strip
> > out any passwords or other sensitive information, and you can e-mail it
> > to me privately if you don't want it on the list.)
> >
> There's no problem at all in sharing the information. I'm working on a
> sandbox to test Pacemaker and then use it on a production setup to achieve
> high availability and fault tolerance, so I'm working on dummy machines
> inside our internal network.
> (meanwhile I added three more nodes, but the behaviour has not changed)
>
> Thanks
> > On Thu, 2019-10-17 at 16:55 +0200, Raffaele Pantaleoni wrote:
> >>> On 17/10/2019 16:47, Raffaele Pantaleoni wrote:
>  On 17/10/2019 14:51, Jan Pokorný wrote:
>  On 17/10/19 08:22 +0200, Raffaele Pantaleoni wrote:
> > I'm rather new to Pacemaker, I'm performing early tests on a
> > set of
> > three virtual machines.
> >
> > I am configuring the cluster in the following way:
> >
> > 3 nodes configured
> > 4 resources configured
> >
> > Online: [ SRVDRSW01 SRVDRSW02 SRVDRSW03 ]
> >
> >ClusterIP  (ocf::heartbeat:IPaddr2):   Started
> > SRVDRSW01
> >CouchIP(ocf::heartbeat:IPaddr2):   Started
> > SRVDRSW03
> >FrontEnd   (ocf::heartbeat:nginx): Started SRVDRSW01
> >ITATESTSERVER-DIP  (ocf::nodejs:pm2):  Started
> > SRVDRSW01
> >
> > with the following constraints:
> >
> > Location Constraints:
> > Resource: ClusterIP
> >   Enabled on: SRVRDSW01 (score:200)
> >   Enabled on: SRVRDSW02 (score:100)
> > Resource: CouchIP
> >   Enabled on: SRVRDSW02 (score:10)
> >   Enabled on: SRVRDSW03 (score:100)
> >   Disabled on: SRVRDSW01 (score:-INFINITY)
> > Resource: FrontEnd
> >   Enabled on: SRVRDSW01 (score:200)
> >   Enabled on: SRVRDSW02 (score:100)
> > Resource: ITATESTSERVER-DIP
> >   Enabled on: SRVRDSW01 (score:200)
> >   Enabled on: SRVRDSW02 (score:100)
> > Ordering Constraints:
> > start ClusterIP then start FrontEnd (kind:Mandatory)
> > start ClusterIP then start ITATESTSERVER-DIP
> > (kind:Mandatory)
> > Colocation Constraints:
> > FrontEnd with ClusterIP (score:INFINITY)
> > FrontEnd with ITATESTSERVER-DIP (score:INFINITY)
> >
> > I've configured the cluster with an opt in strategy using the
> > following
> > commands:
> >
> > crm_attribute --name symmectric-cluster --update false
> >
> > pcs resource create ClusterIP ocf:heartbeat:IPaddr2
> > ip=172.16.10.126
> > cidr_netmask=16 op monitor interval=30s
> > pcs resource create CouchIP ocf:heartbeat:IPaddr2
> > ip=172.16.10.128
> > cidr_netmask=16 op monitor interval=30s
> > pcs resource create FrontEnd ocf:heartbeat:nginx
> > configfile=/etc/nginx/nginx.conf
> > pcs resource create ITATESTSERVER-DIP ocf:nodejs:pm2
> > user=ITATESTSERVER
> > --force
> >
> > pcs constraint colocation add FrontEnd with ClusterIP INFINITY
> > pcs constraint colocation add FrontEnd with ITATESTSERVER-DIP
> > INFINITY
> >
> > pcs constraint order ClusterIP then FrontEnd
> > pcs constraint order ClusterIP then ITATESTSERVER-DIP
> >
> > pcs constraint location ClusterIP prefers SRVRDSW01=200
> > pcs constraint location ClusterIP prefers SRVRDSW02=100
> >
> > pcs constraint location CouchIP prefers SRVRDSW02=10
> > pcs constraint location CouchIP prefers SRVRDSW03=100
> >
> > pcs constraint location FrontEnd prefers SRVRDSW01=200
> > pcs constraint location FrontEnd prefers SRVRDSW02=100
> >
> > pcs constraint location ITATESTSERVER-DIP prefers SRVRDSW01=200
> > pcs constraint location ITATESTSERVER-DIP prefers SRVRDSW02=100
> >
> > Everything seems to be ok, but when I put VMs 02 and 03 in
> > standby I'd expect
> > CouchIP not to be assigned to VM 01 because of the constraint.
> >
> > The IPaddr2 resource gets assigned to vm 01 no matter what.
> >
> > Node SRVDRSW02: standby
> > Node SRVDRSW03: standby
> > Online: [ SRVDRSW01 ]
> >
> > Full list of resources:
> >
> >ClusterIP  (ocf::heartbeat:IPaddr2):   Started
> > SRVDRSW01
> >CouchIP(ocf::heartbeat:IPaddr2):   Started
> > SRVDRSW01
> >FrontEnd   (ocf::heartbeat:nginx): Started SRVDRSW01
> >ITATESTSERVER-DIP  (ocf::nodejs:pm2):  Started
> > SRVDRSW01
> >
> > crm_simulate -sL returns the following
> >
> > ---cut---
> 

Re: [ClusterLabs] -INFINITY location constraint not honored?

2019-10-18 Thread Raffaele Pantaleoni

On 17/10/2019 18:08, Ken Gaillot wrote:

This does sound odd, possibly a bug. Can you provide the output of "pcs
cluster cib" when one of the unexpected results is happening? (Strip
out any passwords or other sensitive information, and you can e-mail it
to me privately if you don't want it on the list.)

There's no problem at all in sharing the information. I'm working on a
sandbox to test Pacemaker and then use it on a production setup to achieve
high availability and fault tolerance, so I'm working on dummy machines
inside our internal network.

(meanwhile I added three more nodes, but the behaviour has not changed)

Thanks

On Thu, 2019-10-17 at 16:55 +0200, Raffaele Pantaleoni wrote:

On 17/10/2019 16:47, Raffaele Pantaleoni wrote:

On 17/10/2019 14:51, Jan Pokorný wrote:

On 17/10/19 08:22 +0200, Raffaele Pantaleoni wrote:

I'm rather new to Pacemaker, I'm performing early tests on a
set of
three virtual machines.

I am configuring the cluster in the following way:

3 nodes configured
4 resources configured

Online: [ SRVDRSW01 SRVDRSW02 SRVDRSW03 ]

   ClusterIP  (ocf::heartbeat:IPaddr2):   Started
SRVDRSW01
   CouchIP(ocf::heartbeat:IPaddr2):   Started
SRVDRSW03
   FrontEnd   (ocf::heartbeat:nginx): Started SRVDRSW01
   ITATESTSERVER-DIP  (ocf::nodejs:pm2):  Started
SRVDRSW01

with the following constraints:

Location Constraints:
Resource: ClusterIP
  Enabled on: SRVRDSW01 (score:200)
  Enabled on: SRVRDSW02 (score:100)
Resource: CouchIP
  Enabled on: SRVRDSW02 (score:10)
  Enabled on: SRVRDSW03 (score:100)
  Disabled on: SRVRDSW01 (score:-INFINITY)
Resource: FrontEnd
  Enabled on: SRVRDSW01 (score:200)
  Enabled on: SRVRDSW02 (score:100)
Resource: ITATESTSERVER-DIP
  Enabled on: SRVRDSW01 (score:200)
  Enabled on: SRVRDSW02 (score:100)
Ordering Constraints:
start ClusterIP then start FrontEnd (kind:Mandatory)
start ClusterIP then start ITATESTSERVER-DIP
(kind:Mandatory)
Colocation Constraints:
FrontEnd with ClusterIP (score:INFINITY)
FrontEnd with ITATESTSERVER-DIP (score:INFINITY)

I've configured the cluster with an opt in strategy using the
following
commands:

crm_attribute --name symmectric-cluster --update false

pcs resource create ClusterIP ocf:heartbeat:IPaddr2
ip=172.16.10.126
cidr_netmask=16 op monitor interval=30s
pcs resource create CouchIP ocf:heartbeat:IPaddr2
ip=172.16.10.128
cidr_netmask=16 op monitor interval=30s
pcs resource create FrontEnd ocf:heartbeat:nginx
configfile=/etc/nginx/nginx.conf
pcs resource create ITATESTSERVER-DIP ocf:nodejs:pm2
user=ITATESTSERVER
--force

pcs constraint colocation add FrontEnd with ClusterIP INFINITY
pcs constraint colocation add FrontEnd with ITATESTSERVER-DIP
INFINITY

pcs constraint order ClusterIP then FrontEnd
pcs constraint order ClusterIP then ITATESTSERVER-DIP

pcs constraint location ClusterIP prefers SRVRDSW01=200
pcs constraint location ClusterIP prefers SRVRDSW02=100

pcs constraint location CouchIP prefers SRVRDSW02=10
pcs constraint location CouchIP prefers SRVRDSW03=100

pcs constraint location FrontEnd prefers SRVRDSW01=200
pcs constraint location FrontEnd prefers SRVRDSW02=100

pcs constraint location ITATESTSERVER-DIP prefers SRVRDSW01=200
pcs constraint location ITATESTSERVER-DIP prefers SRVRDSW02=100

Everything seems to be ok, but when I put VMs 02 and 03 in
standby I'd expect
CouchIP not to be assigned to VM 01 because of the constraint.

The IPaddr2 resource gets assigned to vm 01 no matter what.

Node SRVDRSW02: standby
Node SRVDRSW03: standby
Online: [ SRVDRSW01 ]

Full list of resources:

   ClusterIP  (ocf::heartbeat:IPaddr2):   Started
SRVDRSW01
   CouchIP(ocf::heartbeat:IPaddr2):   Started
SRVDRSW01
   FrontEnd   (ocf::heartbeat:nginx): Started SRVDRSW01
   ITATESTSERVER-DIP  (ocf::nodejs:pm2):  Started
SRVDRSW01

crm_simulate -sL returns the following

---cut---

native_color: CouchIP allocation score on SRVDRSW01: 0
native_color: CouchIP allocation score on SRVDRSW02: 0
native_color: CouchIP allocation score on SRVDRSW03: 0

---cut---
Why is that? I have explicitly assigned -INFINITY to the CouchIP resource
for node SRVDRSW01 (as stated by pcs constraint:
Disabled on: SRVRDSW01 (score:-INFINITY) ).
What am I missing or doing wrong?
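
(Editorial aside, a hedged sketch of how one might double-check what the CIB
actually contains; both commands are standard, though exact output varies by
version:

  pcs constraint --full             # shows constraint IDs as stored in the CIB
  cibadmin --query --scope constraints
)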


I am not that deep into these relationships, proper design
documentation with guided examples is non-existent[*].

But it occurs to me that the situation might be the inverse of
what's
been confusing for typical opt-out clusters:

https://lists.clusterlabs.org/pipermail/users/2017-April/005463.html

Have you tried avoiding:


Resource: CouchIP
  Disabled on: SRVRDSW01 (score:-INFINITY)


Yes, I already tried that, but I did it again nevertheless since I
am a
newbie. I deleted the whole set of resources and commented out the
constraint from the creation script.
The cluster was running, then I put all the nodes in standby and
brought