Hi,

On 11. 05. 21 at 17:31, Andrei Borzenkov wrote:
On 11.05.2021 18:20, Alastair Basden wrote:
Hi,

So, I think the following would do it:
pcs constraint location resourceClone rule role=master score=100 uname eq node1
pcs constraint location resourceClone rule role=master score=50 uname eq node2


A single location constraint may have multiple rules; I would assume pcs
supports that. It is certainly supported by crmsh.

Yes, it is supported by pcs. First, create a location rule constraint with 'pcs constraint location ... rule'. Then you can add more rules to it with the 'pcs constraint rule add' command.
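For example, a sketch (untested; the constraint id in the second command is a placeholder, the real one can be found with 'pcs constraint --full'):

pcs constraint location resourceClone rule role=master score=100 '#uname' eq node1
pcs constraint rule add location-resourceClone role=master score=50 '#uname' eq node2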

But, I'm unsure about uname.  In the xml example you showed, it had
#uname.  Is that correct, or do I use uname without the hash?


This should be #uname - it is a special attribute.

Correct, the attribute is #uname. You just need to prevent your shell from interpreting the # sign. Either do \#uname or '#uname'.

Regards,
Tomas

So perhaps:
pcs constraint location resourceClone rule role=master score=50 \#uname eq node2

Cheers,
Alastair.

On Tue, 11 May 2021, Andrei Borzenkov wrote:

On Tue, May 11, 2021 at 10:50 AM Alastair Basden
<a.g.bas...@durham.ac.uk> wrote:

Hi Andrei, all,

So, what I want to achieve is that if both nodes are up, node1
preferentially has drbd as master.  If that node fails, then node2
should
become master.  If node1 then comes back online, it should become master
again.

I also want to avoid node3 and node4 ever running drbd, since they don't
have the disks.
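(A sketch of what this could look like using the pcs rule syntax discussed at the top of this thread; untested, the constraint id is a placeholder, and note Andrei's caveat below about overriding the DRBD agent's own master scores:)

pcs constraint location resourcedrbd0Clone rule role=master score=100 '#uname' eq node1
pcs constraint rule add <constraint-id> role=master score=50 '#uname' eq node2
pcs constraint location resourcedrbd0Clone avoids node3
pcs constraint location resourcedrbd0Clone avoids node4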

For the link below about promotion scores, what is the pcs command to
achieve this?  I'm unfamiliar with where the xml goes...


I do not normally use PCS, so I am not familiar with its syntax. I assume
there should be documentation that describes how to define location
constraints with rules. Maybe someone who is familiar with it can
provide an example.



I notice that drbd9 has an auto-promotion feature; perhaps that would help
here, so I can forget about configuring drbd in pacemaker? Is that how it
is supposed to work? I.e., I can just concentrate on the overlying file
system.
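(For reference: DRBD 9's auto-promote is on by default and is configured in the resource's options section. A minimal sketch, assuming the resource is named disk0 as in the commands below:)

resource disk0 {
  options {
    auto-promote yes;
  }
  # device/disk/node definitions omitted
}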

Sorry that I'm being a bit slow about all this.

Thanks,
Alastair.

On Tue, 11 May 2021, Andrei Borzenkov wrote:

On 10.05.2021 20:36, Alastair Basden wrote:
Hi Andrei,

Thanks.  So, in summary, I need to:
pcs resource create resourcedrbd0 ocf:linbit:drbd drbd_resource=disk0 op monitor interval=60s
pcs resource master resourcedrbd0Clone resourcedrbd0 master-max=1 master-node-max=1 clone-max=2 clone-node-max=1 notify=true

pcs constraint location resourcedrbd0Clone prefers node1=100
pcs constraint location resourcedrbd0Clone prefers node2=50
pcs constraint location resourcedrbd0Clone avoids node3
pcs constraint location resourcedrbd0Clone avoids node4

Does this mean that it will prefer to run as master on node1, and slave on node2?

No. I already told you so.

If not, how can I achieve that?


The DRBD resource agent sets master scores based on disk state. If you
statically override this decision, you risk promoting a stale copy,
which means data loss (I do not know if the agent allows it; hopefully
not, but then it will keep attempting to promote the wrong copy and
eventually fail). But if you insist, it is documented:

https://clusterlabs.org/pacemaker/doc/en-US/Pacemaker/2.0/html/Pacemaker_Explained/s-promotion-scores.html
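(As an aside: on a live cluster, 'crm_simulate -sL' prints the allocation and promotion scores currently in effect, which is one way to see what the agent has set; output format varies by Pacemaker version.)

crm_simulate -sL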


Also, statically biasing a single node means the workload will be
relocated every time that node becomes available again, which usually
implies additional downtime. That is something normally avoided (which
is why resource stickiness exists).
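(For example, a cluster-wide default stickiness can be set; a sketch, exact syntax may vary between pcs versions:)

pcs resource defaults resource-stickiness=100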
_______________________________________________
Manage your subscription:
https://lists.clusterlabs.org/mailman/listinfo/users

ClusterLabs home: https://www.clusterlabs.org/
