>
> Colocation constraints may take a "node-attribute" parameter, that
> basically means, "Put this resource on a node of the same class as the
> one running resource X".
>
> In this case, you might set a "group" node attribute on all nodes:
> "1" on the three primary nodes and "2" on the three standby nodes.
>
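A sketch of how that might look; the attribute name "group", the node names, and the resource names are all illustrative:

```
# Tag each node with a class attribute (names are examples)
crm_attribute --type nodes --node node1 --name group --update 1
crm_attribute --type nodes --node node4 --name group --update 2

# A colocation constraint in the CIB that keeps rsc2 on a node whose
# "group" attribute matches that of the node currently running rsc1:
#
#   <rsc_colocation id="coloc-same-class" rsc="rsc2" with-rsc="rsc1"
#                   score="INFINITY" node-attribute="group"/>
```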
> The first thing I'd mention is that a 6-node cluster can only survive
> the loss of two nodes, since 3 of 6 votes is not a majority and thus
> not quorum. You can tweak that
> behavior with corosync quorum options, or you could add a quorum-only
> node, or use corosync's new qdevice capability to have an arbiter node.
>
>
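For the qdevice option mentioned above, a corosync.conf sketch; the qnetd hostname is a placeholder, and "ffsplit" is one of the available tie-breaking algorithms:

```
quorum {
    provider: corosync_votequorum

    # Arbiter provided by a corosync-qnetd host outside the cluster;
    # "qnetd.example.com" is a placeholder
    device {
        model: net
        votes: 1
        net {
            host: qnetd.example.com
            algorithm: ffsplit
        }
    }
}
```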
On Wed, 2017-11-08 at 23:04 -0400, Alberto Mijares wrote:
> Hi guys, nice to say hello here.
>
> I've been assigned with a very particular task: There's a
> pacemaker-based cluster with 6 nodes. A system runs on three nodes
> (group A), while the other three are hot-standby spares (group B).
>
>
Resources from group A are never supposed to be relocated individually