[ClusterLabs] Running several instances of a Corosync/Pacemaker cluster on a node

2016-04-26 Thread Bogdan Dobrelya
Is it possible to run several instances of a Corosync/Pacemaker cluster
on a node? Can a node be a member of several clusters, so that each of
them could place resources there? I'm sure it's doable with separate
nodes or containers, but that's not my case.

My case is to separate data-critical resources, like storage or VIPs,
from complex resources like DB or MQ clusters.

The latter should run with no-quorum-policy=ignore, as they know how to
deal with network partitions/split-brain, use their own techniques to
protect data, and don't need the external fencing that Pacemaker's
no-quorum-policy/STONITH provides.

The former must use STONITH (or a stop policy, if it's only a VIP), as
they don't know how to deal with split-brain on their own.
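
For illustration, this is roughly the split I have in mind, if the two
stacks could somehow be kept apart (a sketch with pcs, hypothetical
resource names, each block run against its own cluster):

  # Cluster A: data-critical resources, fencing required
  pcs property set stonith-enabled=true
  pcs property set no-quorum-policy=stop
  pcs resource create vip ocf:heartbeat:IPaddr2 ip=192.0.2.10 cidr_netmask=24

  # Cluster B: DB/MQ resources that handle partitions themselves
  pcs property set stonith-enabled=false
  pcs property set no-quorum-policy=ignore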

-- 
Best regards,
Bogdan Dobrelya,
Irc #bogdando

___
Users mailing list: Users@clusterlabs.org
http://clusterlabs.org/mailman/listinfo/users

Project Home: http://www.clusterlabs.org
Getting started: http://www.clusterlabs.org/doc/Cluster_from_Scratch.pdf
Bugs: http://bugs.clusterlabs.org


Re: [ClusterLabs] Running several instances of a Corosync/Pacemaker cluster on a node

2016-04-26 Thread Robert Dahlem
On 26.04.2016 10:33, Bogdan Dobrelya wrote:

> Is it possible to run several instances of a Corosync/Pacemaker cluster
> on a node? Can a node be a member of several clusters, so that each of
> them could place resources there? I'm sure it's doable with separate
> nodes or containers, but that's not my case.
>
> My case is to separate data-critical resources, like storage or VIPs,
> from complex resources like DB or MQ clusters.
>
> The latter should run with no-quorum-policy=ignore, as they know how to
> deal with network partitions/split-brain, use their own techniques to
> protect data, and don't need the external fencing that Pacemaker's
> no-quorum-policy/STONITH provides.
>
> The former must use STONITH (or a stop policy, if it's only a VIP), as
> they don't know how to deal with split-brain on their own.

And how would you cope with one of the storage nodes STONITHing one of
the DB nodes?

Regards,
Robert



Re: [ClusterLabs] Running several instances of a Corosync/Pacemaker cluster on a node

2016-04-26 Thread Michael Schwartzkopff
On Tuesday, 26 April 2016, 10:33:00, Bogdan Dobrelya wrote:
> Is it possible to run several instances of a Corosync/Pacemaker cluster
> on a node? Can a node be a member of several clusters, so that each of
> them could place resources there? I'm sure it's doable with separate
> nodes or containers, but that's not my case.
>
> My case is to separate data-critical resources, like storage or VIPs,
> from complex resources like DB or MQ clusters.
>
> The latter should run with no-quorum-policy=ignore, as they know how to
> deal with network partitions/split-brain, use their own techniques to
> protect data, and don't need the external fencing that Pacemaker's
> no-quorum-policy/STONITH provides.
>
> The former must use STONITH (or a stop policy, if it's only a VIP), as
> they don't know how to deal with split-brain on their own.

That is not possible. Corosync and Pacemaker are not able to run
multi-tenant (several independent instances on one host).

Use Docker or plain Linux containers (LXC).
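
For example, roughly like this with LXC (a rough sketch; the container
names and distro are arbitrary, and each container then runs its own
corosync/pacemaker stack and joins its own cluster):

  # two containers on the same physical host
  lxc-create -t download -n storage-cluster-node -- -d centos -r 7 -a amd64
  lxc-create -t download -n db-cluster-node -- -d centos -r 7 -a amd64

  lxc-start -n storage-cluster-node
  lxc-start -n db-cluster-node

  # a separate cluster stack inside each container
  lxc-attach -n storage-cluster-node -- yum install -y corosync pacemaker pcs
  lxc-attach -n db-cluster-node -- yum install -y corosync pacemaker pcs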

Kind regards,

Michael Schwartzkopff

-- 
[*] sys4 AG

http://sys4.de, +49 (89) 30 90 46 64, +49 (162) 165 0044
Franziskanerstraße 15, 81669 München

Registered office: München, Amtsgericht München: HRB 199263
Management board: Patrick Ben Koetter, Marc Schiffbauer
Chairman of the supervisory board: Florian Kirstein



Re: [ClusterLabs] Running several instances of a Corosync/Pacemaker cluster on a node

2016-04-26 Thread Bogdan Dobrelya
On 04/26/2016 10:49 AM, Robert Dahlem wrote:
> On 26.04.2016 10:33, Bogdan Dobrelya wrote:
> 
>> Is it possible to run several instances of a Corosync/Pacemaker cluster
>> on a node? Can a node be a member of several clusters, so that each of
>> them could place resources there? I'm sure it's doable with separate
>> nodes or containers, but that's not my case.
>>
>> My case is to separate data-critical resources, like storage or VIPs,
>> from complex resources like DB or MQ clusters.
>>
>> The latter should run with no-quorum-policy=ignore, as they know how to
>> deal with network partitions/split-brain, use their own techniques to
>> protect data, and don't need the external fencing that Pacemaker's
>> no-quorum-policy/STONITH provides.
>>
>> The former must use STONITH (or a stop policy, if it's only a VIP), as
>> they don't know how to deal with split-brain on their own.
> 
> And how would you cope with one of the storage nodes STONITHing one of
> the DB nodes?

For that case, one should use separate clusters and separate sets of nodes.
My case involves only a VIP and a DB resource. When the VIP has to be
stopped in a minority partition, there is no need for STONITH;
no-quorum-policy=stop is enough. Meanwhile, the DB resource should not
be affected and should keep running on the same node, either in a
separate cluster (if that were possible) or in the same cluster (if the
quorum policy allowed certain resources to be "excluded" from being
stopped). Too bad neither of the two is an option.

> 
> Regards,
> Robert


-- 
Best regards,
Bogdan Dobrelya,
Irc #bogdando



Re: [ClusterLabs] Running several instances of a Corosync/Pacemaker cluster on a node

2016-05-02 Thread Ken Gaillot
On 04/26/2016 03:33 AM, Bogdan Dobrelya wrote:
> Is it possible to run several instances of a Corosync/Pacemaker cluster
> on a node? Can a node be a member of several clusters, so that each of
> them could place resources there? I'm sure it's doable with separate
> nodes or containers, but that's not my case.
>
> My case is to separate data-critical resources, like storage or VIPs,
> from complex resources like DB or MQ clusters.
>
> The latter should run with no-quorum-policy=ignore, as they know how to
> deal with network partitions/split-brain, use their own techniques to
> protect data, and don't need the external fencing that Pacemaker's
> no-quorum-policy/STONITH provides.
>
> The former must use STONITH (or a stop policy, if it's only a VIP), as
> they don't know how to deal with split-brain on their own.

I don't think it's possible, though I could be wrong; it might work if
separate IPs/ports, chroots and node names are used (which is just shy
of a container ...).
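
Untested, but the idea would be that each instance lives in its own
chroot with its own corosync.conf, differing at least in cluster name,
ring address and port, something like (only the distinguishing bits
shown; addresses and paths are placeholders):

  cat > /chroots/storage/etc/corosync/corosync.conf <<'EOF'
  totem {
      version: 2
      cluster_name: storage
      transport: udpu
      interface {
          ringnumber: 0
          bindnetaddr: 192.0.2.0
          mcastport: 5405
      }
  }
  EOF
  # ... and a second config under /chroots/db/ with a different
  # cluster_name, bindnetaddr and mcastport.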

However I suspect it would not meet your goal in any case. DB and MQ
software generally do NOT have sufficient techniques to deal with a
split-brain situation -- either you lose high availability or you
corrupt data. Using no-quorum-policy=stop is fine for handling network
splits, but it does not help if a node becomes unresponsive.

Also note that Pacemaker does provide the ability to treat different
resources differently with respect to quorum and fencing, without
needing to run separate clusters. See the "requires" meta-attribute:

http://clusterlabs.org/doc/en-US/Pacemaker/1.1-pcs/html-single/Pacemaker_Explained/index.html#_resource_meta_attributes
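
For example (hypothetical resource names; a sketch, see the document
above for the exact semantics):

  # DB resource: may be started wherever the partition has quorum
  pcs resource meta galera requires=quorum

  # VIP: may only be started once failed/unseen nodes have been fenced
  pcs resource meta vip requires=fencing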

I suspect your motive for this is to be able to run a cluster without
fencing. There are certain failure scenarios that simply are not
recoverable without fencing, regardless of what the application software
can do. There is really only one case in which doing without fencing is
reasonable: when you're willing to lose your data and/or have downtime
when a situation arises that requires fencing.
