[Pacemaker] How to see default values
Hi all, which CLI commands should I use to see my cluster's default values? For example, how do I see what the default action is when the stop operation fails for a given resource?

Thank you,
Alberto

___
Pacemaker mailing list: Pacemaker@oss.clusterlabs.org http://oss.clusterlabs.org/mailman/listinfo/pacemaker Project Home: http://www.clusterlabs.org Getting started: http://www.clusterlabs.org/doc/Cluster_from_Scratch.pdf Bugs: http://bugs.clusterlabs.org
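[A sketch of commands that can answer this, assuming a crmsh setup of that era (SLES HA); only explicitly set values appear in the CIB, while built-in defaults are documented in the daemons' metadata. The default reaction to a failed stop is, if I recall correctly, to fence the node when STONITH is enabled (on-fail="fence"), otherwise to block the resource.]

```shell
# Values explicitly set in your cluster (defaults are not stored in the CIB):
crm configure show

# Cluster properties handled by the policy engine, including their defaults:
crm ra info pengine

# Operation defaults and per-resource op settings for a given resource:
crm configure show <resource-id>
```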
[Pacemaker] correct way to deploy a CLVM configuration with pacemaker
Hi all, I'm trying to deploy a CLVM configuration; my VGs will be active on only one node at a time, and I will use ext3 rather than a clustered filesystem. I configured clvmd and dlm this way:

primitive cluster-dlm ocf:pacemaker:controld \
  op monitor interval="60" timeout="60" \
  meta is-managed="true"
primitive cluster-lvm ocf:lvm2:clvmd \
  params daemon_timeout="30" \
  meta is-managed="true"
group cluster-base cluster-dlm cluster-lvm meta is-managed="true"
clone cluster-infra cluster-base meta interleave="true" is-managed="true"

Suppose now that I want to configure a resource to manage my VG, something like this:

primitive wfq-lv-rs ocf:heartbeat:LVM \
  params volgrpname="WFQ_vg" exclusive="yes" \
  op start interval="0" \
  op monitor interval="120s" timeout="60s" \
  op stop interval="0" timeout="30s" \
  meta is-managed="true"

I think that my LVM resource should somehow depend on cluster-infra; in my opinion the following dependencies should be honored:

1. the resource that manages the VG, wfq-lv-rs, must be started only after the resource that manages CLVM;
2. because the resource that manages CLVM is inside a clone and will be started on all nodes, wfq-lv-rs must be started only on a node where the clone instance containing the CLVM resource is online.

If the above assumptions are correct, how can this be expressed in pacemaker?

Thank you,
Alberto
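[The two dependencies above map directly onto an order constraint and a colocation constraint against the clone. A minimal sketch in crmsh syntax, using the resource names from the question; the constraint IDs are made up:]

```shell
crm configure <<'EOF'
# 1. start wfq-lv-rs only after the clone carrying dlm/clvmd has started
order ord-clvm-before-vg inf: cluster-infra wfq-lv-rs
# 2. run wfq-lv-rs only on a node where a clone instance is active
colocation col-vg-with-clvm inf: wfq-lv-rs cluster-infra
EOF
```

[With interleave="true" already set on the clone, ordering against it should consider the local clone instance rather than the whole clone.]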
Re: [Pacemaker] fencing best practices for virtual environments
Hi Lars, thank you very much for the deep explanation.

Regards,
Alberto

On 09/10/2012 03:42 PM, Lars Marowsky-Bree wrote:
> On 2012-09-10T14:40:43, Alberto Menichetti wrote:
>> Sorry, maybe I'm missing something, but consider this scenario (and
>> remember that, being a two-node cluster, I had to set
>> no-quorum-policy="ignore"):
>> 1. the vCenter is unavailable
>> 2. an event occurs that partitions the cluster
>> 3. at this point, both nodes could try to start a filesystem resource,
>>    thus compromising data safety.
> Because of 1, the nodes cannot fence, but they will not start resources
> without a successful fence completion. Hence, in the case of a network
> partition with an unavailable fencing setup and no-quorum-policy=ignore,
> resources will continue to run where they were running before the
> partition. (Which is the best one could hope for anyway.)
> If there's a real outage of one of the nodes, *and* the vCenter is down,
> then the surviving node won't take over because it can't fence. That
> leaves your data intact and the service down.
> Regards, Lars

--
TAI S.r.l.
Alberto Menichetti - System Engineer
50141 Firenze - Via Pazzagli, 2 - Voice: +39 055 42661 - Fax: +39 055 4266356
56125 Pisa - Viale Gramsci, 12 - Voice: +39 050 220221 - Fax: +39 050 24421
e-mail: alb.meniche...@tai.it - http://www.tai.it
Re: [Pacemaker] fencing best practices for virtual environments
On 09/10/2012 12:15 PM, Lars Marowsky-Bree wrote:
> On 2012-09-10T10:45:30, Alberto Menichetti wrote:
>> thank you for the quick response. Maybe SPOF is not the best definition,
>> but when the vCenter is unavailable the safety of my data is not
>> guaranteed.
> The safety remains guaranteed; the availability of your service wouldn't
> be ;-)

Sorry, maybe I'm missing something, but consider this scenario (and remember that, being a two-node cluster, I had to set no-quorum-policy="ignore"):

1. the vCenter is unavailable
2. an event occurs that partitions the cluster
3. at this point, both nodes could try to start a filesystem resource, thus compromising data safety.

>> The fencing device I'd like to use is SBD.
> If you have a working SBD setup, you do not need the external/vcenter
> plugin any more.
>> What about using two different fencing mechanisms?
> But why? It doesn't provide any benefit.
>> Do you think it could introduce problems in the cluster, or is it a
>> suggested/supported solution? I'd like to use external/vcenter as first
>> choice and rely on SBD only if the first stonith mechanism fails (for
>> example, because of vCenter unavailability).
> Why?
> Regards, Lars
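[For the "external/vcenter first, SBD as fallback" idea, later Pacemaker releases (around 1.1.8) support fencing levels, which try devices in order. A hedged sketch in crmsh syntax, assuming two already-configured stonith resources named stonith-vcenter and stonith-sbd and nodes named node1/node2 (all names illustrative):]

```shell
crm configure fencing_topology \
  node1: stonith-vcenter stonith-sbd \
  node2: stonith-vcenter stonith-sbd
```

[With this in place, SBD is only attempted for a node if the vCenter-based fence fails; as Lars notes, a working SBD setup alone is already sufficient.]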
Re: [Pacemaker] fencing best practices for virtual environments
On 09/09/2012 09:53 PM, Lars Marowsky-Bree wrote:
> On 2012-09-09T13:30:36, Alberto Menichetti wrote:
>> I've successfully configured and tested the stonith plugin
>> "external/vcenter", but this plugin introduces a single point of failure
>> in my cluster infrastructure because it depends on the availability of
>> the vCenter (which is, in the customer environment, a virtual machine).
> It's not exactly a single point of failure, since you need two failures
> for this to matter - the first failure being the one that causes the
> fence, and a second one for the vcenter instance to be down at that time.

Hi Lars, thank you for the quick response. Maybe SPOF is not the best definition, but when the vCenter is unavailable the safety of my data is not guaranteed.

>> I was thinking of introducing an additional fencing device, to be used
>> when the vCenter is unavailable; is this a suggested deployment? The
>> fencing device I'd like to use is SBD.
> If you have a working SBD setup, you do not need the external/vcenter
> plugin any more.

What about using two different fencing mechanisms? Do you think it could introduce problems in the cluster, or is it a suggested/supported solution? I'd like to use external/vcenter as first choice and rely on SBD only if the first stonith mechanism fails (for example, because of vCenter unavailability).

> Regards, Lars

Take care,
Alberto
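[For reference, a minimal SBD setup on SLES HA looks roughly like the following; the shared-disk path is a placeholder, not a real device, and the resource name is made up:]

```shell
# Initialise the shared LUN once, from any node:
sbd -d /dev/disk/by-id/<shared-lun> create

# On every node, point the sbd daemon at the device, e.g. in
# /etc/sysconfig/sbd:  SBD_DEVICE="/dev/disk/by-id/<shared-lun>"

# Then configure the stonith resource:
crm configure primitive stonith-sbd stonith:external/sbd \
  params sbd_device="/dev/disk/by-id/<shared-lun>"
```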
[Pacemaker] fencing best practices for virtual environments
Hi all, I'm setting up a two-node pacemaker cluster (SLES HA Extension) on VMware vSphere 5. I've successfully configured and tested the stonith plugin "external/vcenter", but this plugin introduces a single point of failure in my cluster infrastructure because it depends on the availability of the vCenter (which is, in the customer environment, a virtual machine).

I was thinking of introducing an additional fencing device, to be used when the vCenter is unavailable; is this a suggested deployment? The fencing device I'd like to use is SBD. Are there best practices or validated configurations for a deployment like this?

Thank you.
Alberto
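[For context, an external/vcenter stonith primitive typically looks like the sketch below; the server name, credential-store path, and VM names are placeholders, not values from this setup:]

```shell
crm configure primitive stonith-vcenter stonith:external/vcenter \
  params VI_SERVER="vcenter.example.com" \
         VI_CREDSTORE="/root/.vicredentials.xml" \
         HOSTLIST="node1=VMNODE1;node2=VMNODE2" \
         RESETPOWERON="0"
```

[HOSTLIST maps cluster node names to vSphere VM names; RESETPOWERON="0" leaves a fenced VM powered off rather than restarting it.]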