Re: [Pacemaker] cold-start to standby?

2014-03-02 Thread Nikita Staroverov

On 01.03.2014 09:14, Matthew O'Connor wrote:

Hi,

I have had a few instances recently where circumstances conspired to
bring my cluster down completely and most non-gracefully (and this was
in spite of a relatively new 10kVA UPS).  When bringing the nodes back
online, it would be enormously useful to me if they would go
automatically into standby when starting from an unknown state; that is,
when starting up with the realization that no active state exists (since
all the other nodes are down as well).

Is this possible in any of the current releases?

Thanks!!
-- Matthew



You can also do this with native Pacemaker commands.

# standby permanently
crm_attribute --node NODENAME --name standby --update on -l forever
# standby until reboot
crm_attribute --node NODENAME --name standby --update on -l reboot
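
To check the setting later, or to clear it so the node joins normally again, crm_attribute can also query and delete the attribute (a minimal sketch; NODENAME stands for the node's name):

# show the current standby value for a node
crm_attribute --node NODENAME --name standby --query
# remove the attribute entirely
crm_attribute --node NODENAME --name standby --delete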

___
Pacemaker mailing list: Pacemaker@oss.clusterlabs.org
http://oss.clusterlabs.org/mailman/listinfo/pacemaker

Project Home: http://www.clusterlabs.org
Getting started: http://www.clusterlabs.org/doc/Cluster_from_Scratch.pdf
Bugs: http://bugs.clusterlabs.org


[Pacemaker] ordering cloned resources

2014-03-02 Thread Alexandre
Hi,

I am setting up a cluster on Debian Wheezy.
I have installed Pacemaker from the Debian-provided packages (so I am
running 1.1.7-ee0730e13d124c3d58f00016c3376a1de5323cff).

I have roughly 10 nodes: some act as SAN targets (exporting block
devices over the AoE protocol) and the others act as initiators (they
are mail servers, storing email on the exported devices).
Below are the resources defined for those nodes:

xml [the raw XML resource definitions included here were stripped by the list archive]

primitive pri_dovecot lsb:dovecot \
op start interval="0" timeout="20" \
op stop interval="0" timeout="30" \
op monitor interval="5" timeout="10"
primitive pri_spamassassin lsb:spamassassin \
op start interval="0" timeout="50" \
op stop interval="0" timeout="60" \
op monitor interval="5" timeout="20"
group grp_aoe pri_aoe1
group grp_mailstore pri_dlm pri_clvmd pri_spamassassin pri_dovecot
clone cln_mailstore grp_mailstore \
meta ordered="false" interleave="true" clone-max="2"
clone cln_san grp_aoe \
meta ordered="true" interleave="true" clone-max="2"

As the cluster is opt-in (symmetric-cluster="false"), I have the
following location constraints for those hosts:

location LOC_AOE_ETHERD_1 cln_san inf: sanaoe01
location LOC_AOE_ETHERD_2 cln_san inf: sanaoe02
location LOC_MAIL_STORE_1 cln_mailstore inf: ms01
location LOC_MAIL_STORE_2 cln_mailstore inf: ms02

So far so good. I want to make sure the initiators won't try to
discover the exported devices before the targets have actually exported
them. To do so, I thought I could use the following ordering constraint:

order ORD_SAN_MAILSTORE inf: cln_san cln_mailstore
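
For reference, and assuming the crm shell translates it the usual way, that constraint should correspond to CIB XML roughly like this:

<rsc_order id="ORD_SAN_MAILSTORE" score="INFINITY" first="cln_san" then="cln_mailstore"/>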

Unfortunately, if I add this constraint the clone set "cln_mailstore"
never starts (and even stops if it was already running when I add the constraint).

Is there something wrong with this ordering rule?
Where can I find information on what's going on?
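
One way to see why the policy engine refuses to start the clone (a suggestion, assuming crm_simulate is available in the Debian pacemaker packages) is:

# show allocation scores and the actions the policy engine would take,
# checked against the live cluster
crm_simulate -sL

The pengine messages in the cluster log (usually syslog on Debian) should also name the constraint or resource that is blocking the start.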
