Hi,

On Wed, Feb 24, 2010 at 10:45:16AM -0800, Bob Schatz wrote:
> Dejan,
> 
> 
> Sorry for the delay in responding.
> 
> Comments inline with [BS] and questions with "QUESTION #"
> 
> 
> Thanks!
> 
> Bob
> 
> 
> 
> ----- Original Message ----
> From: Dejan Muhamedagic <[email protected]>
> To: General Linux-HA mailing list <[email protected]>
> Sent: Fri, February 19, 2010 4:29:01 AM
> Subject: Re: [Linux-HA] Command line option to fail back a master/slave 
> resource
> 
> Hi,
> 
> On Thu, Feb 18, 2010 at 05:18:21PM -0800, Bob Schatz wrote:
> > Thanks Marian and Dejan!
> > 
> > I did these steps for fail back:
> > 
> > # crm resource meta SS0 delete target-role
> > # crm resource demote ms-SS0
> > # crm resource promote ms-SS0
> > 
> > I noticed that if I type too fast between the "demote" and the
> > "promote", then ms-SS0 ends up going MASTER->SLAVE->MASTER on the
> > same node.
> 
> Are you sure that the transition actually takes place? Maybe the
> second command aborts it half way.
> 
> [BS] Thanks. I think that they are being aborted.  I see this in ha-debug 
> from the DC (log is attached):
> 
> Feb 19 16:43:17 fc12-64-2 crmd: [20062]: debug: te_update_diff: Processing 
> diff (cib_delete): 0.395.287 -> 0.396.1 (S_IDLE)
> Feb 19 16:43:17 fc12-64-2 crmd: [20062]: debug: get_xpath_object: No match 
> for //cib_update_result//diff-added//crm_config in 
> /notify/cib_update_result/diff
> Feb 19 16:43:17 fc12-64-2 crmd: [20062]: info: abort_transition_graph: 
> need_abort:59 - Triggered transition abort (complete=1) : Non-status change
> Feb 19 16:43:17 fc12-64-2 crmd: [20062]: info: need_abort: Aborting on change 
> to admin_epoch
> Feb 19 16:43:17 fc12-64-2 crmd: [20062]: debug: s_crmd_fsa: Processing 
> I_PE_CALC: [ state=S_IDLE cause=C_FSA_INTERNAL origin=abort_transition_graph ]
> Feb 19 16:43:17 fc12-64-2 crmd: [20062]: info: do_state_transition: State 
> transition S_IDLE -> S_POLICY_ENGINE [ input=I_PE_CALC cause=C_FSA_INTERNAL 
> origin=abort_transition_graph ]
> Feb 19 16:43:17 fc12-64-2 crmd: [20062]: info: do_state_transition: All 2 
> cluster nodes are eligible to run resources.
> Feb 19 16:43:17 fc12-64-2 crmd: [20062]: debug: do_fsa_action: actions:trace: 
> // A_DC_TIMER_STOP
> Feb 19 16:43:17 fc12-64-2 crmd: [20062]: debug: do_fsa_action: actions:trace: 
> // A_INTEGRATE_TIMER_STOP
> Feb 19 16:43:17 fc12-64-2 crmd: [20062]: debug: do_fsa_action: actions:trace: 
> // A_FINALIZE_TIMER_STOP
> Feb 19 16:43:17 fc12-64-2 crmd: [20062]: debug: do_fsa_action: actions:trace: 
> // A_PE_INVOKE
> Feb 19 16:43:17 fc12-64-2 crmd: [20062]: info: do_pe_invoke: Query 188: 
> Requesting the current CIB: S_POLICY_ENGINE
> Feb 19 16:43:17 fc12-64-2 cib: [20058]: info: log_data_element: cib:diff: - 
> <cib admin_epoch="0" epoch="395" num_updates="287" >
> Feb 19 16:43:17 fc12-64-2 cib: [20058]: info: log_data_element: cib:diff: -  
> <configuration >
> Feb 19 16:43:17 fc12-64-2 cib: [20058]: info: log_data_element: cib:diff: -   
>  <resources >
> Feb 19 16:43:17 fc12-64-2 cib: [20058]: info: log_data_element: cib:diff: -   
>    <master id="ms-SS0" >
> Feb 19 16:43:17 fc12-64-2 cib: [20058]: info: log_data_element: cib:diff: -   
>      <primitive id="SS0" >
> Feb 19 16:43:17 fc12-64-2 cib: [20058]: info: log_data_element: cib:diff: -   
>        <meta_attributes id="SS0-meta_attributes" >
> Feb 19 16:43:17 fc12-64-2 cib: [20058]: info: log_data_element: cib:diff: -   
>          <nvpair id="SS0-meta_attributes-target-role" name="target-role" 
> value="Started" __crm_diff_marker__="removed:top" />
> Feb 19 16:43:17 fc12-64-2 cib: [20058]: info: log_data_element: cib:diff: -   
>        </meta_attributes>
> Feb 19 16:43:17 fc12-64-2 cib: [20058]: info: log_data_element: cib:diff: -   
>      </primitive>
> Feb 19 16:43:17 fc12-64-2 cib: [20058]: info: log_data_element: cib:diff: -   
>    </master>
> Feb 19 16:43:17 fc12-64-2 cib: [20058]: info: log_data_element: cib:diff: -   
>  </resources>
> Feb 19 16:43:17 fc12-64-2 cib: [20058]: info: log_data_element: cib:diff: -  
> </configuration>
> Feb 19 16:43:17 fc12-64-2 cib: [20058]: info: log_data_element: cib:diff: - 
> </cib>
> Feb 19 16:43:17 fc12-64-2 cib: [20058]: info: log_data_element: cib:diff: + 
> <cib admin_epoch="0" epoch="396" num_updates="1" />
> 
> and this:
> 
> Feb 19 16:43:21 fc12-64-2 lrmd: [20059]: info: rsc:SS0:1:112: demote
> Feb 19 16:43:21 fc12-64-2 crmd: [20062]: debug: run_graph: Transition 13 
> (Complete=6, Pending=1, Fired=1, Skipped=0, Incomplete=6, 
> Source=/var/lib/pengine/pe-input-0.bz2): In-progress
> Feb 19 16:43:21 fc12-64-2 crmd: [20062]: debug: te_update_diff: Processing 
> diff (cib_modify): 0.397.4 -> 0.398.1 (S_TRANSITION_ENGINE)
> Feb 19 16:43:21 fc12-64-2 crmd: [20062]: debug: get_xpath_object: No match 
> for //cib_update_result//diff-added//crm_config in 
> /notify/cib_update_result/diff
> Feb 19 16:43:21 fc12-64-2 crmd: [20062]: info: abort_transition_graph: 
> need_abort:59 - Triggered transition abort (complete=0) : Non-status change
> Feb 19 16:43:21 fc12-64-2 crmd: [20062]: info: update_abort_priority: Abort 
> priority upgraded from 0 to 1000000
> Feb 19 16:43:21 fc12-64-2 crmd: [20062]: info: update_abort_priority: Abort 
> action done superceeded by restart
> Feb 19 16:43:21 fc12-64-2 crmd: [20062]: info: need_abort: Aborting on change 
> to admin_epoch
> Feb 19 16:43:21 fc12-64-2 crmd: [20062]: debug: run_graph: Transition 13 
> (Complete=6, Pending=1, Fired=0, Skipped=1, Incomplete=5, 
> Source=/var/lib/pengine/pe-input-0.bz2): In-progress
> Feb 19 16:43:21 fc12-64-2 cib: [20058]: info: log_data_element: cib:diff: - 
> <cib admin_epoch="0" epoch="397" num_updates="4" >
> Feb 19 16:43:21 fc12-64-2 cib: [20058]: info: log_data_element: cib:diff: -  
> <configuration >
> Feb 19 16:43:21 fc12-64-2 cib: [20058]: info: log_data_element: cib:diff: -   
>  <resources >
> Feb 19 16:43:21 fc12-64-2 cib: [20058]: info: log_data_element: cib:diff: -   
>    <master id="ms-SS0" >
> Feb 19 16:43:21 fc12-64-2 cib: [20058]: info: log_data_element: cib:diff: -   
>      <meta_attributes id="ms-SS0-meta_attributes" >
> Feb 19 16:43:21 fc12-64-2 cib: [20058]: info: log_data_element: cib:diff: -   
>        <nvpair value="Slss[5379]:2010/02/19_16:43:22 DEBUG: ss_demote() START 
> SS0 
> ss[5379]:2010/02/19_16:43:22 DEBUG: ss_status() START SS0
> 
> QUESTION #1: Wouldn't the "crm" commands block until the action
> (delete/demote/promote) completes?  Is there a blocking option?

No, but you can open an enhancement bugzilla. What you did was
state0 -> state1 -> state0, and the cluster remaining in state0
is correct.
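
Until such a blocking option exists, a small polling wrapper can
serialize the steps. A minimal sketch, assuming Python 3.7+: `crm_mon -1`
is the usual one-shot status command, but `wait_for` and the usage lines
are illustrative, not anything the crm shell itself provides.

```python
import subprocess
import time

# Sketch only: "crm resource demote/promote" return immediately, so a
# wrapper can poll the cluster status between the two commands.  The
# status command is passed in as a list, so the loop itself can be
# exercised without a live cluster.
def wait_for(pattern, status_cmd, tries=30, delay=1.0):
    """Poll status_cmd until its output contains pattern."""
    for _ in range(tries):
        out = subprocess.run(status_cmd, capture_output=True,
                             text=True).stdout
        if pattern in out:
            return True
        time.sleep(delay)
    return False

# Hypothetical usage against a live cluster:
#   subprocess.run(["crm", "resource", "demote", "ms-SS0"], check=True)
#   if wait_for("Slave", ["crm_mon", "-1"]):
#       subprocess.run(["crm", "resource", "promote", "ms-SS0"], check=True)
```

The same loop could of course be written as a few lines of shell around
`crm_mon -1 | grep`.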

> QUESTION #2:  Why would these operations abort?  They are serial commands, aren't they?

We have to ask Andrew about that.

> QUESTION #3: I noticed in the attached log that the "pre_notify for
> demote" function is called in the agent, and then the operation is
> aborted before the "promote" function is called in the agent.  I
> assume this is expected, since an operation can be aborted at any
> time.

Yes.

> > I do not know if this is expected or not.
> 
> Probably not. BTW, did you try resource cleanup (in case a
> resource failed)?
> 
> 
> [BS] I have not tried that yet.  I did look at
> "crm configure show" and I did not see any state which looked
> like it needed cleanup.

You'd see that with crm_mon. But obviously it's not the case here.

> QUESTION #4: Would "crm configure show" show all of the
> transient state that a "resource cleanup" cleans?

No, "configure show" displays only the static configuration. The
status is kept elsewhere in the CIB. The important parts you can
see with crm_mon (or crm status).

> > Am I doing the correct command sequence?
> > 
> > Also, if I remove the "target-role" from the primitive permanently then it 
> > appears that
> > the master resources are not balanced across the cluster.  Is this expected?
> 
> No. There must be another reason. You can use ptest scores to
> check placement.
> 
> 
> 
> [BS] I have not retested this yet since I was trying to figure
> out the other issue.  However, I did run a ptest after the
> above command sequence and I see:
> 
> Allocation scores:
> clone_color: ms-SS0 allocation score on fc12-64-1: 100
> clone_color: ms-SS0 allocation score on fc12-64-2: 0
> clone_color: SS0:0 allocation score on fc12-64-1: 601
> clone_color: SS0:0 allocation score on fc12-64-2: 0
> clone_color: SS0:1 allocation score on fc12-64-1: 100
> clone_color: SS0:1 allocation score on fc12-64-2: 1001
> native_color: SS0:1 allocation score on fc12-64-1: 100
> native_color: SS0:1 allocation score on fc12-64-2: 1001
> native_color: SS0:0 allocation score on fc12-64-1: 601
> native_color: SS0:0 allocation score on fc12-64-2: -1000000

Is this score OK? Do you have both master and slave running?

> SS0:1 promotion score on fc12-64-2: 1000000
> SS0:0 promotion score on fc12-64-1: 1000000

> QUESTION #5:  The promotion score for SS0:0 and SS0:1 is
> 1000000. How does the CRM use the "location" constraint when
> making a decision if the scores are the same?

I think it's pseudo-random. Not intentionally though. CRM also
tries to balance running resources between nodes.
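
For what it's worth, the balancing idea can be pictured with a toy
tie-breaker. This is only an illustration, not Pacemaker's placement
code; `pick_node` and the data are made up, with node names borrowed
from this thread.

```python
# Toy illustration only -- not Pacemaker's real algorithm.  When two
# nodes tie on score (e.g. both promotion scores are 1000000), break
# the tie by preferring the node that currently runs fewer resources.
def pick_node(scores, running):
    best = max(scores.values())
    tied = sorted(n for n, s in scores.items() if s == best)
    # fewest running resources wins; sorted() keeps the result stable
    return min(tied, key=lambda n: running.get(n, 0))

scores = {"fc12-64-1": 1000000, "fc12-64-2": 1000000}
running = {"fc12-64-1": 5, "fc12-64-2": 3}
print(pick_node(scores, running))  # -> fc12-64-2
```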

BTW, you have quite a few resource operations, so you may want to
increase the lrmd max-children count (the number of allowed
parallel operations on one node). That should really be supported
by the heartbeat init script, but I think it isn't. You can open
an enhancement bugzilla for that too.

Thanks,

Dejan

> Thanks,
> 
> Bob
> 
> Thanks,
> 
> Dejan
> 
> 
> > Thanks very much!
> > 
> > Bob
> > 
> > 
> > ----- Original Message ----
> > From: Dejan Muhamedagic <[email protected]>
> > To: General Linux-HA mailing list <[email protected]>
> > Sent: Thu, February 18, 2010 7:28:29 AM
> > Subject: Re: [Linux-HA] Command line option to fail back a master/slave 
> > resource
> > 
> > Hi Marian,
> > 
> > On Thu, Feb 18, 2010 at 03:18:59PM +0100, Dejan Muhamedagic wrote:
> > > Hi,
> > > 
> > > On Thu, Feb 18, 2010 at 03:50:28PM +0200, Marian Marinov wrote:
> > > > On Thursday 18 February 2010 15:23:13 Dejan Muhamedagic wrote:
> > > > > Hi,
> > > > > 
> > > > > On Thu, Feb 18, 2010 at 03:14:05PM +0200, Marian Marinov wrote:
> > > > > > I had almost identical problem as you.
> > > > > >
> > > > > > I'm currently working on a solution for this problem. I hope
> > > > > > next week I'll have that part finished, and I'll file the
> > > > > > enhancement bugzilla for that feature.
> > > > > 
> > > > > You mean you're preparing a patch?
> > > > 
> > > > Yup, I already asked you what the right way of handling the
> > > > situation is, so I decided to fix it.
> > > 
> > > Great! This will be more or less the first patch for crm from
> > > the community.
> > 
> > There is already an enhancement bugzilla for this:
> > 
> > http://developerbugs.linux-foundation.org/show_bug.cgi?id=2315
> > 
> > Probably the most straightforward way is to use XPath. Note that
> > the shell shouldn't create target-roles indiscriminately; it is
> > probably better to remove all of those beneath the top-level
> > resource (clone/group).
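
To picture that cleanup: this is not the shell's implementation, just a
sketch of the idea on a made-up CIB fragment, using Python's
xml.etree.

```python
import xml.etree.ElementTree as ET

# Sketch only: drop target-role meta attributes from primitives nested
# inside a top-level clone/ms, keeping the one on the top-level
# resource itself.  The CIB fragment below is hypothetical.
cib = ET.fromstring("""
<resources>
  <master id="ms-SS0">
    <meta_attributes id="ms-SS0-meta_attributes">
      <nvpair id="ms-SS0-meta_attributes-target-role"
              name="target-role" value="Started"/>
    </meta_attributes>
    <primitive id="SS0">
      <meta_attributes id="SS0-meta_attributes">
        <nvpair id="SS0-meta_attributes-target-role"
                name="target-role" value="Started"/>
      </meta_attributes>
    </primitive>
  </master>
</resources>
""")

for top in cib:  # each top-level resource (clone/ms/group)
    for prim in top.findall(".//primitive"):
        for meta in prim.findall("meta_attributes"):
            for nv in meta.findall("nvpair[@name='target-role']"):
                meta.remove(nv)

# Only the ms-level target-role survives:
print([nv.get("id") for nv in cib.iter("nvpair")])
```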
> > 
> > Thanks,
> > 
> > Dejan
> > 
> > 
> > > Cheers,
> > > 
> > > Dejan
> > > 
> > > > But currently I'm still testing the replication awareness of the mysql 
> > > > RA. 
> > > > After I finish with it I'll continue with the crm.
> > > > 
> > > > Regards,
> > > > Marian
> > > > 
> > > > > 
> > > > > > For now, what you can do is remove the target-role meta attribute 
> > > > > > from
> > > > > > the resources you want to promote using:
> > > > > > crm> resource meta RESOURCE_PRIMITIVE delete target-role
> > > > > 
> > > > > Oh, completely forgot that it can be done this way too. Thanks
> > > > > for mentioning it.
> > > > > 
> > > > > Dejan
> > > > > 
> > > > > > You have to remove the target role only from the local primitive 
> > > > > > not from
> > > > > > the clone.
> > > > > >
> > > > > > This is how we deal with the problem.
> > > > > >
> > > > > > Regards,
> > > > > > Marian
> > > > > >
> > > > > > On Thursday 18 February 2010 14:56:03 Dejan Muhamedagic wrote:
> > > > > > > Hi,
> > > > > > >
> > > > > > > On Tue, Feb 16, 2010 at 08:01:10PM -0800, Bob Schatz wrote:
> > > > > > > > Hi,
> > > > > > > >
> > > > > > > > I have configured 8 master/slave resources on two virtual 
> > > > > > > > machines
> > > > > > > > named fc12-64-1 and fc12-64-2.
> > > > > > > >
> > > > > > > > They are running Fedora Core 12 64-bit with the versions of
> > > > > > > > pacemaker from "yum install pacemaker":
> > > > > > > >
> > > > > > > >   Name       : heartbeat
> > > > > > > >   Version    : 3.0.0
> > > > > > > >
> > > > > > > >   Name       : pacemaker
> > > > > > > >   Version    : 1.0.5
> > > > > > > >
> > > > > > > >   Name       : pacemaker-libs
> > > > > > > >   Version    : 1.0.5
> > > > > > > >
> > > > > > > >   Name       : cluster-glue
> > > > > > > >   Version    : 1.0
> > > > > > > >
> > > > > > > >   Name       : cluster-glue-libs
> > > > > > > >   Version    : 1.0
> > > > > > > >
> > > > > > > > My requirements are:
> > > > > > > >
> > > > > > > > 1. Four resources start as master on each node, and the slave for
> > > > > > > >    each resource starts on the other node - i.e.
> > > > > > > >
> > > > > > > >    SS0 (master) on fc12-64-1
> > > > > > > >    SS0 (slave) on fc12-64-2
> > > > > > > >
> > > > > > > > 2. After a failover (power off one node) I have all 8 resources
> > > > > > > >    running as master on one node.
> > > > > > > >
> > > > > > > > 3. I do not want automatic failback when a node comes back.  I only
> > > > > > > >    want it to occur under operator control.
> > > > > > > >
> > > > > > > > 4. If the process associated with each resource dies, it will be
> > > > > > > >    restarted as a slave and the other node will convert the process
> > > > > > > >    to master.
> > > > > > > >
> > > > > > > > My problem:
> > > > > > > >
> > > > > > > > I am able to start both nodes and have four masters on each 
> > > > > > > > node with
> > > > > > > > a slave on the second node.  Also, failover works as expected - 
> > > > > > > > all 8
> > > > > > > > resources are master on the remaining node if one node dies and 
> > > > > > > > if I
> > > > > > > > have process death I only failover the one resource.
> > > > > > > >
> > > > > > > > However, I am not sure how I can cause a fail back to occur 
> > > > > > > > from the
> > > > > > > > command line (requirement #3)
> > > > > > > >
> > > > > > > > I start with this to figure out how to do a failback:
> > > > > > > >
> > > > > > > > # crm_mon -n
> > > > > > > >
> > > > > > > > ============
> > > > > > > > Last updated: Tue Feb 16 19:44:12 2010
> > > > > > > > Stack: Heartbeat
> > > > > > > > Current DC: fc12-64-1 (d7b30d08-d835-4014-b9c6-ebf53099cbe3) - partition with quorum
> > > > > > > > Version: 1.0.5-ee19d8e83c2a5d45988f1cee36d334a631d84fc7
> > > > > > > > 2 Nodes configured, unknown expected votes
> > > > > > > > 8 Resources configured.
> > > > > > > > ============
> > > > > > > >
> > > > > > > > Node fc12-64-1 (d7b30d08-d835-4014-b9c6-ebf53099cbe3): online
> > > > > > > >         SS6:0   (ocf::omneon:ss) Master
> > > > > > > >         SS3:0   (ocf::omneon:ss) Master
> > > > > > > >         SS7:0   (ocf::omneon:ss) Master
> > > > > > > >         SS0:0   (ocf::omneon:ss) Master
> > > > > > > >         SS4:0   (ocf::omneon:ss) Master
> > > > > > > >         SS1:0   (ocf::omneon:ss) Master
> > > > > > > >         SS5:0   (ocf::omneon:ss) Master
> > > > > > > >         SS2:0   (ocf::omneon:ss) Master
> > > > > > > > Node fc12-64-2 (b69df3a6-a630-4edb-adf4-28727f8c1222): online
> > > > > > > >         SS0:1   (ocf::omneon:ss) Slave
> > > > > > > >         SS2:1   (ocf::omneon:ss) Slave
> > > > > > > >         SS1:1   (ocf::omneon:ss) Slave
> > > > > > > >         SS3:1   (ocf::omneon:ss) Slave
> > > > > > > >         SS5:1   (ocf::omneon:ss) Slave
> > > > > > > >         SS4:1   (ocf::omneon:ss) Slave
> > > > > > > >         SS7:1   (ocf::omneon:ss) Slave
> > > > > > > >         SS6:1   (ocf::omneon:ss) Slave
> > > > > > > >
> > > > > > > > And tried these steps to do a failback:
> > > > > > > >
> > > > > > > > # crm resource migrate ms-SS0 fc12-64-2
> > > > > > > > Error performing operation: ms-SS0 is already active on 
> > > > > > > > fc12-64-2
> > > > > > > >
> > > > > > > > ====> which makes sense since it is a "slave" on fc12-64-2
> > > > > > > >
> > > > > > > > # crm resource
> > > > > > > > crm(live)resource# promote ms-SS0:1
> > > > > > > > ERROR: ms-SS0:1 is not a master-slave resource
> > > > > > > > crm(live)resource# promote SS0:1
> > > > > > > > ERROR: SS0:1 is not a master-slave resource
> > > > > > > > crm(live)resource# promote SS0
> > > > > > > > ERROR: SS0 is not a master-slave resource
> > > > > > > > crm(live)resource# promote ms-SS0
> > > > > > > > Multiple attributes match name=target-role
> > > > > > > >   Value: Started        (id=ms-SS0-meta_attributes-target-role)
> > > > > > > >   Value: Started        (id=SS0-meta_attributes-target-role)
> > > > > > > > Error performing operation: Required data for this CIB API call not found
> > > > > > > > crm(live)resource# demote ms-SS0
> > > > > > > > Multiple attributes match name=target-role
> > > > > > > >   Value: Started        (id=ms-SS0-meta_attributes-target-role)
> > > > > > > >   Value: Started        (id=SS0-meta_attributes-target-role)
> > > > > > > > Error performing operation: Required data for this CIB API call not found
> > > > > > > > crm(live)resource#
> > > > > > > >
> > > > > > > > Since I have a location constraint for ms-SS0 to fc12-64-1, I 
> > > > > > > > tried
> > > > > > > > the same operation using ms-SS1 which has a location constraint 
> > > > > > > > for
> > > > > > > > fc12-64-2.
> > > > > > > >
> > > > > > > > This gave me the same messages:
> > > > > > > >
> > > > > > > > crm(live)resource# promote ms-SS1
> > > > > > > > Multiple attributes match name=target-role
> > > > > > > >   Value: Started        (id=ms-SS1-meta_attributes-target-role)
> > > > > > > >   Value: Started        (id=SS1-meta_attributes-target-role)
> > > > > > > > Error performing operation: Required data for this CIB API call not found
> > > > > > > > crm(live)resource# demote ms-SS1
> > > > > > > > Multiple attributes match name=target-role
> > > > > > > >   Value: Started        (id=ms-SS1-meta_attributes-target-role)
> > > > > > > >   Value: Started        (id=SS1-meta_attributes-target-role)
> > > > > > > > Error performing operation: Required data for this CIB API call not found
> > > > > > > >
> > > > > > > > Could you tell me what I am doing wrong?
> > > > > > >
> > > > > > > There are multiple target-roles, i.e. for both the clone and the
> > > > > > > resource which is cloned. The tools can't decide which one to
> > > > > > > change. The crm shell will try to deal with that with the next
> > > > > > > Pacemaker release (I think that there's already an enhancement
> > > > > > > bugzilla open).  In the meantime, you can edit your configuration
> > > > > > > and remove the meta attributes from primitives.
> > > > > > >
> > > > > > > Thanks,
> > > > > > >
> > > > > > > Dejan
> > > > > > >
> > > > > > > > My configuration file is attached below.
> > > > > > > >
> > > > > > > >
> > > > > > > > Thanks,
> > > > > > > >
> > > > > > > > Bob
> > > > > > > >
> > > > > > > > --------------------------- my configuration ---------------------------
> > > > > > > > node $id="b69df3a6-a630-4edb-adf4-28727f8c1222" fc12-64-2
> > > > > > > > node $id="d7b30d08-d835-4014-b9c6-ebf53099cbe3" fc12-64-1
> > > > > > > > primitive SS0 ocf:omneon:ss \
> > > > > > > >         params ss_resource="SS0" \
> > > > > > > >         params ssconf="/tmp/config.0" \
> > > > > > > >         op monitor interval="59s" role="Master" timeout="30s" \
> > > > > > > >         op monitor interval="60s" role="Slave" timeout="28" \
> > > > > > > >         meta target-role="Started"
> > > > > > > > primitive SS1 ocf:omneon:ss \
> > > > > > > >         params ss_resource="SS1" \
> > > > > > > >         params ssconf="/tmp/config.1" \
> > > > > > > >         op monitor interval="59s" role="Master" timeout="30s" \
> > > > > > > >         op monitor interval="60s" role="Slave" timeout="28" \
> > > > > > > >         meta target-role="Started"
> > > > > > > > primitive SS2 ocf:omneon:ss \
> > > > > > > >         params ss_resource="SS2" \
> > > > > > > >         params ssconf="/tmp/config.2" \
> > > > > > > >         op monitor interval="59s" role="Master" timeout="30s" \
> > > > > > > >         op monitor interval="60s" role="Slave" timeout="28" \
> > > > > > > >         meta target-role="Started"
> > > > > > > > primitive SS3 ocf:omneon:ss \
> > > > > > > >         params ss_resource="SS3" \
> > > > > > > >         params ssconf="/tmp/config.3" \
> > > > > > > >         op monitor interval="59s" role="Master" timeout="30s" \
> > > > > > > >         op monitor interval="60s" role="Slave" timeout="28" \
> > > > > > > >         meta target-role="Started"
> > > > > > > > primitive SS4 ocf:omneon:ss \
> > > > > > > >         params ss_resource="SS4" \
> > > > > > > >         params ssconf="/tmp/config.4" \
> > > > > > > >         op monitor interval="59s" role="Master" timeout="30s" \
> > > > > > > >         op monitor interval="60s" role="Slave" timeout="28" \
> > > > > > > >         meta target-role="Started"
> > > > > > > > primitive SS5 ocf:omneon:ss \
> > > > > > > >         params ss_resource="SS5" \
> > > > > > > >         params ssconf="/tmp/config.5" \
> > > > > > > >         op monitor interval="59s" role="Master" timeout="30s" \
> > > > > > > >         op monitor interval="60s" role="Slave" timeout="28" \
> > > > > > > >         meta target-role="Started"
> > > > > > > > primitive SS6 ocf:omneon:ss \
> > > > > > > >         params ss_resource="SS6" \
> > > > > > > >         params ssconf="/tmp/config.6" \
> > > > > > > >         op monitor interval="59s" role="Master" timeout="30s" \
> > > > > > > >         op monitor interval="60s" role="Slave" timeout="28" \
> > > > > > > >         meta target-role="Started"
> > > > > > > > primitive SS7 ocf:omneon:ss \
> > > > > > > >         params ss_resource="SS7" \
> > > > > > > >         params ssconf="/tmp/config.7" \
> > > > > > > >         op monitor interval="59s" role="Master" timeout="30s" \
> > > > > > > >         op monitor interval="60s" role="Slave" timeout="28" \
> > > > > > > >         meta target-role="Started"
> > > > > > > > ms ms-SS0 SS0 \
> > > > > > > >         meta clone-max="2" notify="true" globally-unique="false" target-role="Started"
> > > > > > > > ms ms-SS1 SS1 \
> > > > > > > >         meta clone-max="2" notify="true" globally-unique="false" target-role="Started"
> > > > > > > > ms ms-SS2 SS2 \
> > > > > > > >         meta clone-max="2" notify="true" globally-unique="false" target-role="Started"
> > > > > > > > ms ms-SS3 SS3 \
> > > > > > > >         meta clone-max="2" notify="true" globally-unique="false" target-role="Started"
> > > > > > > > ms ms-SS4 SS4 \
> > > > > > > >         meta clone-max="2" notify="true" globally-unique="false" target-role="Started"
> > > > > > > > ms ms-SS5 SS5 \
> > > > > > > >         meta clone-max="2" notify="true" globally-unique="false" target-role="Started"
> > > > > > > > ms ms-SS6 SS6 \
> > > > > > > >         meta clone-max="2" notify="true" globally-unique="false" target-role="Started"
> > > > > > > > ms ms-SS7 SS7 \
> > > > > > > >         meta clone-max="2" notify="true" globally-unique="false" target-role="Started"
> > > > > > > > location ms-SS0-master-w1 ms-SS0 \
> > > > > > > >         rule $id="ms-SS0-master-w1-rule" $role="master" 100: #uname eq fc12-64-1
> > > > > > > > location ms-SS1-master-w1 ms-SS1 \
> > > > > > > >         rule $id="ms-SS1-master-w1-rule" $role="master" 100: #uname eq fc12-64-2
> > > > > > > > location ms-SS2-master-w1 ms-SS2 \
> > > > > > > >         rule $id="ms-SS2-master-w1-rule" $role="master" 100: #uname eq fc12-64-1
> > > > > > > > location ms-SS3-master-w1 ms-SS3 \
> > > > > > > >         rule $id="ms-SS3-master-w1-rule" $role="master" 100: #uname eq fc12-64-2
> > > > > > > > location ms-SS4-master-w1 ms-SS4 \
> > > > > > > >         rule $id="ms-SS4-master-w1-rule" $role="master" 100: #uname eq fc12-64-1
> > > > > > > > location ms-SS5-master-w1 ms-SS5 \
> > > > > > > >         rule $id="ms-SS5-master-w1-rule" $role="master" 100: #uname eq fc12-64-2
> > > > > > > > location ms-SS6-master-w1 ms-SS6 \
> > > > > > > >         rule $id="ms-SS6-master-w1-rule" $role="master" 100: #uname eq fc12-64-1
> > > > > > > > location ms-SS7-master-w1 ms-SS7 \
> > > > > > > >         rule $id="ms-SS7-master-w1-rule" $role="master" 100: #uname eq fc12-64-2
> > > > > > > > property $id="cib-bootstrap-options" \
> > > > > > > >         dc-version="1.0.5-ee19d8e83c2a5d45988f1cee36d334a631d84fc7" \
> > > > > > > >         cluster-infrastructure="Heartbeat" \
> > > > > > > >         stonith-enabled="false" \
> > > > > > > >         symmetric-cluster="true"
> > > > > >
> > > > > > _______________________________________________
> > > > > > Linux-HA mailing list
> > > > > > [email protected]
> > > > > > http://lists.linux-ha.org/mailman/listinfo/linux-ha
> > > > > > See also: http://linux-ha.org/ReportingProblems
> > > > > 
> > > > 
> > > > -- 
> > > > Best regards,
> > > > Marian Marinov
> > > 
> > > 
> > > 
> > 
> 



