Hello,

Did you receive my previous email responding to your question about the cibadmin -Ql command output?
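(For reference, a minimal sketch of how the requested output can be captured for a reply — `-Q` queries the CIB and `-l` restricts the query to the local node's copy; the output filename is just an example:)

```shell
# Dump the local node's copy of the CIB as XML while the cluster is in
# the problematic state, so the scores can be inspected offline.
cibadmin -Ql > cib-live.xml
```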
Many thanks,
Dimos

On 3 Feb 2012, at 4:50 AM, Andrew Beekhof <[email protected]> wrote:

> The latest code appears to behave ok, so perhaps the problem has since been fixed.
> Can you send me the output from cibadmin -Ql when the cluster is in
> this state so I can confirm?
>
> On Mon, Jan 30, 2012 at 11:38 PM, agutxi Agustin <[email protected]> wrote:
>> Hi guys,
>> I'm trying to set up some anticolocation rules, but I'm seeing some
>> strange behaviour and not getting the desired effect, so I wonder if
>> I'm missing something or there is really a problem with my
>> configuration. If you could lend me a hand, that would be great.
>>
>> The scenario: 3 Dummy resources placed by utilization (1 core
>> per running resource) on 2 nodes, each with 2 cores of capacity.
>> Plus anticolocation rules: no 2 resources may run on the same node. (I
>> know that in this case I could enforce this with utilization alone, but this
>> is just a test case from a bigger scenario where I detected the problem.)
>> Configuration:
>> _______________________________________________________________________________
>> crm(live)# configure show
>> node vmHost1 \
>>         utilization cores="2"
>> node vmHost2 \
>>         utilization cores="2"
>> primitive DummyVM1 ocf:pacemaker:Dummy \
>>         op monitor interval="60s" timeout="60s" \
>>         op start on-fail="restart" interval="0" \
>>         op stop on-fail="ignore" interval="0" \
>>         utilization cores="1" \
>>         meta is-managed="true" migration-threshold="2" target-role="Started"
>> primitive DummyVM2 ocf:pacemaker:Dummy \
>>         op monitor interval="60s" timeout="60s" \
>>         op start on-fail="restart" interval="0" \
>>         op stop on-fail="ignore" interval="0" \
>>         utilization cores="1" \
>>         meta is-managed="true" migration-threshold="2" target-role="Started"
>> primitive DummyVM3 ocf:pacemaker:Dummy \
>>         op monitor interval="60s" timeout="60s" \
>>         op start on-fail="restart" interval="0" \
>>         op stop on-fail="ignore" interval="0" \
>>         utilization cores="1" \
>>         meta is-managed="true" migration-threshold="2" target-role="Stopped"
>> colocation antidummy12 -INF: DummyVM1 DummyVM2
>> colocation antidummy13 -INF: DummyVM1 DummyVM3
>> colocation antidummy23 -INF: DummyVM2 DummyVM3
>> property $id="cib-bootstrap-options" \
>>         dc-version="1.1.6-9971ebba4494012a93c03b40a2c58ec0eb60f50c" \
>>         cluster-infrastructure="openais" \
>>         expected-quorum-votes="2" \
>>         stonith-enabled="false" \
>>         stop-all-resources="false" \
>>         placement-strategy="utilization" \
>>         no-quorum-policy="ignore" \
>>         cluster-infrastructure="openais" \
>>         stop-orphan-resources="true" \
>>         stop-orphan-actions="true" \
>>         symmetric-cluster="true" \
>>         last-lrm-refresh="1326975274"
>> rsc_defaults $id="rsc-options" \
>>         resource-stickiness="INFINITY"
>> _______________________________________________________________________________
>>
>> Looking around for symmetric anticolocation information, I found a
>> message where Andrew Beekhof stated:
>>
>>>>>> colocation X-Y -2: X Y
>>>>>> colocation Y-X -2: Y X
>>>>>>
>>>>> the second one is implied by the first and is therefore redundant
>>>>>
>>>> If only that were true!
>>>>
>>>
>>> It is. I know exactly how my code works in this regard.
>>> More than likely a score of -2 is simply too low to have any effect.
>>
>> so I was expecting each resource to prevent any other resource from
>> running on the same node.
>>
>> Test: I start 2 resources, DummyVM1 & DummyVM2: they correctly start
>> on vmHost1 and vmHost2, as expected (I don't care about location).
>> _______________________________________________________________________________
>> crm(live)# status
>> ============
>> Last updated: Mon Jan 30 13:33:19 2012
>> Last change: Mon Jan 30 13:30:21 2012 via cibadmin on vmHost1
>> Current DC: vmHost2 - partition with quorum
>> Version: 1.1.6-9971ebba4494012a93c03b40a2c58ec0eb60f50c
>> 2 Nodes configured, 2 expected votes
>> 3 Resources configured.
>> ============
>>
>> Online: [ vmHost1 vmHost2 ]
>>
>> DummyVM1     (ocf::pacemaker:Dummy):     Started vmHost1
>> DummyVM2     (ocf::pacemaker:Dummy):     Started vmHost2
>> _______________________________________________________________________________
>>
>> Then, I start the DummyVM3 resource:
>>
>> _______________________________________________________________________________
>> crm(live)# resource start DummyVM3
>> crm(live)# status
>> ============
>> Last updated: Mon Jan 30 13:33:52 2012
>> Last change: Mon Jan 30 13:33:50 2012 via cibadmin on vmHost1
>> Current DC: vmHost2 - partition with quorum
>> Version: 1.1.6-9971ebba4494012a93c03b40a2c58ec0eb60f50c
>> 2 Nodes configured, 2 expected votes
>> 3 Resources configured.
>> ============
>>
>> Online: [ vmHost1 vmHost2 ]
>>
>> DummyVM1     (ocf::pacemaker:Dummy):     Started vmHost1
>> DummyVM2     (ocf::pacemaker:Dummy):     Started vmHost2
>> DummyVM3     (ocf::pacemaker:Dummy):     Started vmHost1
>> _______________________________________________________________________________
>>
>> and DummyVM3 immediately starts on vmHost1, though from my
>> understanding it shouldn't (anticolocation with a -INF score).
>> I think the colocation scores are being ignored; is this possible?
>> I checked with "ptest -saL" and it is not showing -INFINITY for my
>> colocation rules:
>>
>> root@vmHost1:~# ptest -saL
>> Allocation scores:
>> native_color: DummyVM3 allocation score on vmHost1: INFINITY
>> native_color: DummyVM3 allocation score on vmHost2: 0
>> native_color: DummyVM2 allocation score on vmHost1: 0
>> native_color: DummyVM2 allocation score on vmHost2: INFINITY
>> native_color: DummyVM1 allocation score on vmHost1: INFINITY
>> native_color: DummyVM1 allocation score on vmHost2: 0
>>
>> Can someone give me any hints as to what I am doing wrong?
>> Thank you, guys,
>>
>> Agustín
>>
>> "Death: Human beings make life so interesting. Do you know, that in a
>> universe so full of wonders, they have managed to invent boredom."
>> -- Terry Pratchett
>>
>> _______________________________________________
>> Pacemaker mailing list: [email protected]
>> http://oss.clusterlabs.org/mailman/listinfo/pacemaker
>>
>> Project Home: http://www.clusterlabs.org
>> Getting started: http://www.clusterlabs.org/doc/Cluster_from_Scratch.pdf
>> Bugs: http://bugs.clusterlabs.org
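(For what it's worth, the utilization-only workaround Agustín alludes to could be sketched like this — an untested sketch reusing the names from the thread, not a fix for the ignored -INF scores: with placement-strategy="utilization" and 2-core nodes, a resource that consumes the full node capacity can never share a node with another such resource.)

```shell
# Sketch: make each Dummy consume a node's entire capacity (cores="2",
# matching the 2-core vmHost nodes above), so utilization placement
# alone keeps any two of them on different nodes.
crm configure primitive DummyVM1 ocf:pacemaker:Dummy \
        op monitor interval="60s" timeout="60s" \
        utilization cores="2"
```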
