On Thu, Aug 4, 2011 at 4:28 PM, Ulrich Windl
<ulrich.wi...@rz.uni-regensburg.de> wrote:
>>>> Dan Frincu <df.clus...@gmail.com> wrote on 03.08.2011 at 13:28 in
> message
> <CADQRkwiFCEUnq-i9Dtv6AbjQz4Z_e792=3is81zv1eqdrnj...@mail.gmail.com>:
>> Hi,
>>
>> On Wed, Aug 3, 2011 at 2:22 PM,  <alain.mou...@bull.net> wrote:
>> > Hi & Thanks
>> >
>> > I don't think the 1000 or 5000 value makes any difference,
>>
>> The exact values make little difference; what matters is having the higher score at the moment.
>
> Hi!
>
> Isn't the stickiness effectively based on the failcount?

No.  A long long time ago there was failure-stickiness, but that was a
bad idea that we got rid of.
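These days failure handling is configured separately from stickiness, via the
migration-threshold and failure-timeout resource meta attributes.  A minimal
sketch in crm shell syntax (the values are only illustrative):

    rsc_defaults $id="rsc-options" \
            migration-threshold="3" \
            failure-timeout="60s"

With that, a resource is pushed away from a node once it has failed there
three times, and failures older than 60 seconds are ignored again; the
stickiness score itself never changes with the failcount.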

> We have one resource
> that has a location constraint for one node with a weight of 500000 and a
> stickiness of 100000. The resource runs on a different node and shows no
> tendency to move back (not even after restarts).
>
> Somehow the implementation of stickiness is not what one might expect. I'd
> expect stickiness to be related to RUNNING resources.

It is.
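A rough illustration of the scoring model (rsc1, node1 and node2 below are
made-up names): stickiness is simply added to the score of the node the
resource is currently active on, nothing else.  So with

    location prefer-node1 rsc1 500000: node1
    rsc_defaults resource-stickiness="100000"

a resource currently active on node2 scores 100000 there versus 500000 on
node1, and you would expect it to move back to node1.  If it stays put, some
other constraint or default is contributing score to the current node, and
the allocation scores will show where it comes from (see the snippet at the
end of this mail).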

> I see little sense in
> keeping a resource on a node after it has been restarted.
>
> Why? Usually you want stickiness to prevent unexpected downtime or negative
> side effects caused by migration (e.g. all users losing their connections).
> But as a restart has these side effects anyway, I see little sense in
> the current implementation.
>
> Regards,
> Ulrich
>
>>
>> > so the rsc_options could make it work?
>>
>> Yes, I believe so.
>>
>> > But do you also have the order with a clone?
>>
>> No.
>>
>> > Because on another of my configurations, I also have
>> > property $id="cib-bootstrap-options" \
>> >        default-resource-stickiness="5000"
>> > and the resource does not fail back automatically ... so ...
>> > Could somebody explain?
>>
>> Try the following:
>> crm_verify -LVVVV 2>&1 | grep stick
>>
>> And see what scores (weights) are assigned to the resources. Based on those
>> weights, the behavior might make more sense.
>>
>> HTH,
>> Dan
>>
>> > Thanks
>> > Alain
>> >
>> >
>> >
>> > From:    Dan Frincu <df.clus...@gmail.com>
>> > To:      General Linux-HA mailing list <linux-ha@lists.linux-ha.org>
>> > Date:    03/08/2011 13:00
>> > Subject: Re: [Linux-HA] location and orders : Question about a behavior
>> > ...
>> > Sent by: linux-ha-boun...@lists.linux-ha.org
>> >
>> >
>> >
>> > Hi,
>> >
>> > On Tue, Aug 2, 2011 at 6:06 PM,  <alain.mou...@bull.net> wrote:
>> >> Hi
>> >>
>> >> I have this simple configuration of locations and orders between
>> >> resources group-1, group-2 and clone-1
>> >> (on a two-node HA cluster with Pacemaker-1.1.2-7 / corosync-1.2.3-21):
>> >>
>> >> location loc1-group-1   group-1 +100: node2
>> >> location loc1-group-2   group-2 +100: node3
>> >>
>> >> order order-group-1   inf: group-1   clone-1
>> >> order order-group-2   inf: group-2   clone-1
>> >>
>> >> property $id="cib-bootstrap-options" \
>> >>        dc-version="1.1.2-f059ec7ced7a86f18e5490b67ebf4a0b963bccfe" \
>> >>        cluster-infrastructure="openais" \
>> >>        expected-quorum-votes="2" \
>> >>        stonith-enabled="true" \
>> >>        no-quorum-policy="ignore" \
>> >>        default-resource-stickiness="5000" \
>> >
>> > I use it as:
>> > rsc_defaults $id="rsc-options" \
>> >        resource-stickiness="1000"
>> > Instead of:
>> > property $id="cib-bootstrap-options" \
>> >        default-resource-stickiness="5000"
>> > And the behavior is the expected one, no failback.
>> >
>> > HTH,
>> > Dan
>> >
>> >>
>> >> (and no current cli- preferences)
>> >>
>> >> When I stop node2, group-1 is correctly migrated to node3.
>> >> But when node2 is up again and I start Pacemaker on it again,
>> >> group-1 automatically comes back to node2, and I wonder why?
>> >>
>> >> I have another, similar configuration with the same location constraints
>> >> and the same default-resource-stickiness value, but without an order
>> >> constraint involving a clone resource, and there the group does not come
>> >> back automatically. But I don't understand why this order constraint
>> >> would change the behavior ...
>> >>
>> >> Thanks for your help
>> >> Alain Moullé
>> >>
>> >
>> >
>> >
>> > --
>> > Dan Frincu
>> > CCNA, RHCE
>>
>>
>>
>> --
>> Dan Frincu
>> CCNA, RHCE
>
>
>
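For cases like the ones above, dumping the allocation scores directly is often
quicker than grepping crm_verify output.  A rough sketch, assuming your build
still ships ptest (newer builds call it crm_simulate):

    # show the scores the policy engine computed for the live cluster
    ptest -L -s 2>&1 | grep group-1

    # and set stickiness the recommended way, as a resource default
    crm configure rsc_defaults resource-stickiness="1000"

The rsc_defaults form is what Dan suggests above; the old
default-resource-stickiness cluster property is deprecated, and having both
set at once makes it easy to misread which value is actually in effect.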
_______________________________________________
Linux-HA mailing list
Linux-HA@lists.linux-ha.org
http://lists.linux-ha.org/mailman/listinfo/linux-ha
See also: http://linux-ha.org/ReportingProblems
