On Aug 8, 2007, at 12:59 PM, Klemens Kittan wrote:
On Monday, 6 August 2007, at 16:36, Andrew Beekhof wrote:
...
critical, we tried to add another monitor operation with role="Slave", but then none of the nodes was promoted to master initially...
with the same interval?
try adding:
<op id="drbd0_mon_11" name="monitor" interval="11s" timeout="5s"/>
tried that and it works! Thank you!
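For reference, the working configuration then has one monitor operation per role, each with a distinct interval; a minimal sketch (the ids, timeouts, and role attributes here are assumptions based on the snippet above):

```xml
<!-- one monitor per role; the intervals must differ, otherwise the
     CRM cannot tell the two operations apart and promotion breaks -->
<op id="drbd0_mon_10" name="monitor" interval="10s" timeout="5s" role="Master"/>
<op id="drbd0_mon_11" name="monitor" interval="11s" timeout="5s" role="Slave"/>
```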
no. stickiness only controls where resources are run, not what state they're in. the drbd agent should be setting the correct master preference using crm_master...
Is there some documentation on crm_master and how it is used and configured?
crm_master is basically an alias for crm_attribute that automagically populates a number of fields (i.e. the attribute name, the scope, and the current host). about all the RA needs to do is specify the preference. it's designed to be used exclusively by the RA (not by humans).
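To make the division of labour concrete, here is a hypothetical sketch (not the actual drbd RA; the state names and score values are assumptions) of how a master/slave RA might drive crm_master. The function only prints the command it would run, so the intent stays visible:

```shell
#!/bin/sh
# Hypothetical sketch of an RA publishing its master preference.
# The state names and scores are illustrative assumptions, not the
# real drbd agent's values.
master_pref_cmd() {
    case "$1" in
        Primary)   echo "crm_master -v 100" ;;  # currently master: strong preference
        Secondary) echo "crm_master -v 75"  ;;  # healthy slave: promotable
        *)         echo "crm_master -D"     ;;  # failed/unknown: delete the preference
    esac
}
# A real RA would execute the command (typically from its monitor
# action) instead of echoing it, so the CRM always has a fresh value.
master_pref_cmd Primary
master_pref_cmd Unknown
```

Because the preference is refreshed on every monitor, a recovered node re-announces itself and the CRM, not the human, decides whether a promotion is warranted.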
We have the feeling that we do not quite understand how this works, and we still do not know how to achieve our goal:
- drbd on both nodes, one (initially preferably odin) in Master state
adding a rsc_location constraint with role=Master would achieve this
- in case of failures on the Master node, promote the other node to become master
- in case of failures on the Slave node, let heartbeat know that something is wrong (that works with the two monitors now), but do nothing else
- if a failed node comes back (and the other node is running OK and has state Master), the returning node should not become Master
that's up to the RA to decide - if the RA gives us the right values, we can do this
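The role=Master location constraint mentioned earlier might look like this; the ids, the score of 100, and the node name odin are assumptions, and the exact rule syntax may differ between CRM versions:

```xml
<!-- prefer odin for the Master role only; a finite score (rather than
     INFINITY) lets the RA's crm_master values still win after a failover -->
<rsc_location id="drbd0_master_location" rsc="ms-drbd0">
  <rule id="prefer_odin_master" role="Master" score="100">
    <expression id="prefer_odin_master_expr" attribute="#uname"
                operation="eq" value="odin"/>
  </rule>
</rsc_location>
```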
So, we want the location preference to be applied only at heartbeat startup.
That's unlikely to be explicitly supported.
We did some tests:
- if we drop the rule for the Master preference on odin, the non-autofailback behaviour works fine. This preference isn't that important, but we want to add more resources and dependencies later, and we feel that if we can't get this relatively simple thing to work, we'll have far more problems later.
- We then tried small values for the master preference rule (50, then 10) and had auto-failback again.
We monitored the score values of the resources using
/usr/local/sbin/ptest -L -VVVVVV 2>&1 | grep assign_node
and made these observations:
- the values which occur here (76, 11, 6, ...) do not seem to come from our cib.xml!
they're a combination of stickiness and the RA preferences set with crm_master (for example, a crm_master value of 75 plus a stickiness of 1 would show up as a score of 76).
- the default-resource-stickiness value of INFINITY that we have in our cib.xml never makes it into the score values for the drbd resources!
The latter strengthens this conjecture (?): the observed behaviour suggests that either default_resource_stickiness does not apply to a multi-state resource, or that it only distinguishes between Stopped and Started, not between Master and Slave.
Thanks for your help!
Klemens
--
Klemens Kittan
Systemadministrator
Uni-Potsdam, Inst. f. Informatik
August-Bebel-Str. 89
14482 Potsdam
Tel. : +49-331-977/3125
Fax. : +49-331-977/3122
eMail : [EMAIL PROTECTED]
gpg --recv-keys --keyserver wwwkeys.de.pgp.net 6EA09333
<cibadmin.txt>
_______________________________________________
Linux-HA mailing list
[email protected]
http://lists.linux-ha.org/mailman/listinfo/linux-ha
See also: http://linux-ha.org/ReportingProblems