Hello,
I just realized why this is happening: I had an opt-out cluster.
I switched it to an opt-in cluster, added some rules, and it seems to work fine now.
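(For anyone hitting the same issue: switching to an opt-in cluster means setting the symmetric-cluster property to false, after which resources may only run on nodes that a positive location rule explicitly allows. A minimal sketch, assuming crm shell syntax; the constraint name below is hypothetical:

property symmetric-cluster=false
# Hypothetical rule: without it, gr_mysql_master_primary now runs nowhere
location loc_allow_primary_on_node_b gr_mysql_master_primary 100: Node_B

With symmetric-cluster=false, the default score for every node drops from 0 to -INFINITY, so resources can no longer drift onto nodes you never mentioned.)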
Thank you.
--
Maxim Ianoglo
On Wed, 18 May 2011 08:03:08 +0300
Maxim Ianoglo dot...@gmail.com wrote:
Hello,
I have: Node_A, Node_B, Node_C
I have a MySQL resource that should run on Node_A (but with a VIP) and on Node_B.
Node_C runs some other resources, but that is not important now.
The monitor operation's on-fail option for the resource is set to standby.
I stopped MySQL on Node_A, so the MySQL resource and the VIP from Node_A went to Node_B.
I then restarted Heartbeat on Node_A, moved MySQL and the VIP back to Node_B, and the MySQL
resource from Node_A moved to Node_C, even though I have a location constraint for this
resource that says to keep this service on Node_A.
Resource definitions:
group gr_mysql_master_primary VIP_10.10.1.235 mysql_master_primary notif_gr_mysql_master_primary \
    meta ordered=true collocated=true is-managed=true target-role=Started
group gr_mysql_master_secondary mysql_master_secondary notif_gr_mysql_master_secondary \
    meta ordered=true collocated=true is-managed=true target-role=Started
Location Constraints:
location loc_gr_mysql_master_primary_default gr_mysql_master_primary rule 100: #uname eq Node_B
location loc_gr_mysql_master_primary_failover gr_mysql_master_primary rule 50: #uname eq Node_A
location loc_gr_mysql_master_secondary gr_mysql_master_secondary rule inf: #uname eq Node_A
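[Editor's note: in an opt-out (symmetric) cluster, every node implicitly scores 0 for every resource, so a group with no rule mentioning Node_C can still be placed there once its preferred node is unavailable (e.g. while Node_A is in standby after the on-fail event). One way to prevent that, sketched in crm shell syntax with a hypothetical constraint name, is an explicit -inf rule:

# Hypothetical: forbid the secondary group from ever running on Node_C
location loc_gr_mysql_master_secondary_ban gr_mysql_master_secondary \
    rule -inf: #uname eq Node_C

The alternative, as noted in the follow-up, is to make the whole cluster opt-in with symmetric-cluster=false.]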
Is this normal behavior?
Thank you.
--
Maxim Ianoglo
___
Linux-HA mailing list
Linux-HA@lists.linux-ha.org
http://lists.linux-ha.org/mailman/listinfo/linux-ha
See also: http://linux-ha.org/ReportingProblems