I have stickiness working on my two-node setup. I tried to find a difference between our setups but couldn't spot any; I have attached my "crm configure show".

Aside from the stickiness level, I believe you can set a preferred location for a resource. A preferred-location setting might be overriding your stickiness setting.
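For reference, a crm shell location constraint looks roughly like the sketch below (the constraint id "loc-postfix-hasb1" is just an illustrative name, not something from your config). A score of inf (INFINITY) always outweighs a stickiness of 100, so the resource would migrate back to hasb1 every time that node comes online:

    # hypothetical constraint pinning postfix to hasb1
    location loc-postfix-hasb1 postfix inf: hasb1

Even a finite score higher than your stickiness (say 200) would produce the same fail-back behaviour, since the placement decision just compares the totals.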
I don't know if you are aware of the Linux Cluster Management Console (LCMC). I have found that it isn't a great tool for editing a cluster, but it is great at giving you an overview of your cluster. I would suggest installing it and then adding your servers to it; it will autodetect how everything is set up. You can then go through the resources to make sure Host Location is blank. If it is set to Always for hasb1, that may be the issue.

William

-----Original Message-----
From: linux-ha-boun...@lists.linux-ha.org [mailto:linux-ha-boun...@lists.linux-ha.org] On Behalf Of Kevin F. La Barre
Sent: Wednesday, October 10, 2012 5:44 PM
To: General Linux-HA mailing list
Subject: [Linux-HA] Stickiness confusion

I'm testing stickiness in a sandbox that consists of 3 nodes. The configuration is very simple, but it's not behaving the way I think it should.

My configuration:

# crm configure show
node hasb1
node hasb2
node hasb3
primitive postfix lsb:postfix \
        op monitor interval="15s"
property $id="cib-bootstrap-options" \
        dc-version="1.1.7-6.el6-148fccfd5985c5590cc601123c6c16e966b85d14" \
        cluster-infrastructure="openais" \
        expected-quorum-votes="3" \
        no-quorum-policy="ignore" \
        stonith-enabled="false" \
        last-lrm-refresh="1349902760" \
        maintenance-mode="false" \
        is-managed-default="true"
rsc_defaults $id="rsc-options" \
        resource-stickiness="100"

The test resource "postfix" lives on hasb1.

# crm_simulate -sL

Current cluster status:
Online: [ hasb1 hasb3 hasb2 ]

 postfix        (lsb:postfix):  Started hasb1

Allocation scores:
native_color: postfix allocation score on hasb1: 100
native_color: postfix allocation score on hasb2: 0
native_color: postfix allocation score on hasb3: 0

On hasb1 I'll kill the corosync process. The resource moves over to hasb2, as expected.

# crm status
============
Last updated: Wed Oct 10 22:35:23 2012
Last change: Wed Oct 10 21:30:12 2012 via crm_resource on hasb2
Stack: openais
Current DC: hasb2 - partition with quorum
Version: 1.1.7-6.el6-148fccfd5985c5590cc601123c6c16e966b85d14
3 Nodes configured, 3 expected votes
1 Resources configured.
============

Online: [ hasb3 hasb2 ]
OFFLINE: [ hasb1 ]

 postfix        (lsb:postfix):  Started hasb2

# crm_simulate -sL

Current cluster status:
Online: [ hasb3 hasb2 ]
OFFLINE: [ hasb1 ]

 postfix        (lsb:postfix):  Started hasb2

Allocation scores:
native_color: postfix allocation score on hasb1: 0
native_color: postfix allocation score on hasb2: 100
native_color: postfix allocation score on hasb3: 0

Now I'll start corosync & pacemaker. The postfix resource moves back to hasb1 even though we have default stickiness.

# crm status
============
Last updated: Wed Oct 10 22:37:00 2012
Last change: Wed Oct 10 21:30:12 2012 via crm_resource on hasb2
Stack: openais
Current DC: hasb2 - partition with quorum
Version: 1.1.7-6.el6-148fccfd5985c5590cc601123c6c16e966b85d14
3 Nodes configured, 3 expected votes
1 Resources configured.
============

Online: [ hasb1 hasb3 hasb2 ]

 postfix        (lsb:postfix):  Started hasb1

# crm_simulate -sL

Current cluster status:
Online: [ hasb1 hasb3 hasb2 ]

 postfix        (lsb:postfix):  Started hasb1

Allocation scores:
native_color: postfix allocation score on hasb1: 100
native_color: postfix allocation score on hasb2: 0
native_color: postfix allocation score on hasb3: 0

What am I missing? I'm pulling my hair out - any help would be greatly appreciated.
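One more guess, for what it's worth: your "crm status" output says "Last change ... via crm_resource on hasb2", which can mean someone ran "crm resource move/migrate" (or crm_resource -M) at some point. Those commands work by inserting a location constraint (typically named cli-prefer-<resource> or cli-standby-<resource>) into the CIB, and it stays there until explicitly cleared, silently overriding stickiness. A rough way to check and clean up, assuming the resource name postfix from your config:

    # list any location constraints, including cli-* leftovers
    crm configure show | grep location

    # or query the constraints section of the CIB directly
    cibadmin -Q -o constraints

    # if a cli-prefer/cli-standby constraint for postfix shows up, clear it
    crm resource unmigrate postfix

If neither command shows a location constraint, then this isn't the cause and the LCMC overview above is the next thing I'd try.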
Corosync 1.4.1
Pacemaker 1.1.7
CentOS 6.2

-Kevin

_______________________________________________
Linux-HA mailing list
Linux-HA@lists.linux-ha.org
http://lists.linux-ha.org/mailman/listinfo/linux-ha
See also: http://linux-ha.org/ReportingProblems