Here's your reason:

Aug  5 12:31:35 vm-ha-9 pengine: [13280]: ERROR: native_add_running:
Resource lsb::nfslock:resource_nfslock appears to be active on 2
nodes.
Aug  5 12:31:35 vm-ha-9 pengine: [13280]: ERROR: See
http://linux-ha.org/v2/faq/resource_too_active for more information.
Aug  5 12:31:35 vm-ha-9 pengine: [13280]: ERROR: native_add_running:
Resource lsb::nfs:resource_nfs appears to be active on 2 nodes.
Aug  5 12:31:35 vm-ha-9 pengine: [13280]: ERROR: See
http://linux-ha.org/v2/faq/resource_too_active for more information.

[snip]

Aug  5 12:31:35 vm-ha-9 pengine: [13280]: notice: native_print:
resource_nfslock        (lsb:nfslock)
Aug  5 12:31:35 vm-ha-9 pengine: [13280]: notice: native_print:         0 :
vm-ha-9.mydomain.com
Aug  5 12:31:35 vm-ha-9 pengine: [13280]: notice: native_print:         1 :
vm-ha-11.mydomain.com
Aug  5 12:31:35 vm-ha-9 pengine: [13280]: notice: native_print:
resource_nfs    (lsb:nfs)
Aug  5 12:31:35 vm-ha-9 pengine: [13280]: notice: native_print:         0 :
vm-ha-9.mydomain.com
Aug  5 12:31:35 vm-ha-9 pengine: [13280]: notice: native_print:         1 :
vm-ha-11.mydomain.com


Either you're starting those two resources at boot time (outside the
cluster's control), or the RA (resource agent) is busted.
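If it's the former, a quick way to check is to look at whether the init
scripts are enabled on each node. A sketch, assuming a RHEL-style system
with chkconfig (which matches the lsb resources in your config):

```shell
# See whether init starts these services at boot on this node
chkconfig --list nfs
chkconfig --list nfslock

# Cluster-managed services must NOT be started by init;
# disable them and let heartbeat do the starting:
chkconfig nfs off
chkconfig nfslock off
```

Run that on every node in the cluster, not just the one hosting the
resources.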

On Tue, Aug 5, 2008 at 19:46, Randy Evans <[EMAIL PROTECTED]> wrote:
> On Mon, Aug 4, 2008 at 6:55 AM, Andrew Beekhof <[EMAIL PROTECTED]> wrote:
>
>> If you're using stickiness=INFINITY, then this really shouldn't happen.
>> Unless you've also set the resource's node preference to INFINITY as well.
>>
>> Can you include some logs and configuration details?
>> _______________________________________________
>
>
> The cib file "cib_no_stop-start.xml" is from a 3-node cluster that
> does not restart the resources when a node other than the one hosting
> the resources is stopped then started ("service heartbeat stop", then
> "service heartbeat start").
>
> The cib file "cib_does_stop-start.xml" is from a 3-node cluster that
> does restart the resources when a node other than the one hosting the
> resources is stopped then started.
>
> Both clusters are running a single resource group (different services though).
> The groups in both clusters have resource_stickiness=INFINITY and
> resource_failure_stickiness=10000.
>
>
> The cluster that "does_stop-start" has a location constraint set to
> prevent it from running on one of the nodes.  This cluster is running
> DRBD and can only switch between two nodes (Primary/Secondary), so I
> suppose that is what is causing it.
>
>
> The "messages" file is from /var/log/messages on the machine that was
> hosting the resources when they stopped and restarted.
>
>
> I had originally thought both clusters exhibited this behavior but I
> was mistaken.
>
> Thanks
> Randy
>
> _______________________________________________
> Linux-HA mailing list
> [email protected]
> http://lists.linux-ha.org/mailman/listinfo/linux-ha
> See also: http://linux-ha.org/ReportingProblems
>
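For reference, the stickiness settings and location constraint Randy
describes would look roughly like this in a v2-era CIB. This is a sketch
only: the ids, group name, and node name are made up, not taken from his
actual cib files:

```xml
<!-- Group-level stickiness: keep the group where it is once placed -->
<group id="group_1">
  <meta_attributes id="ma-group_1">
    <attributes>
      <nvpair id="ma-group_1-stick" name="resource_stickiness" value="INFINITY"/>
      <nvpair id="ma-group_1-fail" name="resource_failure_stickiness" value="10000"/>
    </attributes>
  </meta_attributes>
  <!-- group members (nfs, nfslock, ...) go here -->
</group>

<!-- Location constraint keeping the group off the non-DRBD node -->
<rsc_location id="loc-not-third-node" rsc="group_1">
  <rule id="loc-not-third-node-rule" score="-INFINITY">
    <expression id="loc-not-third-node-expr" attribute="#uname"
                operation="eq" value="some-third-node.mydomain.com"/>
  </rule>
</rsc_location>
```

With score="-INFINITY" the group can never run on that node, which is
why only the two DRBD nodes are candidates; that constraint alone
shouldn't cause a restart when an unrelated node leaves and rejoins,
though, as long as the stickiness above is in effect.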