The question in brief:

Is it possible to configure heartbeat such that a managed resource
(an IPaddr) is affected by the status of an unmanaged resource?

In theory I could set up a process outside of heartbeat that
periodically checks the status of the service in question and
updates an attribute in the CIB. I could then add a constraint
which makes it "highly undesirable" for the IPaddr to be placed on
a node where that service is broken. Is this the best way of going
about it?
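
Concretely, I'm picturing something like this running from cron on
every node (untested; the attribute name "squid_ok" is something I
made up, and I'm going from memory on the attrd_updater options):

    #!/bin/sh
    # Update a transient node attribute in the CIB status section
    # according to whether the local squid answers a health check.
    if squid -k check >/dev/null 2>&1; then
        attrd_updater -n squid_ok -v 1
    else
        attrd_updater -n squid_ok -v 0
    fi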


The question in full:

I've got a working squid + IPaddr setup with heartbeat 2.1.3 in
a two-node cluster.

There are IP addresses bound to the loopback interface on both nodes
which squid listens on, and which the firewall forwards connections
to. The firewall routes the subnet these addresses live on to the IP
address managed by heartbeat, so whichever node has the magic IP is
the currently active proxy.
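
The addresses here are made up, but for concreteness the setup is
roughly:

    # on both nodes: service address on the loopback, squid bound to it
    ip addr add 192.0.2.10/32 dev lo
    # in squid.conf: http_port 192.0.2.10:3128

    # on the firewall: route the service subnet via the
    # heartbeat-managed address, 10.0.0.100 in this example
    ip route add 192.0.2.0/28 via 10.0.0.100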

With the current setup, squid is managed by heartbeat, which means
heartbeat stops it on the inactive node. This results in a delay of
a few seconds when the active node changes, as heartbeat first
moves the IP address, then stops squid on the old host and starts
it on the new one.
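
The relevant part of the configuration is essentially a group along
these lines (ids invented for this mail, and written from memory):

    <group id="proxy_group">
      <primitive id="proxy_ip" class="ocf" provider="heartbeat"
                 type="IPaddr">
        <instance_attributes id="proxy_ip_ia">
          <attributes>
            <nvpair id="proxy_ip_addr" name="ip" value="10.0.0.100"/>
          </attributes>
        </instance_attributes>
      </primitive>
      <primitive id="squid" class="lsb" type="squid"/>
    </group>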

I would like to avoid this delay, as it's not actually necessary
for squid to be stopped on the inactive node. However, I do want
heartbeat to monitor whether squid is running on a node, so it can
avoid moving the IP address to a node which isn't actually running
squid. Squid has its own crash recovery, so if it's down at all,
something is seriously wrong, and heartbeat is unlikely to get it
running again just by running the startup script for it.

I've looked at the "is_managed" attribute, but it appears that
setting a resource to unmanaged prevents any actions, including
monitor, from being performed on it. To be precise, it seems to
check the status once at startup, but never again.
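
For reference, I set it with something like this (exact syntax
from memory):

    crm_resource -r squid -p is_managed -v false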

Is there any way of configuring HA so that it will periodically
check the status of squid on all nodes (regardless of which node
currently holds the IPaddr resource), letting me use squid's
status as an input to resource constraints?
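
If the attribute approach from the top of this mail is sensible, I
imagine the constraint would be something like this (untested, names
invented; I've used a large negative score rather than -INFINITY, to
make the placement "highly undesirable" rather than forbidden):

    cibadmin -C -o constraints -X '
      <rsc_location id="ip_needs_squid" rsc="proxy_ip">
        <rule id="ip_needs_squid_rule" score="-10000">
          <expression id="ip_needs_squid_expr"
                      attribute="squid_ok" operation="ne" value="1"/>
        </rule>
      </rsc_location>'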

Or, am I completely insane to want to do this in the first place?
My reasoning is that squid will be running on both nodes at all
times, unless there's something horribly wrong or the administrator
has decided to stop it for some reason. Therefore, it will probably
just create confusion if heartbeat takes it upon itself to start and
stop squid. In addition, I would prefer to minimise downtime, and
since there's no harm in having squid running on an inactive node,
this seems an easy way to shave a few seconds off the failover
delay.