Hi everyone,

I'm new to Heartbeat, and I don't speak English very well yet.

I'm getting started with two virtual machines (VMware) running CentOS
5.2 and Heartbeat 2.1.3. I'm using the heartbeat 1.x style, without
CRM or STONITH configuration, and I'm managing the httpd and squid
services. I don't understand why httpd and squid are started twice
when one of my nodes starts to acquire the resources from the other
node.

My ha.cf, authkeys and haresources are identical on both nodes.
This is my ha.cf file:

keepalive 2
deadtime 64
warntime 16
initdead 124
udpport 694
auto_failback off
node ha1.cdoctor.com.pe ha2.cdoctor.com.pe
ping 192.168.99.1 172.16.99.1
respawn hacluster /usr/lib/heartbeat/ipfail
crm off

This is my haresources file:
ha1.cdoctor.com.pe IPaddr::172.16.99.100/24/eth0 httpd squid

Both nodes have the following network configuration:

* node ha2.cdoctor.com.pe
eth0: 172.16.99.3
eth1: 10.1.1.2

* node ha1.cdoctor.com.pe
eth0: 172.16.99.2
eth1: 10.1.1.1

This is part of my ha-log file:
http://pastebin.com/m3010f8b7

I can't understand why httpd and squid are started again after they
have already been started. Is there something wrong with my
configuration? This problem causes Heartbeat to give up due to the
failure of squid.

If I run "/etc/init.d/httpd start" while httpd is already running, I
get exit code 0, but if I run "/etc/init.d/squid start" while squid is
already running, I get exit code 1... so Heartbeat starts having
problems.
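
For reference, this is just the check from the paragraph above written
out as commands; I ran it by hand on the node that currently holds the
resources, with both services already running:

/etc/init.d/httpd start ; echo $?    # prints 0 (start succeeds even though httpd is running)
/etc/init.d/squid start ; echo $?    # prints 1 (start fails when squid is already running)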

Thanks :(
_______________________________________________
Linux-HA mailing list
Linux-HA@lists.linux-ha.org
http://lists.linux-ha.org/mailman/listinfo/linux-ha
See also: http://linux-ha.org/ReportingProblems
