Oh I see, that's the problem then. My resources tomcat21-node1 and
tomcat21-node2 are exactly the same: I have configured the same resource
twice so that it is started once on node 1 and once on node 2. The monitor
status URLs are also identical for both:
<instance_attributes id="tomcat1-node1_attr">
  <nvpair id="tomcat_1_node1-java_home" name="java_home" value="/opt/j2sdk1.4.2_03_tomcat3"/>
  <nvpair id="tomcat_1_node1-catalina_home" name="catalina_home" value="/opt/jakarta/tomcat-3"/>
  <nvpair id="tomcat_1_node1-statusurl" name="statusurl" value="http://localhost:8083/studies/crf/test_1294_we"/>
  <nvpair id="tomcat_1_node1-catalina_pid" name="catalina_pid" value="/opt/jakarta/tomcat-3/logs/catalina.pid"/>
  <nvpair id="tomcat_1_node1-startup-log" name="script_log" value="/var/log/www/jakarta/tomcat3/startup.log"/>
  <nvpair id="tomcat_1_node1-name" name="tomcat_name" value="tomcat1"/>
  <nvpair id="tomcat_1_node1-regex" name="testregex" value=".*SERVLET OK.*DATABASE OK"/>
</instance_attributes>
and
<instance_attributes id="tomcat1-node2_attr">
  <nvpair id="tomcat_1_node2-java_home" name="java_home" value="/opt/j2sdk1.4.2_03_tomcat3"/>
  <nvpair id="tomcat_1_node2-catalina_home" name="catalina_home" value="/opt/jakarta/tomcat-3"/>
  <nvpair id="tomcat_1_node2-statusurl" name="statusurl" value="http://localhost:8083/studies/crf/test_1294_we"/>
  <nvpair id="tomcat_1_node2-catalina_pid" name="catalina_pid" value="/opt/jakarta/tomcat-3/logs/catalina.pid"/>
  <nvpair id="tomcat_1_node2-startup-log" name="script_log" value="/var/log/www/jakarta/tomcat3/startup.log"/>
  <nvpair id="tomcat_1_node2-name" name="tomcat_name" value="tomcat1"/>
  <nvpair id="tomcat_1_node2-regex" name="testregex" value=".*SERVLET OK.*DATABASE OK"/>
</instance_attributes>
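For illustration, one way to make the two instances distinguishable would be to give each node's instance its own statusurl and catalina_pid. The port and path below are only placeholders to show the idea, not values from my setup:

```xml
<instance_attributes id="tomcat1-node2_attr">
  <!-- hypothetical per-node values so a probe on the "wrong" node
       no longer sees the other instance as active -->
  <nvpair id="tomcat_1_node2-statusurl" name="statusurl"
          value="http://localhost:8084/studies/crf/test_1294_we"/>
  <nvpair id="tomcat_1_node2-catalina_pid" name="catalina_pid"
          value="/opt/jakarta/tomcat-3-node2/logs/catalina.pid"/>
</instance_attributes>
```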
If I only make the tomcat_name unique, that won't help, right? But maybe I
could modify the monitor so it checks whether uname -n matches the host
part of tomcat_name (tomcat1-hostname) and then returns running or not?
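Something like this minimal sketch is what I have in mind for the monitor. It assumes the agent receives tomcat_name as OCF_RESKEY_tomcat_name and that the name encodes the node as "tomcat1-<hostname>" (both assumptions, not how my agent works today):

```shell
#!/bin/sh
# Hypothetical host-aware monitor fragment for the tomcat resource agent.
# Assumption: tomcat_name is passed in as OCF_RESKEY_tomcat_name and has
# the form "<name>-<hostname>", e.g. "tomcat1-www1test".
OCF_SUCCESS=0
OCF_NOT_RUNNING=7

host_aware_monitor() {
    # Strip everything up to the first "-" to get the intended hostname.
    expected_host="${OCF_RESKEY_tomcat_name#*-}"
    if [ "$expected_host" != "$(uname -n)" ]; then
        # This instance does not belong on this node: report "not running"
        # so the probe on the other node stops finding it active.
        return $OCF_NOT_RUNNING
    fi
    # ...the real status-URL / pid-file check would follow here...
    return $OCF_SUCCESS
}
```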
Thanks
-----Original Message-----
From: [EMAIL PROTECTED]
[mailto:[EMAIL PROTECTED] On Behalf Of Andrew
Beekhof
Sent: Friday, November 7, 2008 08:10
To: General Linux-HA mailing list
Subject: Re: [Linux-HA] Problem with Opt-In Cluster after upgrade to
2.99.2 + Pacemaker 1.0
We didn't start them anywhere.
When the cluster starts, it goes looking for any resources that were
already active:
pengine[12646]: 2008/11/06_16:44:01 WARN: unpack_rsc_op: IPaddr_monitor_0 found active IPaddr on www2test
pengine[12646]: 2008/11/06_16:44:01 WARN: unpack_rsc_op: tomcat21-node1_monitor_0 found active tomcat21-node1 on www2test
pengine[12646]: 2008/11/06_16:44:01 WARN: unpack_rsc_op: tomcat21-node2_monitor_0 found active tomcat21-node2 on www2test
pengine[12646]: 2008/11/06_16:44:01 WARN: unpack_rsc_op: apache2_monitor_0 found active apache2 on www2test
pengine[12646]: 2008/11/06_16:44:01 WARN: unpack_rsc_op: tomcat1-node1_monitor_0 found active tomcat1-node1 on www2test
pengine[12646]: 2008/11/06_16:44:01 WARN: unpack_rsc_op: tomcat1-node2_monitor_0 found active tomcat1-node2 on www2test
It found lots... were they started at boot time by the OS?
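To check that, something along these lines might help. It is only a sketch for SysV-style init; the directory is a parameter so it can be pointed anywhere, and it defaults to /etc where rc*.d usually lives:

```shell
#!/bin/sh
# List SysV "start" links for a service, to see whether the OS init
# system would launch it at boot. The rc directory is parameterized
# (default /etc); service name is matched as a substring.
find_boot_links() {
    svc="$1"
    rcdir="${2:-/etc}"
    find "$rcdir"/rc*.d -name "S*${svc}*" 2>/dev/null
}

# e.g.: find_boot_links tomcat
```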
On Thu, Nov 6, 2008 at 16:46, Ehlers, Kolja <[EMAIL PROTECTED]> wrote:
> Hi all,
>
> Since the upgrade to the new version, my cluster is not working as
> expected.
> I have set symmetric-cluster to false and have these location rules:
>
> <rsc_location id="loc-2" rsc="tomcat1-node1" node="www1test" score="INFINITY"/>
> <rsc_location id="loc-3" rsc="tomcat1-node1" node="www2test" score="-INFINITY"/>
> <rsc_location id="loc-4" rsc="tomcat1-node2" node="www2test" score="INFINITY"/>
> <rsc_location id="loc-5" rsc="tomcat1-node2" node="www1test" score="-INFINITY"/>
> <rsc_location id="loc-6" rsc="tomcat21-node1" node="www1test" score="INFINITY"/>
> <rsc_location id="loc-7" rsc="tomcat21-node1" node="www2test" score="-INFINITY"/>
> <rsc_location id="loc-8" rsc="tomcat21-node2" node="www2test" score="INFINITY"/>
> <rsc_location id="loc-9" rsc="tomcat21-node2" node="www1test" score="-INFINITY"/>
>
> So I have:
>
> tomcat1-node1 (ocf::cr:tomcat1): Started www1test
> tomcat21-node1 (ocf::cr:tomcat): Started www1test
> tomcat1-node2 (ocf::cr:tomcat1): Started www2test
> tomcat21-node2 (ocf::cr:tomcat): Started www2test
>
> 1. If I stop one node everything is fine and I have:
>
> tomcat1-node2 (ocf::cr:tomcat1): Started www2test
> tomcat21-node2 (ocf::cr:tomcat): Started www2test
>
> 2. But if I bring node 1 back up, weird things happen. All resources are
> started on node 2 now:
>
> tomcat1-node1 (ocf::cr:tomcat1): Started www2test
> tomcat21-node1 (ocf::cr:tomcat): Started www2test
> tomcat1-node2 (ocf::cr:tomcat1): Started www2test
> tomcat21-node2 (ocf::cr:tomcat): Started www2test
>
> 3. And then the monitors fail:
>
> tomcat1-node1 (ocf::cr:tomcat1): Started www1test
> tomcat1-node2 (ocf::cr:tomcat1): Started www2test FAILED
> tomcat21-node2 (ocf::cr:tomcat): Started www2test FAILED
>
> Failed actions:
> tomcat21-node2_monitor_5000 (node=www2test, call=192, rc=7): complete
> tomcat1-node2_monitor_5000 (node=www2test, call=191, rc=7): complete
>
> After that everything returns to normal, but it is unacceptable for the
> resources to be restarted on the untouched node. I have attached the log
> of what happens after step 2.
>
> Thanks
>
>
> Geschäftsführung: Dr. Michael Fischer, Reinhard Eisebitt
> Amtsgericht Köln HRB 32356
> Steuer-Nr.: 217/5717/0536
> Ust.Id.-Nr.: DE 204051920
> --
> This email transmission and any documents, files or previous email
> messages attached to it may contain information that is confidential or
> legally privileged. If you are not the intended recipient or a person
> responsible for delivering this transmission to the intended recipient,
> you are hereby notified that any disclosure, copying, printing,
> distribution or use of this transmission is strictly prohibited. If you
> have received this transmission in error, please immediately notify the
> sender by telephone or return email and delete the original transmission
> and its attachments without reading or saving in any manner.
>
>
> _______________________________________________
> Linux-HA mailing list
> [email protected]
> http://lists.linux-ha.org/mailman/listinfo/linux-ha
> See also: http://linux-ha.org/ReportingProblems
>