Markus M. wrote:
is there a known problem with IPaddr(2) when defining many (in my case:
11) IP resources which are started/stopped concurrently?
Well... some further investigation revealed that it seems to be a
problem with the way the IP addresses are assigned.
When looking at the
Hello,
is there a known problem with IPaddr(2) when defining many (in my case:
11) IP resources which are started/stopped concurrently?
In my case (CentOS5, latest pacemaker) the resources are starting up
fine, but when shutting down pacemaker (also during a cluster switch),
sometimes one or
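A setup like the one described would look roughly like this in crm shell syntax (only the IPaddr2 agent and the count of resources come from the post; the resource names, addresses, and netmask below are invented for illustration):

```shell
# Hypothetical sketch: two of the eleven IPaddr2 primitives
crm configure primitive vip1 ocf:heartbeat:IPaddr2 \
    params ip=10.0.0.101 cidr_netmask=24 \
    op monitor interval=10s
crm configure primitive vip2 ocf:heartbeat:IPaddr2 \
    params ip=10.0.0.102 cidr_netmask=24 \
    op monitor interval=10s
```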
Andrew Beekhof wrote:
Unfortunately it doesn't fix the problem. Heartbeat still hangs:
The pacemaker patch won't affect heartbeat-based clusters. Sorry.
Maybe I wasn't very clear in my communication: we _are_ using pacemaker
together with heartbeat for the cluster communication.
I applied
Lars Ellenberg wrote:
I've seen this too,
a few times.
>...
And I don't yet have a reliable way to reproduce it, either.
If you have, let us know!
We are using a simple shell script which executes /etc/init.d/heartbeat
start/stop with different delays between start and stop (starting with 60
sec
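A minimal sketch of such a stress loop, assuming a shape for the script (the original was not posted; the init-script path and the 60-second starting delay come from the message, while the step size and iteration count are invented):

```shell
#!/bin/sh
# Repeatedly start and stop heartbeat, widening the delay between
# the two actions on every round.
stress_cycle() {
    hb=$1      # init script to drive, e.g. /etc/init.d/heartbeat
    delay=$2   # seconds to wait between start and stop (post says: 60)
    n=$3       # number of start/stop rounds
    i=0
    while [ "$i" -lt "$n" ]; do
        "$hb" start
        sleep "$delay"
        "$hb" stop
        delay=$((delay + 30))   # step size is an assumption
        i=$((i + 1))
    done
}

# Real run (needs root):
#   stress_cycle /etc/init.d/heartbeat 60 20
```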
Hello,
sometimes "heartbeat stop" seems to hang (latest packages from
clusterlabs.org, RHEL5 x86_64, 2-node cluster with only one node running).
The last lines from ha-debug are like this:
Feb 22 12:52:48 dbprod21 ccm: [24053]: info: client (pid=24058) removed
from ccm
Feb 22 12:52:48 dbprod2
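When a shutdown hangs like this, one quick generic check (not from the thread) is how many of heartbeat's child daemons are still alive; the process names below are the usual heartbeat/pacemaker daemons of that era:

```shell
# Count heartbeat-related processes that have not exited yet.
left=$(ps ax | grep -E 'heartbeat|ccm|crmd|cib|lrmd|attrd|stonithd' \
        | grep -cv grep || true)
echo "processes still running: $left"
```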
Hello,
Dejan Muhamedagic wrote:
>> returning the value of 100 seconds for the stop action? Is there
>> another place to set the timeout for the stop action of this ra?
>Yes, in the cluster configuration. Like this:
Thank you, I see, and it works now!
This was really a RTFM question, sorry. But
Dejan Muhamedagic wrote:
Operations' defaults (advisory minimum):
    stop timeout=100
So it seems for the "stop" action there is a timeout of 100 seconds
defined. But at cluster shutdown I can see this in the ha-debug log:
It says above that it's "advisory minimum" (the wording shou
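The advisory value in the meta-data configures nothing by itself; the operation timeout has to be set on the resource in the cluster configuration. A sketch in crm shell syntax, reusing the resource agent named in this thread (the monitor values are purely illustrative, the stop timeout echoes the advisory minimum above):

```shell
crm configure primitive cluster_oracle ocf:heartbeat:cluster_oracle \
    op stop timeout=100s \
    op monitor interval=30s timeout=60s
```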
Hello,
I have a question about the metadata returned by an OCF resource agent
using the "meta-data" command, and about the behaviour of the cluster.
When checking the resource agent's metadata using crm, I get this:
# crm
crm(live)# ra
crm(live)ra# meta cluster_oracle ocf
bla (ocf:heartbeat:cluster_oracle
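For reference, an OCF agent's `meta-data` action prints an XML document, and the timeouts in its `<actions>` section are exactly those advisory hints. A hedged skeleton of the general shape (element names follow the OCF resource agent API; the values are only examples):

```xml
<?xml version="1.0"?>
<resource-agent name="cluster_oracle">
  <version>1.0</version>
  <longdesc lang="en">Example skeleton only.</longdesc>
  <shortdesc lang="en">bla</shortdesc>
  <actions>
    <action name="start"   timeout="120" />
    <action name="stop"    timeout="100" />
    <action name="monitor" timeout="60" interval="30" />
  </actions>
</resource-agent>
```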
Dejan Muhamedagic wrote:
There should be /etc/init.d/logd.
Thanks, it's there. Some things are just too easy...
Regards
Markus
Andrew Beekhof wrote:
I personally just go with syslog.
Lars on the other-hand swears by ha_logd.
Up to you.
OK, but shouldn't ha_logd get started automatically by the heartbeat
start script? Is there a start script available for ha_logd? (Not that I
am unable to create one, but why re-invent the wheel?)
Hello,
i am using heartbeat+pacemaker. What is the best practice for logging
these days? Using ha_logd or syslog?
As I found out, ha_logd is not started automatically in
/etc/init.d/heartbeat, even if heartbeat is configured in
/etc/ha.d/ha.cf to use logd, leading to messages like
WARN:
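The two pieces involved here, as a sketch (the `use_logd` directive and both file paths are standard heartbeat ones; the comment wording is mine):

```shell
# /etc/ha.d/ha.cf: direct heartbeat's logging to ha_logd
use_logd yes

# logd is started by its own init script, not by heartbeat:
#   /etc/init.d/logd start
```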