# HG changeset patch
# User Florian Haas [EMAIL PROTECTED]
# Date 1214836739 -7200
# Node ID 2a324142157c3ee12fdb361918eb6b309454364b
# Parent 0ee5e0760c72e89b34eb889a812f72d48fa3efdd
An OCF RA capable of enabling and disabling network routes (using ip route),
named Route.
Please see
Hello,
Changes to earlier patch:
- Require clone_node_max=1 when running as a clone
- Describe use case in meta-data
- Require validate to complete successfully for all operations other
than usage and meta-data (thanks Dejan)
- Change validate return code from OCF_ERR_ARGS to OCF_ERR_INSTALLED
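The patch body is not shown in this excerpt. As a rough illustration only, a minimal agent along these lines could dispatch actions as below; the parameter names (destination, device, gateway) and helper functions are assumptions for the sketch, not the actual Route RA:

```shell
#!/bin/sh
# Sketch of an OCF-style agent managing a network route via `ip route`.
# Parameter and helper names are illustrative assumptions.

OCF_SUCCESS=0; OCF_ERR_GENERIC=1; OCF_ERR_INSTALLED=5; OCF_NOT_RUNNING=7

route_spec() {
    # Assemble the `ip route` arguments from the RA parameters.
    echo "${OCF_RESKEY_destination} dev ${OCF_RESKEY_device} via ${OCF_RESKEY_gateway}"
}

route_usage() {
    echo "usage: $0 {start|stop|monitor|validate-all|usage|meta-data}"
}

route_validate() {
    # Per the changelog above: return OCF_ERR_INSTALLED (not OCF_ERR_ARGS)
    # when the environment is unusable, e.g. the ip(8) binary is missing.
    command -v ip >/dev/null 2>&1 || return $OCF_ERR_INSTALLED
    return $OCF_SUCCESS
}

route_monitor() {
    ip route show $(route_spec) 2>/dev/null | grep -q . && return $OCF_SUCCESS
    return $OCF_NOT_RUNNING
}

route_start() { ip route replace $(route_spec) && return $OCF_SUCCESS; return $OCF_ERR_GENERIC; }
route_stop()  { ip route delete $(route_spec) 2>/dev/null; return $OCF_SUCCESS; }

# Per the changelog: validate must succeed before any action other than
# usage and meta-data is attempted.
if [ $# -gt 0 ]; then
    case "$1" in
        usage)              route_usage ;;
        validate-all)       route_validate ;;
        start|stop|monitor) route_validate && route_$1 ;;
        *)                  route_usage ;;
    esac
fi
```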
Sorry, log was missed out...
TA,
Ivan
On Mon, 2008-06-30 at 17:58 +1200, Ivan wrote:
Hi,
Sorry to make so much noise about this subject, but I am desperate to fix
my HASI cluster.
My suspicion is that the Xen RA is broken. When a VM gets migrated to
another node in a 2-node cluster the
On Fri, Jun 27, 2008 at 11:50, Nikola Ciprich [EMAIL PROTECTED] wrote:
ahhh. so you're using drbd8 then?
did you need any changes to the RA at all?
Nope, didn't need to change anything, everything works like a charm :-)
Unfortunately it seems that the fix you proposed has broken things further,
On Fri, Apr 4, 2008 at 08:25, Dominik Klein [EMAIL PROTECTED] wrote:
Lars Marowsky-Bree wrote:
On 2008-04-03T13:59:36, Dejan Muhamedagic [EMAIL PROTECTED] wrote:
Any crm* program is significantly slower on a non-DC node
regardless of whether something's happening in the cluster. It's
always
Hello,
I am trying to update my resources to include a stonith clone. I
can't update, even though the stonith clone is configured like the other
clone resources of the cluster. This is my config file:
<resources>
  <master_slave id="MySQL_Server">
    <instance_attributes id="mysql_server_1">
On Mon, Jun 30, 2008 at 09:24:42AM +0200, Andrew Beekhof wrote:
That's because the cluster can't promote any drbd instances to master.
The RA seems not to be setting a preference for being promoted (by
calling crm_master)
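Andrew's point is that a master/slave RA advertises promotability to the cluster by calling crm_master (-v sets a score, -D deletes it, -l sets its lifetime). A hedged sketch of how a monitor action might do this; the state-check helper is a made-up placeholder and the score of 100 is arbitrary:

```shell
# Illustrative only: how an RA's monitor action could publish a promotion
# preference. drbd_local_disk_is_uptodate is a hypothetical helper.
set_master_preference() {
    if drbd_local_disk_is_uptodate; then
        crm_master -l reboot -v 100   # eligible: advertise a promotion score
    else
        crm_master -l reboot -D       # not eligible: withdraw the score
    fi
}
```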
Mmm, well, I have to admit I don't understand much now :(. So is it a mistake
Mori-san fixed this :)
See attached.
It seems that the process spawned by Heartbeat keeps holding the crmd-lrmd
channel.
Thanks,
Junko
Lars, what do you think about having the IPC polling code do a wait()
on the farside PID?
Because heartbeat (which does the wait) seems to be able to notice
On 2008-06-30T18:48:42, Junko IKEDA [EMAIL PROTECTED] wrote:
Mori-san fixed this :)
See attached.
It seems that the process spawned by Heartbeat keeps holding the crmd-lrmd
channel.
Thanks for the patch, merged!
Regards,
Lars
--
Teamlead Kernel, SuSE Labs, Research and Development
Here is the error in the log:
cib[6886]: 2008/06/30_12:09:19 ERROR: No declaration for attribute
failstop-type of element primitive
cib[6886]: 2008/06/30_12:09:19 ERROR: activateCibXml: Updated CIB does
not validate against /usr/share/heartbeat/crm.dtd... ignoring
cib[6886]: 2008/06/30_12:09:19
Didn't read everything, but
<clone id="pingd">
  <instance_attributes id="pingd">
might ring a bell.
Regards
Dominik
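For reference, the hint is about the clone and its instance_attributes reusing the id "pingd"; id attributes in the CIB are XML IDs and must be unique across the document. An illustrative variant with distinct ids (element nesting and names here are assumptions, not Adrian's actual config):

```xml
<clone id="pingd_clone">
  <primitive id="pingd" class="ocf" provider="heartbeat" type="pingd">
    <instance_attributes id="pingd_attrs">
      <attributes/>
    </instance_attributes>
  </primitive>
</clone>
```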
___
Linux-HA mailing list
Linux-HA@lists.linux-ha.org
http://lists.linux-ha.org/mailman/listinfo/linux-ha
See also:
On Mon, Jun 30, 2008 at 11:48, Junko IKEDA [EMAIL PROTECTED] wrote:
Mori-san fixed this :)
See attached.
It seems that the process spawned by Heartbeat keeps holding the crmd-lrmd
channel.
good work!
Dominik Klein wrote:
Didn't read everything, but
<clone id="pingd">
  <instance_attributes id="pingd">
might ring a bell.
Thank you, but this is not the problem. I changed it as well to test, but
the problem is still there. I have this error message:
No declaration for attribute
Hi,
can you give a short outline how to include the heartbeat-snmp-subagent into
the main agent?
Can v1-clusters be monitored using snmp, too?
Kind regards, Nils
Hi,
I think this is rather an LVS question.
Do you have the vip-address up and running on your real servers?
What kind of lvs-method do you use?
btw - why do you need heartbeat for this?
Kind regards, Nils
On 2008-06-27T15:47:01, Andrew Beekhof [EMAIL PROTECTED] wrote:
A non-trivial number of lost messages :-(
Now keep in mind that CTS keeps the cluster in a constant state of upheaval
and that this was an 8-node cluster.
I go one step beyond what you recommend and keep the logfile on the CTS
On Mon, Jun 30, 2008 at 6:29 AM, Hildebrand, Nils, 232
[EMAIL PROTECTED] wrote:
Hi,
I think this is rather an lvs-question.
Do you have the vip-address up and running on your real servers?
The VIP address is configured on the real servers on lo:0.
What kind of lvs-method do you use?
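Since this thread is about direct routing: with LVS-DR the VIP on lo:0 usually has to be paired with ARP suppression, or the real servers will answer ARP for the VIP themselves. A sketch of the usual Linux 2.6 recipe; the VIP value and sysctl scope (conf.all) are illustrative choices:

```shell
# Illustrative LVS-DR real-server setup; the VIP value is an example.
configure_dr_real_server() {
    vip="$1"
    # Host address on the loopback alias so the real server accepts VIP traffic
    ip addr add "$vip/32" dev lo label lo:0
    # Don't answer ARP for addresses held on lo (arp_ignore=1) and use the
    # outgoing interface's primary address as the ARP source (arp_announce=2)
    sysctl -w net.ipv4.conf.all.arp_ignore=1
    sysctl -w net.ipv4.conf.all.arp_announce=2
}
```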
I am trying to get a very rough idea of how many simultaneous
connections (to apache servers) ldirectord could handle using direct
routing, assuming gigabit connections and decent hardware.
Unless you are running an Amazon or a Sourceforge, I wouldn't think you
would ever need to worry about
Hi,
I noticed a long time ago that the Basic sanity test doesn't work well
for older Linux versions.
I finally tracked it down (against heartbeat 2.1.3):
lrmd is not linked against libxml2 if it does not supply a certain XML
function (xmlReadMemory), though older versions of libxml2 do not
Why not just run 2+ instances of ldirectord, managing different
virtual IPs, and mapping them into one DNS name?
Like this:
www.xxx.com IN A ip1
www.xxx.com IN A ip2
www.xxx.com IN A ip3
ip1,ip2,ip3 are all ldirectord-managed virtual IPs. Since
Heartbeat can keep all these IPs always
The full set of options I'm using now is:
options { long_hostnames(off); sync(0); perm(0640); stats(3600);
check_hostname(no); dns_cache(yes); dns_cache_size(100);
log_fifo_size(4096); keep_hostname(yes); chain_hostnames(no); };
Very nice. Thank you. !
I'm a newbie... first time ever hearing about heartbeat when I started in
this company a couple of years ago. I don't think the cluster was set
up correctly but here I am trying to figure out how to get this working.
One of the servers died and I'm trying to re-install heartbeat. The
previous
Hildebrand, Nils, 232 wrote:
Hi,
can you give a short outline how to include the heartbeat-snmp-subagent into
the main agent?
Can v1-clusters be monitored using snmp, too?
Kind regards, Nils
Hi,
short answer:
everything about heartbeat integration in the net-snmp system and the
On Mon, Jun 30, 2008 at 10:56:54AM -0500, Randy Evans wrote:
I am trying to get a very rough idea of how many simultaneous
connections (to apache servers)
ldirectord could handle using direct routing, assuming gigabit
connections and decent
hardware.
Unless you are running an Amazon or
On Mon, Jun 30, 2008 at 7:26 PM, Simon Horman [EMAIL PROTECTED] wrote:
It is actually IPVS aka LVS that handles connections and not ldirectord.
If you have reasonably modern hardware then the limitation is likely to be
the gigabit network and not LVS. And the limitation is likely to be