Serge Dubrouski wrote:
Lino, were you able to make it work?
Well, I was... but I don't know how :-). I always received those same
error messages until I shut down both nodes and started my secondary. Then
the clone was able to start on my second node. As I powered the first node
back on, the clone
Hi David,
in my version of Linux Director (v1.186-ha-2.1.3)
everything works fine.
I would recommend updating your ldirectord.
Back up your config files and install the newest packages.
Best regards,
Stephan
-Original Message-
From: David Brain [mailto:[EMAIL PROTECTED]
Sent:
Hi
I want my STONITH agent to shut down the bad node completely, but so far
I have only managed to reboot it.
Perhaps I'm wrong, but according to the documentation I have found so
far, the desired value of stonith-action for my purpose is "off".
The section of my CIB:
nvpair
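In full, such an nvpair normally sits in the crm_config section of the
CIB; a sketch, with the id attributes purely illustrative:

  <cluster_property_set id="cib-bootstrap-options">
    <attributes>
      <nvpair id="cib-bootstrap-options-stonith-action"
              name="stonith-action" value="off"/>
    </attributes>
  </cluster_property_set>

Note also that the STONITH plugin itself must support a power-off
operation; a plugin that only implements reset will still reboot the node.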
Hi list,
After starting heartbeat I always see the following message in
/var/log/ha-log:
attrd[5813]: 2008/03/05_10:22:43 info: main: Starting mainloop...
then for 2 minutes nothing happens (no log entry), and crm_mon -1 says:
Refresh in 1s...
Last updated: Wed Mar 5 10:24:10 2008
I'm facing a nasty problem here: I have 2 clusters of 2 nodes each on
4 identical machines, running CentOS 5 with the standard packaged
heartbeat version 2.1.3-3. My hardware and software
architecture is x86_64.
On cluster A I have no problems at all; on cluster B, heartbeat
On Wed, Mar 5, 2008 at 10:34 AM, Lino Moragon [EMAIL PROTECTED] wrote:
Hi
I want my STONITH agent to shut down the bad node completely, but so far
I have only managed to reboot it.
Perhaps I'm wrong, but according to the documentation I have found so
far, the desired value for
I think you have a network problem; your nodes are OFFLINE. If you read
Getting Started from Linux-HA, you can find this about initdead: "With some
configurations, the network takes some time to start working after a reboot.
This is a separate deadtime to handle that case. It should be at least
twice the normal deadtime."
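In ha.cf that corresponds to something like the following (values
illustrative; the only firm rule is initdead >= 2 * deadtime):

  # /etc/ha.d/ha.cf (excerpt)
  keepalive 2     # seconds between heartbeats
  deadtime 30     # seconds before a silent node is declared dead
  initdead 60     # extra deadtime at startup, at least 2 * deadtime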
-Original Message-
From: [EMAIL PROTECTED] [mailto:[EMAIL PROTECTED]
ha.org] On Behalf Of Fernando Iglesias
Sent: Wednesday, March 5, 2008 11:17
To: General Linux-HA mailing list
Subject: Re: [Linux-HA] info: main: Starting mainloop...
I think you have a network problem,
Hi all.
I'm trying to set up a simple cluster of two nodes.
Each node is master for a few services, and in case of failure those
services will be transferred to the other node.
Node1 will run a mail and DB server; node2 will run DNS and web.
For this I made four drbd block devices to serv
Andreas Kurz wrote:
On Wed, Mar 5, 2008 at 10:34 AM, Lino Moragon [EMAIL PROTECTED] wrote:
Hi
I want my STONITH agent to shut down the bad node completely, but so far
I have only managed to reboot it.
Perhaps I'm wrong, but according to the documentation I have found so
far, the
Hi,
On Wed, Mar 05, 2008 at 10:46:22AM +0100, Luis Motta Campos wrote:
I'm facing a nasty problem here: I have 2 clusters of 2 nodes each on
4 identical machines, running CentOS 5 with the standard packaged
heartbeat version 2.1.3-3. My hardware and software
architecture is
Dejan Muhamedagic wrote:
Something's very wrong with the installation. Heartbeat can't load
the .so modules. There was recently another similar report, also on
CentOS. Perhaps you could try strace to see what's going on
(run strace on the heartbeat program).
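For example, something along these lines (output path illustrative; on
x86_64 the daemon usually lives under /usr/lib64/heartbeat):

  strace -f -o /tmp/heartbeat.strace /usr/lib64/heartbeat/heartbeat
  grep -E 'ENOENT|\.so' /tmp/heartbeat.strace

The grep then shows which module paths the daemon tried and failed to open.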
How is it possible that something is
Dominik Klein wrote:
What exactly did you tune?
Please also post drbd.conf and logfiles.
Server1 log:
http://rafb.net/p/G6SxRN57.html
server2 log:
http://rafb.net/p/n0qjY499.html
And the drbd.conf is attached.
Thanks
Pier
global {
usage-count yes;
}
common {
syncer { rate 200M; }
}
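One thing worth checking in a config like this: drbd's syncer rate is in
bytes per second, so 200M means 200 MByte/s, more than a gigabit link can
carry. The drbd user guide suggests roughly 30% of the slower of network
and disk bandwidth, e.g. for a dedicated gigabit link:

  syncer {
    rate 33M;   # ~30% of the ~110 MByte/s a gigabit link delivers
  }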
What would be the best way to go from a 2-node v1 config using
primary/secondary to a 2-node v2 primary/primary?
We want MySQL running on both nodes, sharing a virtual IP. Is this possible
with drbd/heartbeat/etc.?
Every doc I've read is a bit unclear on the primary/primary setup; they
talk about
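For the drbd side, dual-primary mode needs allow-two-primaries in the
resource's net section (drbd 8.x); a minimal sketch, resource name assumed:

  resource r0 {
    net {
      allow-two-primaries;   # lets both nodes hold the primary role
    }
    # disk, device and address sections as usual
  }

Bear in mind that two mysqld instances cannot safely share one filesystem
unless it is a cluster filesystem such as OCFS2 or GFS; many setups keep
the MySQL data itself primary/secondary.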
On 2008-03-05T00:36:17, [EMAIL PROTECTED] wrote:
Yes, I know, but this was actually the question, right? How can I force
all resources to move to the other node? And that's the purpose of this
command. I have to admit that I don't use drbd and am not familiar with
master/slave devices. I
In a crm cluster with a master/slave drbd resource (ms_drbd0) and a
configured ping node, how would one create a constraint that places the
master role on the node with the highest pingd score?
I have tried this, and it is not correct syntax:
rsc_location id=l_drbd0_master rsc=ms_drbd0
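A rule-based constraint should do it. In heartbeat 2.x CIB XML, something
like this (ids illustrative): score_attribute copies each node's pingd
attribute into the rule's score, and role limits the rule to master
placement:

  <rsc_location id="l_drbd0_master" rsc="ms_drbd0">
    <rule id="l_drbd0_master_rule" role="Master" score_attribute="pingd">
      <expression id="l_drbd0_master_expr"
                  attribute="pingd" operation="defined"/>
    </rule>
  </rsc_location>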
-Original Message-
From: [EMAIL PROTECTED] [mailto:[EMAIL PROTECTED]
ha.org] On Behalf Of Damon Estep
Sent: Wednesday, March 5, 2008 18:21
To: General Linux-HA mailing list
Subject: [Linux-HA] location constraint that ties drbd master role to pingd?
In a crm cluster
Greetings list,
I am trying to get Heartbeat 2.1.3 running on Red Hat Fedora Core 3. This
particular box is the master; the service we want HA on is MySQL, and we
are using heartbeats across Ethernet interfaces.
The install of heartbeat and mon went fine.
However, when I try to start the
I have an OCF IPaddr resource managed by heartbeat. It is a pseudo-interface
on my physical eth0.
When I shut down the pseudo-interface or the physical NIC, the IPaddr will
fail and the backup machine will pick up all the HA resources. But if I just
pull out the Ethernet cable, the IPaddr does not
Try to use pingd.
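A minimal sketch of one common setup, with the address and path
illustrative:

  # /etc/ha.d/ha.cf (excerpt)
  ping 10.0.0.1                                    # a router or other stable IP
  respawn hacluster /usr/lib/heartbeat/pingd -m 100 -d 5s

A location rule on the resulting pingd node attribute (as in the constraint
discussion elsewhere in this digest) then pulls the IPaddr resource away
from a node that loses connectivity, which covers the pulled-cable case
that an interface-level check misses.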
On Wed, Mar 5, 2008 at 1:25 PM, Tao Yu [EMAIL PROTECTED] wrote:
I have an OCF IPaddr resource managed by heartbeat. It is a pseudo-interface
on my physical eth0.
When I shut down the pseudo-interface or the physical NIC, the IPaddr will
fail and the backup machine will
Hey Lars and everyone,
I downloaded the source from the main HA web site and tried your test
suggestion on the main IP (bound to eth0) of the master
server:
[EMAIL PROTECTED]:~]$ ocf-tester -n dummy-id -o ip=10.0.0.182
/usr/lib/ocf/resource.d/heartbeat/IPaddr
Beginning tests for
Hi,
I have already made a similar one, and I tested your 'xen0'. It has a
little problem: the domU's hostname is not always the same as the config
file name. So I fixed it to pick up the target node from the config file.
Regards
MATSUDA, Daiki
2008/2/28, Serge Dubrouski [EMAIL PROTECTED]:
Attached.
On
You are right, they can be different, but the main idea was that a user
must provide a correct node name. So I'm not sure about this patch,
because it creates a kind of loop in the logic: the config file depends
on a node name from the hostlist, and now you are making the node name
depend on that config file.