From: linux-ha-boun...@lists.linux-ha.org on Behalf Of Andrew Beekhof
Sent: Monday, 9 March 2009 7:03 PM
To: General Linux-HA mailing list
Subject: Re: [Linux-HA] live migrate
On Thu, Mar 5, 2009 at 21:09, David Pinkerton H wrote:
> CIB as follows
Looks sane enough.
From: linux-ha-boun...@lists.linux-ha.org on Behalf Of Andrew Beekhof
Sent: Thursday, 5 March 2009 7:16 PM
To: General Linux-HA mailing list
Subject: Re: [Linux-HA] live migrate
On Thu, Mar 5, 2009 at 01:35, David Pinkerton H wrote:
Having an issue with live migrates:
When I migrate a single domU (ie. crm_resource -M -r domU) the source dom0
calls "migrate_to" and the target dom0 calls "migrate_from" - as expected.
If I execute several migrates at once, the source dom0 calls "migrate_to"
whereas the target now calls "migra
How does heartbeat know a script is capable of doing a migrate instead of
stop/start? I've been playing with the Dummy script, as it supports
migrate_to/migrate_from, but heartbeat never calls them.
The only other migrate-capable script is Xen - it has an allow_migrate parameter - is
this the magic key?
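A minimal sketch of how this might look in a heartbeat 2.x CIB, assuming the Xen RA's allow_migrate parameter is indeed the switch the CRM checks (the resource id, the xmfile path, and all nvpair ids are placeholders):

```xml
<primitive id="domU1" class="ocf" provider="heartbeat" type="Xen">
  <instance_attributes id="domU1-ia">
    <attributes>
      <nvpair id="domU1-xmfile" name="xmfile" value="/etc/xen/domU1"/>
      <!-- assumption: without this, the CRM falls back to stop on the
           source and start on the target, even when the RA advertises
           migrate_to/migrate_from in its meta-data -->
      <nvpair id="domU1-allow-migrate" name="allow_migrate" value="true"/>
    </attributes>
  </instance_attributes>
</primitive>
```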
Can anyone explain why, if I execute a cleanup of resources on the node where
they are running, it takes 7 minutes before the next monitor operation is run?
I was of the understanding that all monitor operations should be run in parallel.
Similarly, if I place the node in standby, one resource is mig
Is there a way to trigger a live migrate from within the Xen OCF script?
I'm running heartbeat 2.1.4 / xen 3.2.0 / drbd 8.2.6 and want to trigger a live
migrate (works manually) if drbd goes diskless (ie. access to the SAN is lost).
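One hedged sketch of that trigger (the DRBD resource "r0", the cluster resource "domU1" and "other-node" are placeholders): parse `drbdadm dstate` output, and when the local disk is diskless, issue the same crm_resource move used for manual migrations.

```shell
#!/bin/sh
# Sketch only: detect a diskless local DRBD disk and ask the CRM to
# migrate the domU away from this node.

is_diskless() {
    # dstate prints e.g. "UpToDate/UpToDate" or "Diskless/UpToDate";
    # the field before the slash is the local disk state
    [ "${1%%/*}" = "Diskless" ]
}

# In the RA's monitor action one might do (commands not run here):
# if is_diskless "$(drbdadm dstate r0)"; then
#     crm_resource -M -r domU1 -H other-node   # same call as a manual move
# fi

is_diskless "Diskless/UpToDate" && echo "would migrate"
is_diskless "UpToDate/UpToDate" || echo "healthy, no action"
```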
David H Pinkerton
Systems Engineer - Linux Team/Platform Services
745 S
I'm looking at how to "event" monitor a heartbeat cluster. Is there something
around that can trigger a script based on key words, without having to do syslog
log scraping?
I really want to avoid: tail -f /var/log/heartbeat | grep "XXX"
Ideally, if a resource is stopped I would like the monito
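Lacking a built-in event hook (I'm not sure heartbeat 2.1.4 offers one), a crude alternative to log scraping is to poll `crm_mon -1` and fire a handler only when the snapshot changes; the change-detection part is plain shell and needs no cluster to exercise (notify_cmd is a placeholder):

```shell
#!/bin/sh
# Sketch: run a handler only when the cluster status snapshot changes.

changed() {
    [ "$1" != "$2" ]
}

# Polling loop (assumption: crm_mon -1 prints a one-shot status snapshot):
# last=""
# while sleep 10; do
#     cur=$(crm_mon -1)
#     if changed "$last" "$cur"; then
#         notify_cmd "$cur"     # placeholder for the triggered script
#         last=$cur
#     fi
# done

changed "domU1 Started" "domU1 Stopped" && echo "trigger handler"
```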
Hi,
We have a dedicated VLAN for heartbeat traffic and have several different
clusters talking over this VLAN.
I have added a cluster name into ha.cf and use different authkeys for each
cluster, yet my logs are constantly full of:
WARN: string2msg_ll: node [x] failed authentication
H
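One way to stop the cross-talk (a sketch; the interface and addresses are placeholders) is to give each cluster on the shared VLAN its own multicast group and/or UDP port in ha.cf, so nodes never receive another cluster's packets in the first place:

```
# ha.cf for cluster A   (mcast <dev> <group> <port> <ttl> <loop>)
udpport 694
mcast eth1 225.0.0.1 694 1 0

# cluster B would use a different group/port instead, e.g.:
# udpport 695
# mcast eth1 225.0.0.2 695 1 0
```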
You must configure a monitor operation for the resource. It will report the
status of the resource back to the cluster.
Also make sure you have tested your resource script with the ocf-tester program.
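A monitor op in heartbeat 2.x CIB syntax might look like this (the ids, interval, and the Dummy resource are placeholders):

```xml
<primitive id="dummy1" class="ocf" provider="heartbeat" type="Dummy">
  <operations>
    <!-- the cluster runs this action periodically and records the
         resource status from its return code -->
    <op id="dummy1-monitor" name="monitor" interval="30s" timeout="20s"/>
  </operations>
</primitive>
```

And a typical ocf-tester invocation (the agent path is an assumption about where the package installs it): ocf-tester -n dummy1 /usr/lib/ocf/resource.d/heartbeat/Dummy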
-----Original Message-----
From: [EMAIL PROTECTED] on Behalf Of heartbea
Have you seen this page?
http://linux-ha.org/v2/Concepts/MultiState
This page does not seem logical to me. If I create a resource in a two-node
cluster as a master/slave with clone_max = 2, clone_node_max = 1,
master_max = 1, master_node_max = 1,
then that implies that one instance of the re
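For reference, those numbers map onto a heartbeat 2.x master_slave clone roughly like this (ids and the Stateful agent are placeholders): clone_max=2 with clone_node_max=1 gives one instance per node, and master_max=1 with master_node_max=1 lets exactly one of them be promoted.

```xml
<master_slave id="ms-demo">
  <instance_attributes id="ms-demo-ia">
    <attributes>
      <nvpair id="ms-clone-max"       name="clone_max"       value="2"/>
      <nvpair id="ms-clone-node-max"  name="clone_node_max"  value="1"/>
      <nvpair id="ms-master-max"      name="master_max"      value="1"/>
      <nvpair id="ms-master-node-max" name="master_node_max" value="1"/>
    </attributes>
  </instance_attributes>
  <primitive id="demo" class="ocf" provider="heartbeat" type="Stateful"/>
</master_slave>
```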
Could someone explain the finer points of master/slave resources, as the doco
states they can be anything you want?
I want to have a resource running on two nodes.
They are only ever in a master or a slave state. The status can seesaw between
master and slave, but never both the same. The master
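Much of the master/slave contract comes down to return codes: promote/demote switch an instance's role, and monitor reports which role it currently has. A sketch of that mapping (role detection is a placeholder; a real RA would exit with these codes, they are echoed here so the sketch is easy to exercise):

```shell
#!/bin/sh
# OCF return codes relevant to master/slave monitoring
# 0 = OCF_SUCCESS (running as slave), 7 = OCF_NOT_RUNNING,
# 8 = OCF_RUNNING_MASTER

monitor_rc() {
    # $1 stands in for however the RA discovers its current role
    case "$1" in
        master) echo 8 ;;
        slave)  echo 0 ;;
        *)      echo 7 ;;
    esac
}

echo "master -> rc $(monitor_rc master)"
echo "slave  -> rc $(monitor_rc slave)"
```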
ocf_log() function calling ha_log and ha_debug instead of ha_logger
David H Pinkerton
Systems Engineer - Linux Team/Platform Services
745 Springvale Road
Mulgrave 3170
* 8544 6827
* 0488 904 232
* [EMAIL PROTECTED]
Unix Team Site:
http://portal.cmlconnect.org/portal/wps/portal/retail_suppor
I'm setting up heartbeat on top of a HORCM SAN. I have successfully written an
OCF resource script to do a horcmtakeover to switch my SAN status between P-VOL
and S-VOL depending on where the resource runs.
I would like feedback and suggestions on how to report PAIR, PSUE and SSWS
statuses back int
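One way to surface those pair states (a sketch; I'm assuming the usual CCI status names PAIR/PSUE/SSWS, and the health mapping itself is a judgment call, not Hitachi doctrine) is to translate pairdisplay output into OCF monitor return codes:

```shell
#!/bin/sh
# Sketch: map a HORCM/CCI pair status onto an OCF monitor return code.
# A real RA would parse the output of pairdisplay for its device group
# and exit with the code; echoed here so the mapping is easy to exercise.

pair_status_rc() {
    case "$1" in
        PAIR)      echo 0 ;;  # OCF_SUCCESS: replication healthy
        SSWS)      echo 0 ;;  # swapped but serving I/O (assumed acceptable)
        COPY)      echo 0 ;;  # resync in progress, still running
        PSUS|PSUE) echo 1 ;;  # OCF_ERR_GENERIC: pair suspended
        SMPL)      echo 7 ;;  # OCF_NOT_RUNNING: volume not paired
        *)         echo 1 ;;  # unknown state: treat as an error
    esac
}

for s in PAIR PSUE SSWS SMPL; do
    echo "$s -> rc $(pair_status_rc "$s")"
done
```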