-Original Message-
From: linux-ha-boun...@lists.linux-ha.org
[mailto:linux-ha-boun...@lists.linux-ha.org] On behalf of Stallmann, Andreas
Sent: Friday, 29 April 2011 10:39
To: General Linux-HA mailing list
Subject: Re: [Linux-HA] Auto Failback despite location
Cheers and thanks again for your support,
Andreas
Hi!
I configured my nodes *not* to auto failback after a defective node comes back
online. This worked nicely for a while, but now it doesn't (and, honestly, I do
not know what was changed in the meantime).
What we do: We disconnect the two (virtual) interfaces of our node mgmt01
(running on
On 4/29/2011 at 03:36 AM, Stallmann, Andreas astallm...@conet.de wrote:
Hi!
I configured my nodes *not* to auto failback after a defective node comes
back online. This worked nicely for a while, but now it doesn't (and,
honestly, I do not know what was changed in the meantime).
hi all, I need help regarding IP failback: in a two-node cluster, how do I
stop the IP from failing back? I have set default-resource-stickiness to
INFINITY, but after this it is still failing back, and I have no constraints
in my cib.xml file. Please check out the cib and help, as I have already
crossed my deadline. Here is my cib:
cib
Thank you for your reply. I have been trying to get rsc_order
constraints to do what I need, but cannot seem to get it right.
I can use order constraints to control when resources start/stop but
can't seem to affect when resources migrate.
I've tried just about every from/to start/stop
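For reference, a heartbeat v2 ordering constraint of the kind being tried here looks roughly like the sketch below. The resource names (drbd0, fs0) are placeholders, not taken from the thread, and the attribute names should be checked against the DTD of the installed heartbeat version:

```xml
<!-- Illustrative heartbeat v2 rsc_order sketch: start fs0 only after drbd0.
     "from"/"to" name the resources; "type" is "before" or "after". -->
<constraints>
  <rsc_order id="order_fs0_after_drbd0" from="fs0" type="after" to="drbd0"/>
</constraints>
```

Note that ordering constraints of this kind control start/stop sequencing; as the poster observes, they do not by themselves decide where a resource migrates.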
On Thu, Dec 11, 2008 at 00:21, Daniel DeFreez defre...@students.sou.edu wrote:
Hello all,
I've been playing with heartbeat 2 for a few weeks and have a good handle on
most of the basics, but I am having trouble with some of the subtleties. I'm
trying to prevent a resource from from returning
Hello all,
I've been playing with heartbeat 2 for a few weeks and have a good
handle on most of the basics, but I am having trouble with some of the
subtleties. I'm trying to prevent a resource from from returning to its
preferred node until resources that are co-located with it have
I have this config (part of it):
With these values I get only one tolerated failure (by the arithmetic):
(150 - 100 + 50) / abs(-100) = 1, so only one failure (I think...). The
resource moves back to node portatil1 every time that node recovers.
After a portatil1 failure, IP_ADDR goes to portatil2.
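The arithmetic above can be checked directly. This is only a sketch of the poster's own calculation: 150, -100, and +50 are the stickiness, failure-stickiness, and location-preference scores from the thread, and the reading of the quotient as "failures tolerated" is the poster's interpretation:

```python
# Sketch of the poster's stickiness arithmetic: how many failures the
# resource survives before its score drops below the other node's.
stickiness = 150           # default-resource-stickiness
failure_stickiness = -100  # penalty applied per failure
location_pref = 50         # location-constraint score for the preferred node

failures_tolerated = (stickiness + failure_stickiness + location_pref) // abs(failure_stickiness)
print(failures_tolerated)  # prints 1, matching "only one failure"
```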
Szasz Tamas wrote:
Hi list,
I updated drbd to the new version (8.0.7), and I want to disable the
auto failback option because, when the primary machine halts and then
starts, drbd cannot be started: the data on the old primary node is
not up to date. The problem is that I
If you change the value from 0 to INFINITY in the line
<nvpair id="cib-bootstrap-options-default-resource-stickiness" name="default-resource-stickiness" value="0"/>
heartbeat will not fail back when the preferred system is restarted.
regards
Adrian
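In context, that nvpair sits under the cluster options in the cib. A minimal sketch of the relevant fragment with the value changed to INFINITY (the surrounding element names and ids are as heartbeat v2 commonly generates them, not copied from the thread):

```xml
<crm_config>
  <cluster_property_set id="cib-bootstrap-options">
    <attributes>
      <!-- INFINITY keeps resources on their current node, preventing failback -->
      <nvpair id="cib-bootstrap-options-default-resource-stickiness"
              name="default-resource-stickiness" value="INFINITY"/>
    </attributes>
  </cluster_property_set>
</crm_config>
```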
Hi list,
I updated drbd to the new version (8.0.7), and I want to disable the
auto failback option because, when the primary machine halts and then
starts, drbd cannot be started: the data on the old primary node is
not up to date. The problem is that I disabled the auto
failback
On 10/7/07, Stefano Colombo [EMAIL PROTECTED] wrote:
I have a question about how to configure a group to fail over
correctly to the second node and prevent automatic failback.
I wrote an OCF agent to control vmware machines. My problems are:
- How can I allow the administrator to manually
I have a question about how to configure a group to fail over
correctly to the second node and prevent automatic failback.
I wrote an OCF agent to control vmware machines. My problems are:
- How can I allow the administrator to manually fail the resource
over to the other node?
- I tried
On 10/4/07, Sripathi, Roopa (Roopa) [EMAIL PROTECTED] wrote:
Hi,
Attached is the input.xml generated by running the command:
ptest -L -VV --save-input input.xml
The only way failback is happening is by running :
crm_resource -C -r IPaddr_cluster -H roopa1
crm_resource -C -r RES_X
Hi all,
now I reached a new level in the HAv2 (2.1.2) adventure game. :-)
One question here: after failover my resources stick to the
second node, which is really what I wanted.
If the first and primary node is back up again, how can I
simply switch the cluster and all resources manually back
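A manual switch-back like this is usually done with crm_resource's migrate options. The following is only a sketch with placeholder resource and node names; the exact flags should be verified against the man page of the installed version:

```shell
# Move (migrate) the resource back to the primary node by hand;
# this adds a temporary location constraint preferring node1.
crm_resource -M -r my_resource -H node1

# Once it has moved, remove that constraint again so the
# resource is not permanently pinned to node1.
crm_resource -U -r my_resource
```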
On Friday, 17 August 2007 at 11:04, matilda matilda wrote:
Hi all,
now I reached a new level in the HAv2 (2.1.2) adventure game. :-)
One question here: after failover my resources stick to the
second node, which is really what I wanted.
If the first and primary node is back up again, how
Bernd Eichenberg wrote:
Hi all,
I have to warn you: I'm a newbie at HA, and I'm German
with very limited English skills.
I hope you understand my problem; thanks in advance.
My problem is that there is no failback after the first node is
valid again.
Node 1 is SuSE 8 with heartbeat that