On Fri, 2018-06-01 at 22:58 +0800, Confidential Company wrote:
Hi,

I have a two-node active/passive setup. My goal is to fail over a
resource once a node goes down, with as little downtime as possible.
Based on my testing, when Node1 goes down, the resource fails over to
Node2. If Node1 comes back up after the link is reconnected (physical
cable plugged back in), the resource fails back to Node1 even [...]
[...] will stay where it is (unless some other configuration is
stronger than the stickiness). If you don't have resource-stickiness,
then once you "unmove", the resource may move to some other node, as
the cluster adjusts its idea of "best".

> Thanks
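As the reply above notes, stickiness is what keeps a resource from
failing back on its own. A minimal sketch of setting it with the crm
shell (the score 100 and the resource name myresource are illustrative
assumptions, not taken from the thread):

    # cluster-wide default: resources prefer to stay where they are
    # (100 is an arbitrary example score, not a recommendation)
    crm configure rsc_defaults resource-stickiness=100

    # or set it on a single resource's meta attributes only
    crm resource meta myresource set resource-stickiness 100

With a finite score, failback can still happen if some constraint
outweighs the stickiness; with no stickiness at all, the cluster is
free to move the resource whenever its idea of "best" changes.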
Subject: Re: [ClusterLabs] resource-stickiness
On 08/27/2015 02:42 AM, Rakovec Jost wrote:

Hi,

It doesn't work as I expected. I changed the name to:

    location loc-aapche-sles1 aapche role=Started 10: sles1

but after I manually move the resource via HAWK to the other node, it
automatically adds this line:

    location cli[...]
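To see exactly which constraint HAWK added (and its score), the live
configuration can be inspected; a short sketch using standard Pacemaker
tooling (commands assumed, not shown in the thread):

    # dump the whole configuration, including auto-added cli-* constraints
    crm configure show

    # or query only the constraints section of the CIB
    cibadmin -Q -o constraints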
On 26 Aug 2015, at 10:09 pm, Rakovec Jost jost.rako...@snt.si wrote:
Sorry, one typo: the problem is the same.

    [...] \
        timeout=600 \
        record-pending=true

BR,
Jost
From: Andrew Beekhof <and...@beekhof.net>
Sent: Thursday, August 27, 2015 12:20 AM
To: Cluster Labs - All topics related to open-source clustering welcomed
Subject: Re: [ClusterLabs] resource-stickiness
Hi list,

I have configured a simple cluster on SLES 11 SP4 and have a problem
with auto_failover off. The problem is that whenever I migrate the
resource group via HAWK, my configuration changes from:

    location cli-prefer-aapche aapche role=Started 10: sles2

to:

    location cli-ban-aapche-on-sles1[...]
Sorry, one typo: the problem is the same. It changes from:

    location cli-prefer-aapche aapche role=Started 10: sles2

to:

    location cli-prefer-aapche aapche role=Started inf: sles2

It keeps changing to infinity.
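This is expected: HAWK's migrate action uses the same mechanism as
crm resource migrate, which injects a cli-prefer-* (or cli-ban-*)
location constraint with an INFINITY score by default. A sketch of
clearing it from the command line (resource and constraint names taken
from the messages above; the commands are standard crmsh usage):

    # moving the resource adds cli-prefer-aapche with score INFINITY
    crm resource migrate aapche sles2

    # remove the auto-added cli-* constraint so stickiness and any
    # hand-written location constraints decide placement again
    crm resource unmigrate aapche

    # alternatively, delete the constraint by its id
    crm configure delete cli-prefer-aapche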
My configuration is:

    node sles1
    node sles2
    primitive filesystem Filesystem \
        params [...]
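Since the params line is truncated here: for illustration only, a
Filesystem primitive typically carries device/directory/fstype
parameters and a monitor operation. All values below are hypothetical,
not the poster's actual configuration:

    # hypothetical example values
    primitive filesystem Filesystem \
        params device="/dev/sdb1" directory="/srv/www" fstype="ext3" \
        op monitor interval=20 timeout=40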