I think I know why this happened after I enabled 'verbose' for
fence_ipmilan.
When I first configured stonith, I set lanplus to true; however, my
machine is not an HP one, so lanplus is not supported. When I noticed
this, I used 'crm configure load update' to update the stonith resource
and set lanplus to false.
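The update workflow described above can be sketched as follows; the file
name is a placeholder, not taken from the thread:

    # dump the running configuration to a file
    crm configure show > stonith.crm
    # edit stonith.crm, changing lanplus="true" to lanplus="false",
    # then load only the changes back into the CIB
    crm configure load update stonith.crm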
Hi:
I configured stonith on CentOS 6.2 with the fence_ipmilan agent:
primitive node2-stonith stonith:fence_ipmilan \
params pcmk_host_list="node2" pcmk_host_check="static-list"
ipaddr="192.168.170.1" login="root" passwd="123" lanplus="false"
power_wait="1"
The IPaddr for IPMI and credentials ar
On Tuesday, October 8, 2013 "Stefan Botter" wrote:
> James, compare my configuration with yours.
> Take a look especially at the location and colocation constraints.
>
> What did you try in the meantime?
> Start from bottom up, with a fresh configuration, and then add
> resources and constraints o
Hi,
On Tue, Oct 08, 2013 at 11:14:41AM +0200, D.Gossrau wrote:
> Hi,
>
> I'm seeing an error message in the log file regarding missing
> ha_logger command:
>
> Oct 08 03:17:06 apas5-prod-i64-01 lrmd: [1187]: info: RA output:
> (resTOMCAT:monitor:stderr)
> /usr/lib/ocf/resource.d/heartbeat/.ocf-s
Beautiful, thanks.
On Tue, Oct 8, 2013 at 2:55 PM, Lars Marowsky-Bree wrote:
> On 2013-10-08T12:56:16, Sam Gardner wrote:
>
> > Is there any way to simply monitor the response of an arbitrary ocf
> monitor
> > call, and immediately fail the affected resource over?
>
> Yes. Set migration-thresh
On 2013-10-08T12:56:16, Sam Gardner wrote:
> Is there any way to simply monitor the response of an arbitrary ocf monitor
> call, and immediately fail the affected resource over?
Yes. Set migration-threshold=1 for either the individual resource or
globally.
Regards,
Lars
--
Architect Stor
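The setting Lars describes can be applied either per resource or as a
cluster-wide default; a crm shell sketch, with the resource name as a
placeholder:

    # per-resource
    crm resource meta my-resource set migration-threshold 1
    # cluster-wide default for all resources
    crm configure rsc_defaults migration-threshold=1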
On Sun, 6 Oct 2013 02:08:24 +0800 Gray Wen Wen
wrote:
> Hi all,
> now I am trying to configure dual-primary DRBD with MySQL.
> I want to use active/active mode without any load balancing,
> so my DRBD is primary/primary on node1 and node2.
> The mount point is /mysql,
> and I have configured everything for MySQL
> t
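A dual-primary DRBD setup under Pacemaker generally needs a master/slave
resource with master-max=2 and, because both nodes mount the device at
once, a cluster filesystem such as GFS2 or OCFS2 rather than a plain
ext4 mount. A minimal crm sketch, assuming the DRBD resource is named
mysql (all names here are placeholders, not the poster's configuration):

    primitive p-drbd-mysql ocf:linbit:drbd \
        params drbd_resource=mysql \
        op monitor interval=29s role=Master \
        op monitor interval=31s role=Slave
    ms ms-drbd-mysql p-drbd-mysql \
        meta master-max=2 clone-max=2 notify=true interleave=true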
Hi All -
I've run through the Clusters from Scratch guide and have a decent grasp on
how to set up basic functionality such that resources fail over on node
failure (i.e., if node-A goes down, all resources are shunted over to node-B).
I have a high-level concept that I fear I am failing to grasp.
On Oct 8, 2013, at 9:33 AM, Lars Marowsky-Bree wrote:
> On 2013-10-08T09:29:14, Sean Lutner wrote:
>
>> The clone was created using the interleave=true option, yes.
>
> Ok, so pcs hides that (interesting to know).
>
>> Does this have an effect on what I'm trying to accomplish?
>
> Yes, if
On 2013-10-08T09:29:14, Sean Lutner wrote:
> The clone was created using the interleave=true option, yes.
Ok, so pcs hides that (interesting to know).
> Does this have an effect on what I'm trying to accomplish?
Yes, if you hadn't set that, it might have been an explanation. My best
guess rig
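For reference, interleave is a clone meta attribute; in crm shell syntax
the clone discussed in this thread would be declared roughly like this
(a sketch, not the poster's actual configuration):

    clone EIP-AND-VARNISH-clone EIP-AND-VARNISH \
        meta interleave=true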
On Oct 8, 2013, at 6:35 AM, Lars Marowsky-Bree wrote:
> On 2013-10-07T11:33:28, Sean Lutner wrote:
>
>> Clone: EIP-AND-VARNISH-clone
>>  Group: EIP-AND-VARNISH
>>   Resource: Varnish (provider=redhat type=varnish.sh class=ocf)
>>    Operations: monitor interval=30s
>>   Resource: Varnishlog (p
On 2013-10-07T11:33:28, Sean Lutner wrote:
> Clone: EIP-AND-VARNISH-clone
>  Group: EIP-AND-VARNISH
>   Resource: Varnish (provider=redhat type=varnish.sh class=ocf)
>    Operations: monitor interval=30s
>   Resource: Varnishlog (provider=redhat type=varnishlog.sh class=ocf)
>    Operations
Hi,
I'm seeing an error message in the log file regarding missing ha_logger
command:
Oct 08 03:17:06 apas5-prod-i64-01 lrmd: [1187]: info: RA output:
(resTOMCAT:monitor:stderr)
/usr/lib/ocf/resource.d/heartbeat/.ocf-shellfuncs: line 202: ha_logger:
command not found
Which packages are nee
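On an RPM-based system the owning package for a missing command can be
looked up directly; a sketch (the package that ships ha_logger varies by
distribution, commonly cluster-glue, so the package name below is an
assumption rather than a fact from the thread):

    # ask yum which package provides the ha_logger binary
    yum provides '*/ha_logger'
    # then install the reported package, e.g.:
    # yum install cluster-glue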
On Mon, 7 Oct 2013 08:25:38 -0700 (PDT)
James Oakley wrote:
> On Sunday, October 6, 2013 "Stefan Botter"
> wrote:
> > Hi Andrew,
> ...snip...
>
> All of these replies make me hopeful that someone is going to answer
> my question from the original message in this thread. Sadly, it
> turned into