On Mon, 2017-08-07 at 12:54 +0200, Lentes, Bernd wrote:
> - On Aug 4, 2017, at 10:19 PM, Ken Gaillot kgail...@redhat.com wrote:
>
> > Unfortunately no -- logging, and troubleshooting in general, is an area
> > we are continually striving to improve, but there are more to-do's than
> > time to do them.
On Mon, 2017-08-07 at 21:16 +0200, Lentes, Bernd wrote:
> - On Aug 4, 2017, at 10:19 PM, kgaillot kgail...@redhat.com wrote:
>
> > The cluster reacted promptly:
> > crm(live)# configure primitive prim_drbd_idcc_devel ocf:linbit:drbd params
> > drbd_resource=idcc-devel \
> >> op monitor interval=60
- On Aug 4, 2017, at 10:19 PM, kgaillot kgail...@redhat.com wrote:
> The cluster reacted promptly:
> crm(live)# configure primitive prim_drbd_idcc_devel ocf:linbit:drbd params
> drbd_resource=idcc-devel \
>> op monitor interval=60
> WARNING: prim_drbd_idcc_devel: default timeout 20s for s
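For reference, the crmsh warning about the default 20s timeout can be addressed by declaring explicit operation timeouts on the primitive; a rough sketch (the timeout values below are illustrative, not taken from this thread, and should be checked against the agent's advised values):

crm(live)# configure primitive prim_drbd_idcc_devel ocf:linbit:drbd \
        params drbd_resource=idcc-devel \
        op monitor interval=60 timeout=30 \
        op start timeout=240 op stop timeout=100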
07.08.2017 20:39, Tomer Azran wrote:
I don't want to use this approach since I don't want to depend on pinging
another host or a couple of hosts.
Is there any other solution?
I'm thinking of writing a simple script that will take a bond down using the
ifdown command when there are no slaves available
On 08/07/2017 12:39 PM, Tomer Azran wrote:
> I'm thinking of writing a simple script that will take a bond down using the
> ifdown command when there are no slaves available and put it in
> /sbin/ifdown-local
FWIW rolling monitoring functions into start/stop instead of using
separate mon daemon h
I don't want to use this approach since I don't want to depend on pinging
another host or a couple of hosts.
Is there any other solution?
I'm thinking of writing a simple script that will take a bond down using the
ifdown command when there are no slaves available and put it in /sbin/ifdown-local
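If anyone does go that route, a minimal sketch of such a check might look like this (untested; the interface name and the carrier check are assumptions, and it would need to be run periodically, e.g. from cron, rather than from ifdown-local itself):

#!/bin/bash
# Hypothetical helper: if the bond has lost link on all slaves (carrier = 0),
# take it down so the IPaddr2 monitor fails and the cluster moves the VIP.
BOND=bond0
if [ "$(cat /sys/class/net/${BOND}/carrier 2>/dev/null)" = "0" ]; then
    ifdown "${BOND}"
fi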
On Mon, 2017-08-07 at 17:48 +0100, lejeczek wrote:
> hi everyone
>
> I wonder, is it possible to dry-run an alert agent? Test it
> somehow without the actual event taking place?
>
>
> many thanks.
> L.
There's no special tool to do so, but it would be fairly simple to do it
by hand -- just set
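Presumably that means setting the CRM_alert_* environment variables the cluster would normally pass and invoking the agent directly. A hand-rolled sketch for a node-loss alert (the values and the agent path are made up for illustration):

# export CRM_alert_kind=node
# export CRM_alert_version=1.1.16
# export CRM_alert_node=node1
# export CRM_alert_desc=lost
# export CRM_alert_recipient=/tmp/alert-test.log
# /usr/local/bin/my_alert_agent.sh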
On Mon, 2017-08-07 at 15:23 +0200, Lentes, Bernd wrote:
> - On Aug 4, 2017, at 10:19 PM, kgaillot kgail...@redhat.com wrote:
>
> >
> > The "ERROR" message is coming from the DRBD resource agent itself, not
> > pacemaker. Between that message and the two separate monitor operations,
> > it looks like the agent will only run as a master/slave clone.
On Mon, 2017-08-07 at 16:32 +0200, Przemyslaw Kulczycki wrote:
> Hi.
> I have a 2-node cluster with a cloned IP and nginx configured.
>
>
> [user@proxy04 ~]$ sudo pcs resource show --full
> Clone: ha-ip-clone
> Meta Attrs: clone-max=2 clone-node-max=2 globally-unique=true
> resource-stickiness=0
hi everyone
I wonder, is it possible to dry-run an alert agent? Test it
somehow without the actual event taking place?
many thanks.
L.
On Mon, 2017-08-07 at 10:02 +, Tomer Azran wrote:
> Hello All,
>
>
>
> We are using CentOS 7.3 with pacemaker in order to create a cluster.
>
> Each cluster node has a bonding interface consisting of two NICs.
>
> The cluster has an IPaddr2 resource configured like this:
>
>
>
> # pcs resource show cluster_vip
- On Aug 7, 2017, at 3:43 PM, Ulrich Windl
ulrich.wi...@rz.uni-regensburg.de wrote:
>>
>>>
>>> The "ERROR" message is coming from the DRBD resource agent itself, not
>>> pacemaker. Between that message and the two separate monitor operations,
>>> it looks like the agent will only run as a master/slave clone.
I read the corosync-qdevice(8) man page a couple of times, and also the RH
documentation at
https://access.redhat.com/documentation/en-US/Red_Hat_Enterprise_Linux/7/html/High_Availability_Add-On_Reference/s1-quorumdev-HAAR.html
I think it would be great if you could add some examples there.
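For reference, a minimal qdevice setup with pcs goes roughly like this (the host name is a placeholder, and the choice between ffsplit and lms depends on the topology):

On the quorum device host:
# pcs qdevice setup model net --enable --start

On one of the cluster nodes:
# pcs quorum device add model net host=qdevice.example.com algorithm=ffsplit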
Hi.
I have a 2-node cluster with a cloned IP and nginx configured.
[user@proxy04 ~]$ sudo pcs resource show --full
Clone: ha-ip-clone
Meta Attrs: clone-max=2 clone-node-max=2 globally-unique=true
resource-stickiness=0
Resource: ha-ip (class=ocf provider=heartbeat type=IPaddr2)
Attributes: c
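For context, a globally unique IPaddr2 clone like that is usually created along these lines (the address, netmask and hash value are placeholders):

# pcs resource create ha-ip ocf:heartbeat:IPaddr2 ip=192.168.1.100 cidr_netmask=24 \
      clusterip_hash=sourceip clone clone-max=2 clone-node-max=2 globally-unique=true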
>>> "Lentes, Bernd" wrote on 07.08.2017 at 15:23 in message
<746073802.14757386.1502112229201.javamail.zim...@helmholtz-muenchen.de>:
> - On Aug 4, 2017, at 10:19 PM, kgaillot kgail...@redhat.com wrote:
>
>>
>> The "ERROR" message is coming from the DRBD resource agent itself, not pacemaker.
STONITH is enabled and working.
-Original Message-
From: Ulrich Windl [mailto:ulrich.wi...@rz.uni-regensburg.de]
Sent: Monday, August 7, 2017 2:52 PM
To: users@clusterlabs.org
Subject: [ClusterLabs] Antw: IPaddr2 RA and bonding
>>> Tomer Azran wrote on 07.08.2017 at 12:02
>>> in message
- On Aug 4, 2017, at 10:19 PM, kgaillot kgail...@redhat.com wrote:
>
> The "ERROR" message is coming from the DRBD resource agent itself, not
> pacemaker. Between that message and the two separate monitor operations,
> it looks like the agent will only run as a master/slave clone.
Btw:
Does
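Regarding the master/slave point in the quote above: in crm shell terms that means wrapping the primitive in an ms (master/slave) resource, roughly like this (the meta values are the usual DRBD recommendations, not taken from this thread):

crm(live)# configure ms ms_drbd_idcc_devel prim_drbd_idcc_devel \
        meta master-max=1 master-node-max=1 clone-max=2 clone-node-max=1 notify=true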
>>> Tomer Azran wrote on 07.08.2017 at 12:02 in message:
> Hello All,
>
> We are using CentOS 7.3 with pacemaker in order to create a cluster.
> Each cluster node has a bonding interface consisting of two NICs.
> The cluster has an IPaddr2 resource configured like this:
>
> # pcs resource show cluster_vip
Tomer Azran wrote:
Just updating that I added another level of fencing using watchdog fencing.
With the quorum device, this combination works in case of a power failure of
both the server and the IPMI interface.
An important note is that the stonith-watchdog-timeout must be configured in
order to work.
- On Aug 6, 2017, at 12:05 PM, Kristoffer Grönlund kgronl...@suse.com wrote:
>> What happened:
>> I tried to configure a simple drbd resource following
>> http://clusterlabs.org/doc/en-US/Pacemaker/1.1-plugin/html-single/Clusters_from_Scratch/index.html#idm140457860751296
>> I used this simpl
- On Aug 4, 2017, at 10:19 PM, Ken Gaillot kgail...@redhat.com wrote:
>
> Unfortunately no -- logging, and troubleshooting in general, is an area
> we are continually striving to improve, but there are more to-do's than
> time to do them.
Sad but comprehensible. Is it worth trying to unders
Hello All,
We are using CentOS 7.3 with pacemaker in order to create a cluster.
Each cluster node has a bonding interface consisting of two NICs.
The cluster has an IPaddr2 resource configured like this:
# pcs resource show cluster_vip
Resource: cluster_vip (class=ocf provider=heartbeat type=IPaddr2)
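For completeness, a VIP bound explicitly to the bonded interface is typically created along these lines (the address, netmask and interface name are placeholders):

# pcs resource create cluster_vip ocf:heartbeat:IPaddr2 ip=10.0.0.10 cidr_netmask=24 nic=bond0 \
      op monitor interval=30s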
Just updating that I added another level of fencing using watchdog fencing.
With the quorum device, this combination works in case of a power failure of
both the server and the IPMI interface.
An important note is that the stonith-watchdog-timeout must be configured in
order to work.
After reading the f
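For anyone following along, that property is set cluster-wide, e.g.:

# pcs property set stonith-watchdog-timeout=10s

(the value is illustrative; it has to be larger than the SBD watchdog timeout configured in /etc/sysconfig/sbd).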