Hi,
I was caught by surprise by this behaviour of DLM:
- I have 5 nodes (test VMs)
- 3 of them have 1 vote for the corosync quorum (they are "voters")
- 2 of them have 0 vote ("non-voters")
So the corosync quorum is 2.
On the non-voters, I run DLM and an application that uses it. On DLM,
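For completeness, the vote split is configured in corosync.conf roughly as below; node names and addresses here are only placeholders, not the real ones. quorum_votes: 0 is what makes a member a non-voter:

  nodelist {
      node {
          ring0_addr: voter1      # three nodes like this, 1 vote each (the default)
          nodeid: 1
      }
      ...
      node {
          ring0_addr: nonvoter1   # the two DLM/application nodes
          nodeid: 4
          quorum_votes: 0         # no vote for the corosync quorum
      }
      ...
  }

  quorum {
      provider: corosync_votequorum
      # 3 votes total from the nodelist, so quorum is 2
  }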
On Wed, 2017-10-11 at 09:12 +0200, Ferenc Wágner wrote:
> Donat Zenichev writes:
>
> > then the resource is stopped, but nothing arrives at the e-mail
> > destination.
> > Where did I go wrong?
>
> Please note that ClusterMon notifications are becoming deprecated
> (they
Hi ClusterLabs,
I'm seeing a race condition in corosync where votequorum can have
incorrect membership info when a node joins the cluster then leaves very
soon after.
I'm on corosync-2.3.4 plus my patch
https://github.com/corosync/corosync/pull/248. That patch makes the
problem readily
On Tue, 2017-10-10 at 12:06 +0100, lejeczek wrote:
>
> On 26/09/17 13:15, Klaus Wenninger wrote:
> > On 09/26/2017 02:06 PM, lejeczek wrote:
> > > hi fellas
> > >
> > > can pacemaker do something like in the subject? And if yes, then
> > > how do I do it?
> >
> > You could bind ResourceA to
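The archived reply is cut short here; presumably it continues with a colocation constraint, which in crm shell syntax looks roughly like this (ResourceA and ResourceB are just the placeholders from the question):

  # ResourceA may only run on a node where ResourceB is running
  crm configure colocation col-A-with-B inf: ResourceA ResourceB
  # usually combined with an ordering constraint so ResourceB starts first
  crm configure order ord-B-before-A inf: ResourceB ResourceA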
On Wed, Oct 11, 2017 at 01:29:40PM +0200, Stefan Krueger wrote:
> ohh damn.. thanks a lot for this hint.. I deleted all the IPs on enp4s0f0, and
> then it works..
> but could you please explain why it works now? Why does it have a problem with
> these IPs?
AFAICT, it found a better interface with that
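A quick way to check which interface the stack would pick for a given address (using the VIP from the config posted in this thread) is something like:

  ip route get 172.16.101.70         # shows the device the kernel would route through
  ip -o -4 addr show | grep 172.16   # lists interfaces already carrying a matching address

This is only a diagnostic sketch; IPaddr2 does its own interface/netmask matching, so leftover addresses on enp4s0f0 can make it prefer that interface over bond0.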
Hello,
when I try to migrate a resource from one server to another (for example for
maintenance), it doesn't work.
A single resource works fine; after that I create a group with 2 resources
and try to move that.
my config is:
crm conf show
node 739272007: zfs-serv1
node 739272008: zfs-serv2
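For reference, moving a whole group works the same way as moving a single resource, by naming the group (the group name below is a placeholder):

  # push the group, and everything in it, to the other node
  crm resource move <group-name> zfs-serv2
  # later, remove the location constraint the move created
  crm resource unmove <group-name>

Whether this helps depends on why the move fails; a member resource that cannot stop or start on the target node is a common cause.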
Hello Valentin,
thanks for your help
> Can you share more info on the network of zfs-serv2, for example: ip a?
ip a s
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1
link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
inet 127.0.0.1/8 scope host
I am happy to announce the latest release of pcs, version 0.9.160.
Source code is available at:
https://github.com/ClusterLabs/pcs/archive/0.9.160.tar.gz
or
https://github.com/ClusterLabs/pcs/archive/0.9.160.zip
Be aware that support for CMAN clusters has been deprecated in this
release. This
On Wed, Oct 11, 2017 at 10:51:04AM +0200, Stefan Krueger wrote:
> primitive HA_IP-Serv1 IPaddr2 \
> params ip=172.16.101.70 cidr_netmask=16 \
> op monitor interval=20 timeout=30 on-fail=restart nic=bond0 \
> meta target-role=Started
There might be something wrong with the
Václav Mach writes:
> On 10/11/2017 09:00 AM, Ferenc Wágner wrote:
>
>> Václav Mach writes:
>>
>>> allow-hotplug eth0
>>> iface eth0 inet dhcp
>>
>> Try replacing allow-hotplug with auto. Ifupdown simply runs ifup -a
>> before network-online.target, which
Hello,
I've a simple setup with just 3 resources (at the moment); the ZFS resource
also works fine. BUT my IPaddr2 doesn't work, and I don't know why or how to
resolve that.
my config:
conf sh
node 739272007: zfs-serv1
node 739272008: zfs-serv2
primitive HA_IP-Serv1 IPaddr2 \
params
On 10/11/2017 09:00 AM, Ferenc Wágner wrote:
> Václav Mach writes:
>> allow-hotplug eth0
>> iface eth0 inet dhcp
> Try replacing allow-hotplug with auto. Ifupdown simply runs ifup -a
> before network-online.target, which excludes allow-hotplug interfaces.
> That means allow-hotplug
Donat Zenichev writes:
> then the resource is stopped, but nothing arrives at the e-mail destination.
> Where did I go wrong?
Please note that ClusterMon notifications are becoming deprecated (they
should still work, but I've got no experience with them). Try using
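The preview is cut off above; the replacement being suggested is presumably Pacemaker's alerts interface. A rough sketch using pcs, assuming the sample SMTP alert agent shipped with Pacemaker and a placeholder recipient address (exact syntax varies between pcs versions):

  pcs alert create path=/usr/share/pacemaker/alerts/alert_smtp.sh.sample id=smtp_alert
  pcs alert recipient add smtp_alert value=admin@example.com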
Václav Mach writes:
> allow-hotplug eth0
> iface eth0 inet dhcp
Try replacing allow-hotplug with auto. Ifupdown simply runs ifup -a
before network-online.target, which excludes allow-hotplug interfaces.
That means allow-hotplug interfaces are not waited for before corosync
is
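Concretely, an /etc/network/interfaces stanza along these lines makes boot wait for the interface before network-online.target (and therefore before corosync starts):

  auto eth0
  iface eth0 inet dhcp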