I've narrowed down the cause.
When the "standby" transition completes, vm2 has more remaining
utilization capacity than vm1, so the cluster wants to run sv-fencer
there. That should be taken into account in the same transition, but it
isn't, so a second transition is needed to make it happen.
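A toy sketch (plain Python, not Pacemaker code) of the remaining-capacity comparison described above; the node names, the "big" resource, and all the numbers are invented for illustration:

```python
# Toy model of utilization-based placement: each node has a capacity,
# each resource consumes some of it, and the next resource goes to the
# node with the most remaining capacity.

capacity = {"vm1": 4, "vm2": 4}

def remaining(node, assignments):
    """Capacity left on `node` given (node, utilization) assignments."""
    used = sum(u for n, u in assignments.values() if n == node)
    return capacity[node] - used

def pick(assignments):
    """Node that would receive the next resource."""
    return max(capacity, key=lambda n: remaining(n, assignments))

# What the scheduler sees while computing the standby transition:
before = {"big": ("vm2", 3)}
# The state once the transition's moves have actually completed:
after = {"big": ("vm1", 3)}

print(pick(before))  # vm1 looks best before the moves complete
print(pick(after))   # vm2 looks best afterwards, so a second
                     # transition is needed to place the resource there
```

The bug described above amounts to the comparison being made against the `before` state only, so the `after` answer is only discovered in a follow-up transition.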
On Fri, 2017-10-20 at 15:52 +0200, Ferenc Wágner wrote:
> Ken Gaillot writes:
>
> > On Fri, 2017-09-22 at 18:30 +0200, Ferenc Wágner wrote:
> > > Ken Gaillot writes:
> > >
> > > > Hmm, stop+reload is definitely a bug. Can you attach (or email
> > > >
On 2017-10-20 10:26 AM, Jan Friesse wrote:
> I am pleased to announce the latest maintenance release of Corosync
> 2.4.3 available immediately from our website at
> http://build.clusterlabs.org/corosync/releases/.
>
> This release contains a lot of fixes. New feature is support for
> heuristics
> in qdevice.
>
> Complete changelog for 2.4.3:
Adrian
Ken Gaillot writes:
> On Fri, 2017-09-22 at 18:30 +0200, Ferenc Wágner wrote:
>> Ken Gaillot writes:
>>
>>> Hmm, stop+reload is definitely a bug. Can you attach (or email it to
>>> me privately, or file a bz with it attached) the above pe-input file
Hi ClusterLabs,
I have a query about safely removing a node from a corosync cluster.
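For context, the corosync.conf nodelist being edited in this scenario would look something like the following minimal sketch (addresses and node IDs are placeholders); removing a node means deleting its node { } block on the remaining members before reloading:

```
nodelist {
    node {
        ring0_addr: 10.0.0.1
        nodeid: 1
    }
    node {
        ring0_addr: 10.0.0.2
        nodeid: 2
    }
}
```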
When "corosync-cfgtool -R" is issued, it causes all nodes to reload
their config from corosync.conf. If I have removed a node from the
nodelist but corosync is still running on that node, it will receive the