27.11.2018 3:50, Ken Gaillot writes:
> I don't know if a cleanup would help.
Currently, it helps.
> My first thought is to set the target-role to Slave, then back to
> Master. This would of course leave the cluster with no master for a
> period of time, but I think it would reschedule the monitors.
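For reference, a minimal sketch of both workarounds using crm_resource
(the resource name ms_myres is hypothetical):

  # re-schedule the monitor by clearing the resource's operation history
  crm_resource --cleanup --resource ms_myres

  # or toggle target-role: demote everything, then allow promotion again
  crm_resource --resource ms_myres --meta \
      --set-parameter target-role --parameter-value Slave
  crm_resource --resource ms_myres --meta \
      --set-parameter target-role --parameter-value Master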
Congratulations on the release.
On 26/11/18 17:26 +0100, Tomas Jelinek wrote:
> Main changes compared to the 0.9 branch:
>
> [...]
>
> * Python 3.6+ and Ruby 2.2+ are now required
Out of curiosity, what's the driver for such a steep Python version
lower bound?
--
Nazdar,
Jan (Poki)
On Thu, 2018-11-22 at 11:24 +0700, Субботин Никита Андреевич wrote:
> Hello,
>
> I use an old Pacemaker version (1.1.7), and I'm dealing with bug
> cl#5072 (the monitor op stops after resource promotion), which was
> fixed in the next Pacemaker version (1.1.8). But for certain reasons
> I have to
I am happy to announce the latest release of pcs, version 0.10.1.
Source code is available at:
https://github.com/ClusterLabs/pcs/archive/0.10.1.tar.gz
or
https://github.com/ClusterLabs/pcs/archive/0.10.1.zip
This is the first final release of the pcs-0.10 branch.
Pcs-0.10 is the new main pcs
On Mon, 26 Nov 2018 18:27:37 +0300, George Melikov wrote:
Some apps' data may need to be synced before it is safe to
promote/demote/standby.
For example DRBD: it replicates data across servers, but if you shut
down the master server during a resync, you'll end up with a split
brain.
Is there a way to tell Pacemaker from an OCF agent that it's not safe now to do
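One possible approach (a sketch only, not something confirmed in this
thread): have the agent's demote action block while DRBD is still
syncing, and rely on a generous demote timeout in the CIB. This assumes
the usual ocf-shellfuncs are sourced; the resource name r0 and the
surrounding function are illustrative:

  demote() {
      # postpone the demote while this node is still resynchronizing
      while drbdadm cstate r0 | grep -q '^Sync'; do
          ocf_log info "r0 resync in progress; delaying demote"
          sleep 5    # needs a demote op timeout large enough to cover this
      done
      # ... continue with the agent's normal demote steps
  }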
On Mon, 2018-11-26 at 14:24 +0200, Klecho wrote:
Hi again,
Just made one simple "parallel shutdown" test with a strange result,
confirming the problem I've described.
I created a few dummy resources, each of them taking 60s to stop, with
no constraints at all. After that I issued "stop" to all of them, one
by one. The stop operation wasn't attempted
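For anyone trying to reproduce this, something along these lines should
create such slow-stopping resources (pcs syntax; the names are
illustrative, and op_sleep is the ocf:pacemaker:Dummy parameter that
makes its operations sleep that long):

  pcs resource create slow1 ocf:pacemaker:Dummy op_sleep=60 op stop timeout=120s
  pcs resource create slow2 ocf:pacemaker:Dummy op_sleep=60 op stop timeout=120s
  # then "stop" them one by one
  pcs resource disable slow1
  pcs resource disable slow2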
>>> lejeczek wrote on 23.11.2018 at 15:56 in message
<46d2baf6-a03d-9aac-fceb-7bcffb383...@yahoo.co.uk>:
> hi guys,
>
> Do we have tools, or maybe a way outside of the cluster suite, to back
> up the cluster?
>
> I'm obviously talking about the configuration, so potentially the
> cluster could
>
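Two ways I'm aware of (a sketch; the file names are arbitrary):

  # pcs-level backup/restore of the cluster configuration as a tarball
  pcs config backup /root/cluster-backup
  pcs config restore /root/cluster-backup.tar.bz2

  # or just dump the live CIB as XML
  cibadmin --query > /root/cib-backup.xml
  # and push it back later with:
  # cibadmin --replace --xml-file /root/cib-backup.xml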