On 23.03.2022 08:30, Balotra, Priyanka wrote:
> Hi All,
>
> We have a scenario on SLES 12 SP3 cluster.
> The scenario is explained as follows in the order of events:
>
> * There is a 2-node cluster (FILE-1, FILE-2)
> * The cluster and the resources were up and running fine initially.
> * Then a fencing request from pacemaker got issued on both nodes.
>
Hi!
With these messages it's really hard to say, because you omitted the messages
logged before the split brain occurred.
If a resource was running on FILE-2 and FILE-1 recovered first, FILE-1 will be
the DC and it will start resources (even if those were running on FILE-2 before).
However, normal resources
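For the scenario described, which node won the DC election after recovery can be confirmed from the cluster status itself; a quick way (assuming the standard pacemaker and corosync CLI tools are installed on the nodes) is:

```shell
# One-shot cluster status; the "Current DC:" line shows which node
# became DC after the split brain was resolved.
crm_mon -1

# Corosync's own view of quorum and current membership on this node:
corosync-quorumtool -s
```

Comparing this output from both FILE-1 and FILE-2, together with the logs from before the split brain, is usually enough to see which node took over the resources and why.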
___
Manage your subscription:
https://lists.clusterlabs.org/mailman/listinfo/users
ClusterLabs home:
A quick update to 2.0.5 that fixes the tests and RPM building.
* The new ipc_sock tests need to be run as root, as otherwise each
sub-test will time out - making the run time huge.
* Make sure that the libstat_wrapper.so library is included in the
libqb-tests RPM (when built).
If you
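A minimal sketch of running the libqb test suite as root from a source checkout (the autogen/configure/make steps are assumed from the usual autotools layout):

```shell
# Build libqb from a source tree and run the test suite.
./autogen.sh && ./configure
make
# The new ipc_sock tests need root; run non-root, each sub-test
# times out and the overall run time becomes huge.
sudo make check
```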
Hi,
This is a tentative schedule for resource-agents v4.11.0:
4.11.0-rc1: Mar 30.
4.11.0: Apr 6.
Full list of changes:
https://github.com/ClusterLabs/resource-agents/compare/v4.10.0...main
I've modified the corresponding milestones at:
https://github.com/ClusterLabs/resource-agents/milestones
Hello Ulrich,
The stop failure was a very rare problem relevant only to that one particular
resource. I want to keep pacemaker restarting resources on their failures,
but I just don't want to fence the node when a stop fails - it's better for
me to investigate immediately than to have an unplanned node
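Pacemaker can express this per operation with the on-fail property: with fencing configured, a failed stop normally escalates to fencing the node, but on-fail=block instead leaves the resource unmanaged so it can be examined in place. A minimal crmsh sketch (the resource name and the Dummy agent are placeholders, not the actual resource from this thread):

```shell
# Placeholder resource; the key part is on-fail=block on the stop
# operation, which blocks (leaves the resource unmanaged) instead of
# fencing the node when a stop fails.
crm configure primitive myresource ocf:heartbeat:Dummy \
    op monitor interval=30s \
    op stop interval=0 timeout=60s on-fail=block
```

The trade-off is that a blocked resource needs manual cleanup (e.g. crm resource cleanup) before the cluster will manage it again, which matches the stated preference to investigate immediately rather than take an unplanned node outage.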