On 02/01/20 14:30 +0100, Jan Friesse wrote:
>> I am planning to use Corosync Qdevice version 3.0.0 with corosync
>> version 2.4.4 and pacemaker 1.1.16 in a two-node cluster.
>>
>> I want to know if failback can be avoided in the situation below.
>>
>>
>> 1. The pcs cluster is in split-brain
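Failback is usually prevented on the Pacemaker side rather than in
qdevice. A minimal sketch, assuming the pcs 0.9 syntax that matches
pacemaker 1.1.x (INFINITY is an example; any sufficiently high
stickiness has the same effect):

    # Keep resources where they are currently running, so they do not
    # fail back to the original node once it rejoins the cluster:
    pcs resource defaults resource-stickiness=INFINITY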
On 06/01/20 11:53 -0600, Ken Gaillot wrote:
> On Fri, 2020-01-03 at 13:23 +, S Sathish S wrote:
> The pacemaker-controld process is getting restarted frequently. The
> reason for the failure is a disconnect from the CIB / an internal
> error, or high CPU on the system; the same has been recorded in our
> system logs,
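To narrow down which of those causes applies, one place to start (a
sketch, assuming a systemd-based host; older setups log to
/var/log/pacemaker.log instead) is the controller's own messages:

    # Show the last hour of pacemaker-controld output from the journal:
    journalctl -u pacemaker --since "1 hour ago" | grep controld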
On 1/6/20 8:40 AM, Jerry Kross wrote:
> Hi Klaus,
> Wishing you a great 2020!
Same to you!
> We're using 3 SBD disks with pacemaker integration. It just happened
> once, and I am able to reproduce the latency error messages in the
> system log by inducing a network delay in the VM that hosts the SBD
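For the record, the standard way to induce that kind of delay on Linux
is tc with the netem qdisc; a sketch, with the interface name and delay
value as placeholders:

    # Add 200 ms of latency on eth0 (adjust interface and delay):
    tc qdisc add dev eth0 root netem delay 200ms
    # ... reproduce the SBD latency messages, then remove the delay:
    tc qdisc del dev eth0 root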
On 12/19/19 6:43 PM, JC wrote:
> OK! That did it! I ran `pcs cluster destroy --all` and edited
> corosync.conf on all nodes, adding `transport: udpu` to the totem
> block. I re-added the errant node into the nodelist and restarted the
> cluster. All nodes are present and accounted for.
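For anyone who finds this thread later, the relevant corosync.conf
pieces look roughly like this (cluster name, addresses, and node IDs
are placeholders):

    totem {
        version: 2
        cluster_name: mycluster
        # UDP unicast instead of the default multicast:
        transport: udpu
    }

    nodelist {
        node {
            ring0_addr: node1.example.com
            nodeid: 1
        }
        node {
            ring0_addr: node2.example.com
            nodeid: 2
        }
    }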