On Mon, Sep 4, 2023 at 4:44 PM David Dolan wrote:
>
> Thanks Klaus\Andrei,
>
> So if I understand correctly, what I'm trying probably shouldn't work,
> and I should attempt setting auto_tie_breaker in corosync and remove
> last_man_standing. Then I should set up another server with qdevice and
> configure that using the LMS algorithm.
>
> Thanks
> David

It is impossible to configure corosync (or any other cluster system
for that matter) to keep the *arbitrary* last node quorate. It is
possible to
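For concreteness, the qdevice setup David sketches would sit in the quorum section of corosync.conf, roughly like this (a sketch, not a tested config: the qnetd host name qnetd-server is a placeholder, and votequorum options such as auto_tie_breaker/last_man_standing are left out here because the qdevice documentation warns against combining them with a quorum device - check votequorum(5) and corosync-qdevice(8) before copying):

```
quorum {
    provider: corosync_votequorum
    device {
        model: net
        net {
            host: qnetd-server
            algorithm: lms
        }
    }
}
```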
On Mon, Sep 4, 2023 at 2:25 PM Klaus Wenninger wrote:
>
> Or go for qdevice with LMS where I would expect it to be able to really go
> down to a single node left - any of the 2 last ones - as there is still
> qdevice. Sry for the confusion btw.
>
According to documentation, "LMS is also
On Mon, Sep 4, 2023 at 12:45 PM David Dolan wrote:

Hi Klaus,

With default quorum options I've performed the following on my 3 node
cluster:

Bring down cluster services on one node - the running services migrate to
another node
Wait 3 minutes
Bring down cluster services on one of the two remaining nodes - the
surviving node in the cluster is then

I just tried removing all the quorum options, setting back to defaults, so
no last_man_standing or wait_for_all.
I still see the same behaviour where the third node is fenced if I bring
down services on two nodes.

Thanks
David
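The behaviour above is what plain majority quorum predicts: with three expected votes the threshold is floor(3/2)+1 = 2, so once two nodes are down the survivor is inquorate and gets fenced. A toy sketch of the arithmetic (not corosync code, just the calculation):

```python
def quorum_threshold(expected_votes: int) -> int:
    """Simple majority, as votequorum computes it: floor(n/2) + 1."""
    return expected_votes // 2 + 1

def is_quorate(votes_present: int, expected_votes: int) -> bool:
    """A partition is quorate when it holds at least the threshold."""
    return votes_present >= quorum_threshold(expected_votes)

# 3-node cluster with default options: expected_votes stays at 3.
print(quorum_threshold(3))   # 2
print(is_quorate(2, 3))      # True  - two nodes up keep quorum
print(is_quorate(1, 3))      # False - the last node alone is inquorate
```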
On 30.08.2023 19:23, David Dolan wrote:
Use fencing. Quorum is not a replacement for fencing. With (reliable)
fencing you can simply run pacemaker with no-quorum-policy=ignore.
The practical problem is that usually the last resort that will work
in all cases is SBD + suicide and SBD cannot
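The policy Andrei refers to is a Pacemaker cluster property; with reliable fencing in place it would be set along these lines (a command fragment, only sensible once stonith is configured and tested):

```
# Keep resources running even without quorum - ONLY safe with working fencing
pcs property set no-quorum-policy=ignore
# Fencing must stay enabled for this to be safe
pcs property set stonith-enabled=true
```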
> Hi All,
> >
> > I'm running Pacemaker on Centos7
> > Name: pcs
> > Version : 0.9.169
> > Release : 3.el7.centos.3
> > Architecture: x86_64
> >
> >
> Besides the pcs version, the versions of the other cluster-stack
> components could be interesting. (pacemaker, corosync)
>
rpm -qa |
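The truncated command above is presumably a pipeline along these lines (a guess at the intent, matching Klaus's request for the pacemaker and corosync versions):

```
rpm -qa | grep -E 'pacemaker|corosync|pcs'
```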
On Wed, Aug 30, 2023 at 2:34 PM David Dolan wrote:
> Hi All,
>
> I'm running Pacemaker on Centos7
> Name: pcs
> Version : 0.9.169
> Release : 3.el7.centos.3
> Architecture: x86_64
>
>
Besides the pcs version, the versions of the other cluster-stack components
could be interesting.
On Wed, Aug 30, 2023 at 3:34 PM David Dolan wrote:

Hi All,

I'm running Pacemaker on Centos7
Name: pcs
Version : 0.9.169
Release : 3.el7.centos.3
Architecture: x86_64

I'm performing some cluster failover tests in a 3 node cluster. We have 3
resources in the cluster.
I was trying to see if I could get it working if 2 nodes fail at