On Mon, Nov 15, 2021 at 3:32 PM S Rogers wrote:
>> The only solution here - as long as fencing the node on external
>> connectivity loss is acceptable - is modifying the ethmonitor RA to fail
>> the monitor operation in this case.
>
> I was hoping to find a way to achieve the desired outcome without resort
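The RA modification mentioned above could look roughly like this - a minimal
sketch of a locally patched agent, not the actual resource-agents code (the
if_check helper and the surrounding structure are assumptions):

    # In a local copy of /usr/lib/ocf/resource.d/heartbeat/ethmonitor,
    # make the monitor action itself fail when the link is down, instead
    # of only recording the state in a node attribute:
    ethmonitor_monitor() {
        if_check                      # hypothetical link-status helper
        if [ $? -ne 0 ]; then
            ocf_log err "interface $OCF_RESKEY_interface is down"
            return $OCF_ERR_GENERIC   # failed monitor -> on-fail can react
        fi
        return $OCF_SUCCESS
    }

With a change along those lines, a cloned ethmonitor with on-fail=fence on
its monitor operation would fence the node that loses external connectivity.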
On 15/11/2021 12:03, Klaus Wenninger wrote:
On Mon, Nov 15, 2021 at 12:19 PM Andrei Borzenkov wrote:
On Mon, Nov 15, 2021 at 1:18 PM Klaus Wenninger wrote:
> On Mon, Nov 15, 2021 at 10:37 AM S Rogers wrote:
>> I had thought about doing that, b
On Mon, Nov 15, 2021 at 12:19 PM Andrei Borzenkov wrote:
> On Mon, Nov 15, 2021 at 1:18 PM Klaus Wenninger wrote:
> > On Mon, Nov 15, 2021 at 10:37 AM S Rogers wrote:
> >> I had thought about doing that, but the cluster is then dependent on the
> >> external system, and if
On Mon, Nov 15, 2021 at 1:18 PM Klaus Wenninger wrote:
> On Mon, Nov 15, 2021 at 10:37 AM S Rogers wrote:
>> I had thought about doing that, but the cluster is then dependent on the
>> external system, and if that external system was to go down or become
>> unreachable for any reason the
On Mon, Nov 15, 2021 at 10:37 AM S Rogers wrote:
> I had thought about doing that, but the cluster is then dependent on the
> external system, and if that external system was to go down or become
> unreachable for any reason then it would falsely cause the cluster to
> failover or worse it could
I had thought about doing that, but the cluster is then dependent on the
external system, and if that external system were to go down or become
unreachable for any reason then it would falsely cause the cluster to
fail over or, worse, it could even take the cluster down completely, if the
external
Have you tried ping and a location constraint for avoiding hosts that
cannot ping an external system?
Best Regards,
Strahil Nikolov
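That suggestion could be sketched like this, using the ocf:pacemaker:ping
agent (the gateway address, the clone name, and the pgsql-ha resource name
are placeholders for illustration):

    # Clone ocf:pacemaker:ping so every node records connectivity to the
    # target in the "pingd" node attribute:
    pcs resource create ping ocf:pacemaker:ping \
        host_list="192.0.2.1" dampen=5s multiplier=1000 \
        op monitor interval=10s --clone
    # Keep the PAF master off any node that cannot reach the ping target:
    pcs constraint location pgsql-ha rule score=-INFINITY \
        pingd lt 1 or not_defined pingd

Unlike fencing, this only moves resources away from a node with lost
connectivity; the node itself stays up.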
On Mon, Nov 15, 2021 at 0:07, S Rogers wrote:
Using on-fail=fence is what I initially tried, but it doesn't work
unfortunately.
It looks like this is because the ethmonitor monitor operation won't
actually fail when it detects a downed interface. It'll only fail if it
is unable to update the CIB, as per this comment:
https://github.com/C
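For reference, the kind of configuration that was tried might look like this
(the interface and resource names are assumptions; as described above, the
stock agent's monitor does not fail on link loss, so on-fail=fence never
actually triggers):

    pcs resource create ethmon ocf:heartbeat:ethmonitor interface=eth1 \
        op monitor interval=10s on-fail=fence --clone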
The mentioned error occurs when attempting to promote the PostgreSQL
resource on the standby node, after the master PostgreSQL resource is
stopped.
For info, here is my configuration:
Corosync Nodes:
node1.local node2.local
Pacemaker Nodes:
node1.local node2.local
Resources:
Clone: public_
Hi, I'm hoping someone will be able to point me in the right direction.
I am configuring a two-node active/passive cluster that utilises the
PostgreSQL PAF resource agent. Each node has two NICs, therefore the
cluster is configured with two corosync links - one on each network (one
network is the
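A two-link corosync layout like the one described would look roughly like
this (node names taken from the configuration above; the addresses are
placeholder examples):

    # corosync.conf fragment - two knet links, one per network
    nodelist {
        node {
            name: node1.local
            nodeid: 1
            ring0_addr: 10.0.0.1
            ring1_addr: 192.168.1.1
        }
        node {
            name: node2.local
            nodeid: 2
            ring0_addr: 10.0.0.2
            ring1_addr: 192.168.1.2
        }
    }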
On Fri, 2021-11-12 at 17:31 +, S Rogers wrote:
> Hi, I'm hoping someone will be able to point me in the right
> direction.
>
> I am configuring a two-node active/passive cluster that utilises the
> PostgreSQL PAF resource agent. Each node has two NICs, therefore the
> cluster is configured wit
On 12.11.2021 20:31, S Rogers wrote:
> Hi, I'm hoping someone will be able to point me in the right direction.
>
> I am configuring a two-node active/passive cluster that utilises the
> PostgreSQL PAF resource agent. Each node has two NICs, therefore the
> cluster is configured with two corosync l