Hi all,
I created a resource with an INFINITE stop timeout:

pcs resource create srv01-test ocf:alteeve:server name="srv01-test" \
    meta allow-migrate="true" target-role="stopped" \
    op monitor interval="60" \
       start timeout="INFINITY" on-fail="block" \
       stop timeout="INFINITY" on-fail="block" \
       migrate_to
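(To double-check that the INFINITY timeouts and on-fail=block settings actually
ended up in the CIB, something like the following works; 'pcs resource config'
assumes pcs 0.10.x as shipped on el8, older pcs spells it 'pcs resource show':)

  pcs resource config srv01-test   # shows the meta attributes and op timeouts
  crm_verify -LV                   # sanity-check the live CIB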
>>> Digimer wrote on 25.01.2021 at 19:18 in message
<18d77f26-b21b-4f2e-184c-c2280876d...@alteeve.ca>:
...
> If I understand what's been said in this thread, the host node got a
> shutdown request so it migrated the resource. Then the peer (new host)
> would have gotten the shutdown request,
BTW, I checked the log; the corosync reload failed:
-- Logs begin at Thu 2021-01-14 15:41:10 UTC, end at Tue 2021-01-26
01:42:48 UTC. --
Jan 22 14:33:09 destination-standby corosync[13180]: Starting Corosync
Cluster Engine (corosync): [ OK ]
Jan 22 14:33:09 destination-standby systemd[1]: Started Corosync
Hi All,
> pacemakerd --version
Pacemaker 1.1.15-11.el7
> corosync -v
Corosync Cluster Engine, version '2.4.0'
> rpm -qi libqb
Name: libqb
Version : 1.0.1
Please assist. Recently I faced a strange bug (I suppose) where one of the
cluster nodes gets a different "Ring ID" from the others, for example
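(For anyone reproducing this: a quick way to compare is to run the same query
on every node and check whether the Ring ID lines match; a minimal sketch,
assuming corosync 2.x as above:)

  corosync-quorumtool -s | grep -i 'ring id'   # run on each node and compare
  corosync-cfgtool -s                          # per-interface ring status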
On 2021-01-25 3:58 p.m., Ken Gaillot wrote:
> On Mon, 2021-01-25 at 13:18 -0500, Digimer wrote:
>> On 2021-01-25 11:01 a.m., Ken Gaillot wrote:
>>> On Mon, 2021-01-25 at 09:51 +0100, Jehan-Guillaume de Rorthais
>>> wrote:
>>>> Hi Digimer,
>>>> On Sun, 24 Jan 2021 15:31:22 -0500
>>>> Digimer
On Mon, 2021-01-25 at 13:18 -0500, Digimer wrote:
> On 2021-01-25 11:01 a.m., Ken Gaillot wrote:
> > On Mon, 2021-01-25 at 09:51 +0100, Jehan-Guillaume de Rorthais
> > wrote:
> > > Hi Digimer,
> > >
> > > On Sun, 24 Jan 2021 15:31:22 -0500
> > > Digimer wrote:
> > > [...]
> > > > I had a test
On 2021-01-25 11:01 a.m., Ken Gaillot wrote:
> On Mon, 2021-01-25 at 09:51 +0100, Jehan-Guillaume de Rorthais wrote:
>> Hi Digimer,
>>
>> On Sun, 24 Jan 2021 15:31:22 -0500
>> Digimer wrote:
>> [...]
>>> I had a test server (srv01-test) running on node 1 (el8-a01n01), and on
>>> node 2
On Mon, 2021-01-25 at 13:22 +0100, Ulrich Windl wrote:
> >>> Strahil Nikolov wrote on 25.01.2021 at 12:28 in
> message <1768184755.3488991.1611574085...@mail.yahoo.com>:
> > Hi All,
> > As you all know migrating a resource is actually manipulating the
> > location
> > constraint
On Mon, 2021-01-25 at 09:51 +0100, Jehan-Guillaume de Rorthais wrote:
> Hi Digimer,
>
> On Sun, 24 Jan 2021 15:31:22 -0500
> Digimer wrote:
> [...]
> > I had a test server (srv01-test) running on node 1 (el8-a01n01), and on
> > node 2 (el8-a01n02) I ran 'pcs cluster stop --all'.
> >
> >
On Mon, 2021-01-25 at 14:01 +0300, Andrei Borzenkov wrote:
> On Mon, Jan 25, 2021 at 12:07 PM Jehan-Guillaume de Rorthais
> wrote:
>
> > As actions during a cluster shutdown cannot be handled in the same
> > transition for each node, I usually add a step to disable all
> > resources using
> >
Hi All,
As you all know, migrating a resource actually means manipulating the
location constraint for that resource.
Is there any plan for an option to control a default timeout which is valid
for migrations and removes the 'cli-ban' and 'cli-prefer' location
constraints automatically after
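(Until such an option exists, those constraints have to be cleared by hand
after the move; a minimal sketch, using the srv01-test resource and node names
from elsewhere in the thread:)

  pcs resource move srv01-test el8-a01n02   # leaves a cli-prefer/cli-ban constraint behind
  pcs resource clear srv01-test             # removes the constraints created by move/ban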
fence_drac5, fence_drac (not sure about that), SBD
Best Regards, Strahil Nikolov
Sent from Yahoo Mail on Android
On Mon, Jan 25, 2021 at 11:23, Sharma, Jaikumar
wrote:
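(For the DRAC suggestion above, a minimal sketch of a fence_drac5 stonith
resource; the address, credentials and node name are placeholders, not from
the thread, and newer fence-agents releases spell the parameters
ip/username/password instead of ipaddr/login/passwd:)

  pcs stonith create fence-n1 fence_drac5 \
      ipaddr="192.0.2.10" login="root" passwd="secret" \
      pcmk_host_list="node1" op monitor interval="60s"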
Ok, that is exactly what one might expect -- and note that only the
failing node is in maintenance mode. The current master/primary is not in
maintenance mode, and on that node we continue to see messages in
pacemaker.log that seem to indicate that it is doing monitor operations.
Logically, if
>>> Strahil Nikolov wrote on 25.01.2021 at 12:28 in
message <1768184755.3488991.1611574085...@mail.yahoo.com>:
> Hi All,
> As you all know migrating a resource is actually manipulating the location
> constraint for that resource.
> Is there any plan for an option to control a default timeout
Hi!
I reconfigured my cluster to let it control virtlockd (instead of just enabling
it in systemd). However, I still have problems I don't understand:
When live-migrating a Xen PV I still get these messages:
Jan 25 12:38:06 h18 virtlockd[42724]: libvirt version: 6.0.0
Jan 25 12:38:06 h18
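(For context, one way to have the cluster control virtlockd rather than systemd
alone is a cloned systemd resource; this is only a sketch, not the poster's
actual configuration, and virtlockd's socket activation via virtlockd.socket
may need attention so the two don't fight over the service:)

  pcs resource create virtlockd systemd:virtlockd \
      op monitor interval="60s" clone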
> fence_drac5, fence_drac (not sure about that), SBD
Thank you Strahil, will play around and dig further.
Regards,
Jaikumar
Sent from Yahoo Mail on
On 1/25/21 9:51 AM, Jehan-Guillaume de Rorthais wrote:
> Hi Digimer,
>
> On Sun, 24 Jan 2021 15:31:22 -0500
> Digimer wrote:
> [...]
>> I had a test server (srv01-test) running on node 1 (el8-a01n01), and on
>> node 2 (el8-a01n02) I ran 'pcs cluster stop --all'.
>>
>> It appears like pacemaker
On Mon, Jan 25, 2021 at 12:07 PM Jehan-Guillaume de Rorthais
wrote:
> As actions during a cluster shutdown cannot be handled in the same transition
> for each node, I usually add a step to disable all resources using property
> "stop-all-resources" before shutting down the cluster:
>
> pcs
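(The property-based step quoted above looks roughly like this; a sketch, since
the actual commands are cut off in the preview:)

  pcs property set stop-all-resources=true   # stop everything before the shutdown
  pcs cluster stop --all
  # ...and after the cluster has been started again:
  pcs property set stop-all-resources=false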
On Mon, 25 Jan 2021 10:22:20 +0100
"Ulrich Windl" wrote:
> Maybe it's time for target-role=stopped">... in CIB ;-)
Could you elaborate on what the differences with "stop-all-resources" would be?
Kind regards,
> You need to:
> - Setup and TEST stonith
> - Add a 3rd node (even if it doesn't host any resources) or setup a
>   node for kronosnet
Could somebody please suggest a stonith fencing device/tool for DELL-33x
rack-mounted servers?
Thank you.
Regards
Jaikumar
>>> Jehan-Guillaume de Rorthais wrote on 25.01.2021 at 09:51 in
message <20210125095132.575f55aa@firost>:
> Hi Digimer,
>
> On Sun, 24 Jan 2021 15:31:22 -0500
> Digimer wrote:
> [...]
>> I had a test server (srv01-test) running on node 1 (el8-a01n01), and on
>> node 2 (el8-a01n02) I ran
Hi Digimer,
On Sun, 24 Jan 2021 15:31:22 -0500
Digimer wrote:
[...]
> I had a test server (srv01-test) running on node 1 (el8-a01n01), and on
> node 2 (el8-a01n02) I ran 'pcs cluster stop --all'.
>
> It appears like pacemaker asked the VM to migrate to node 2 instead of
> stopping it. Once
I think that it makes sense, as '--all' should mean 'reach all servers and
shut down there'. Yet, when you run 'pcs cluster stop', migrating the
resources away is the only option.
Still, it sounds like a bug.
Best Regards, Strahil Nikolov
Sent from Yahoo Mail on Android
On Sun, Jan 24,
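(Not from the thread, but a related per-resource workaround for the
shutdown-triggers-migration behaviour discussed above is to disable the guests
before stopping the cluster; a minimal sketch using the srv01-test resource
name from earlier:)

  pcs resource disable srv01-test   # sets target-role=Stopped, so no migration is attempted
  pcs cluster stop --all
  # after the cluster is started again:
  pcs resource enable srv01-test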