Hi!
Obviously the logs around '2021-05-11 16:15:42 +02:00' would be interesting.
Apart from that, I'll have to try that drink named "quo rum" soon... ;-)
Regards,
Ulrich
>>> wrote on 11.05.2021 at 16:43 in message
:
> Hi,
>
> I'm using a CentOS 8.3.2011 with a pacemaker-2.0.4-6.el8_3.1.x86_64
A single location constraint may have multiple rules; I would assume pcs
supports it. It is certainly supported by crmsh.
Yes, it is supported by pcs. First, create a location rule constraint
with 'pcs constraint location ... rule'. Then you can add more rules to
it with 'pcs constraint rule add'.
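To make that concrete, a minimal sketch of the two-step approach, using hypothetical names (DummyRes for the resource; the constraint id location-DummyRes is only a guess, check the real one with 'pcs constraint --full'):

# the first rule goes in when the constraint is created
pcs constraint location DummyRes rule score=100 \#uname eq node1
# find the generated constraint id, then attach a second rule to it
pcs constraint --full
pcs constraint rule add location-DummyRes score=50 \#uname eq node2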
On 11.05.2021 20:30, Andrei Borzenkov wrote:
> On 11.05.2021 19:03, Vladislav Bogdanov wrote:
>> Hi.
>>
>> Try
>> order o_fs_drbd0_after_ms_drbd0 Mandatory: ms_drbd0:promote fs_drbd0:start
>>
>
> This seems to work, but is not "start" implied when no operation is
> explicitly specified? I.e. both constraints are expected to be
> completely identical?
On 11.05.2021 19:03, Vladislav Bogdanov wrote:
> Hi.
>
> Try
> order o_fs_drbd0_after_ms_drbd0 Mandatory: ms_drbd0:promote fs_drbd0:start
>
This seems to work, but is not "start" implied when no operation is
explicitly specified? I.e. both constraints are expected to be
completely identical?
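One way to settle this is to look at what actually lands in the CIB for each spelling, e.g. with the constraint id used in this thread:

# dump the constraints section of the live CIB and compare
# first-action / then-action on o_fs_drbd0_after_ms_drbd0
cibadmin --query --scope constraints

Worth noting: if then-action ends up unset in the CIB, Pacemaker documents its default as the value of first-action (promote here), not start, so the two spellings are only identical if crmsh fills the action in.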
Hi,
On 11. 05. 21 at 17:31, Andrei Borzenkov wrote:
On 11.05.2021 18:20, Alastair Basden wrote:
Hi,
So, I think the following would do it:
pcs constraint location resourceClone rule role=master score=100 uname eq node1
pcs constraint location resourceClone rule role=master score=50 uname eq node2
Hi.
Try
order o_fs_drbd0_after_ms_drbd0 Mandatory: ms_drbd0:promote fs_drbd0:start
On May 11, 2021 6:35:58 PM Andrei Borzenkov wrote:
While testing a drbd cluster I found errors (drbd device busy) when
stopping the drbd master with a mounted filesystem. I do have
order o_fs_drbd0_after_ms_drbd0 Mandatory: ms_drbd0:promote fs_drbd0
On 11.05.2021 18:20, Alastair Basden wrote:
> Hi,
>
> So, I think the following would do it:
> pcs constraint location resourceClone rule role=master score=100 uname
> eq node1
> pcs constraint location resourceClone rule role=master score=50 uname eq
> node2
>
A single location constraint may have multiple rules; I would assume pcs
supports it. It is certainly supported by crmsh.
While testing a drbd cluster I found errors (drbd device busy) when
stopping the drbd master with a mounted filesystem. I do have
order o_fs_drbd0_after_ms_drbd0 Mandatory: ms_drbd0:promote fs_drbd0
and I assumed pacemaker automatically does the reverse as "first stop, then
demote". It does not - umount and d
Hi,
So, I think the following would do it:
pcs constraint location resourceClone rule role=master score=100 uname eq node1
pcs constraint location resourceClone rule role=master score=50 uname eq node2
But, I'm unsure about uname. In the xml example you showed, it had
#uname. Is that correct,
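For what it's worth, the built-in node attribute in rule expressions is spelled #uname, and on a shell command line it usually needs escaping so the '#' is not taken as the start of a comment, e.g. (same hypothetical scores and resource name as above):

pcs constraint location resourceClone rule role=master score=100 \#uname eq node1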
On 11.05.2021 17:43, fatcha...@gmx.de wrote:
> Hi,
>
> I'm using a CentOS 8.3.2011 with a pacemaker-2.0.4-6.el8_3.1.x86_64 +
> corosync-3.0.3-4.el8.x86_64 and kmod-drbd90-9.0.25-2.el8_3.elrepo.x86_64.
> The cluster consists of two nodes which are providing a ha-mariadb with the
> help of two drbd devices for the database and the logfiles.
Here is the example I had promised:
pcs node attribute server1 city=LA
pcs node attribute server2 city=NY
# Don't run on any node that is not in LA
pcs constraint location DummyRes1 rule score=-INFINITY city ne LA
# Don't run on any node that is not in NY
pcs constraint location DummyRes2 rule score=-INFINITY city ne NY
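A quick way to double-check a setup like this once it is in place, assuming the attribute and resource names above:

# list the node attributes that were set
pcs node attribute
# show the resulting location constraints and their scores
pcs constraint location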
Oh, wrong thread, just ignore.
Best Regards
On Tue, May 11, 2021 at 13:54, Strahil Nikolov wrote:
Here is the example I had promised:
pcs node attribute server1 city=LA
pcs node attribute server2 city=NY
# Don't run on any node that is not in LA
pcs constraint location DummyRes1 rule score=-INFINITY city ne LA
Here is the example I had promised:
pcs node attribute server1 city=LA
pcs node attribute server2 city=NY
# Don't run on any node that is not in LA
pcs constraint location DummyRes1 rule score=-INFINITY city ne LA
# Don't run on any node that is not in NY
pcs constraint location DummyRes2 rule score=-INFINITY city ne NY
On 2021-05-11 10:43 a.m., fatcha...@gmx.de wrote:
> Hi,
>
> I'm using a CentOS 8.3.2011 with a pacemaker-2.0.4-6.el8_3.1.x86_64 +
> corosync-3.0.3-4.el8.x86_64 and kmod-drbd90-9.0.25-2.el8_3.elrepo.x86_64.
> The cluster consists of two nodes which are providing a ha-mariadb with the
> help of two drbd devices for the database and the logfiles.
Hi,
I'm using a CentOS 8.3.2011 with a pacemaker-2.0.4-6.el8_3.1.x86_64 +
corosync-3.0.3-4.el8.x86_64 and kmod-drbd90-9.0.25-2.el8_3.elrepo.x86_64.
The cluster consists of two nodes which are providing a ha-mariadb with the
help of two drbd devices for the database and the logfiles. The corosync
On Tue, May 11, 2021 at 10:50 AM Alastair Basden wrote:
>
> Hi Andrei, all,
>
> So, what I want to achieve is that if both nodes are up, node1
> preferentially has drbd as master. If that node fails, then node2 should
> become master. If node1 then comes back online, it should become master
> again.
In fact, this link seems to be almost what I want to do:
https://clusterlabs.org/pacemaker/doc/en-US/Pacemaker/1.1/html/Clusters_from_Scratch/_configure_the_cluster_for_drbd.html
The only missing parts are:
1. Avoid node3 and node4.
2. Preferentially run on node1 when it becomes active again.
W
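On point 2, whether the resource actually moves back to node1 also depends on resource-stickiness: a stickiness value equal to or higher than node1's location preference can keep it on node2 after a failover. A way to see whether a cluster-wide default is set (assuming it would have been configured via rsc_defaults):

# shows resource-stickiness and other rsc_defaults, if any are set
pcs resource defaults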
Hi Andrei, all,
So, what I want to achieve is that if both nodes are up, node1
preferentially has drbd as master. If that node fails, then node2 should
become master. If node1 then comes back online, it should become master
again.
I also want to avoid node3 and node4 ever running drbd, sin
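Putting the pieces from this thread together, a sketch of one way to express those preferences with pcs (resourceClone standing in for the real drbd master/clone resource; the scores are arbitrary as long as node1's is higher than node2's):

# prefer node1 for the master role, fall back to node2
pcs constraint location resourceClone rule role=master score=100 \#uname eq node1
pcs constraint location resourceClone rule role=master score=50 \#uname eq node2
# never run the drbd clone on node3 or node4
pcs constraint location resourceClone avoids node3=INFINITY node4=INFINITY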