>>> Ken Gaillot wrote on 22.01.2021 at 00:51 in message:

Hi all,

A recurring request we've seen from Pacemaker users is a feature called
"non-critical resources" in a proprietary product and "independent
subtrees" in the old rgmanager project.

An example is a large database with an occasionally used reporting
tool. The reporting tool is colocated or

>>> Ken Gaillot wrote on 21.01.2021 at 17:24 in message
<28f8b077a30233efa41d04688eb21e82c8432ddd.ca...@redhat.com>:
On Thu, 2021-01-21 at 08:19 +0100, Ulrich Windl wrote:
> Hi!
>
> I have a question about utilization-based resource placement
> (specifically: placement-strategy=balanced):
> Assume you have two resource capacities (say A and B) on each node,
> and each resource also has a utilization parameter
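The question above can be made concrete with a minimal CIB sketch (node and resource names here are illustrative, not taken from the thread): two capacity attributes A and B are declared on each node, each resource declares its own utilization, and placement-strategy is set to balanced:

```xml
<cib>
  <configuration>
    <crm_config>
      <cluster_property_set id="cib-bootstrap-options">
        <!-- balanced: spread resources to even out remaining capacity -->
        <nvpair id="opt-placement" name="placement-strategy" value="balanced"/>
      </cluster_property_set>
    </crm_config>
    <nodes>
      <node id="1" uname="node1">
        <!-- capacities A and B offered by this node -->
        <utilization id="node1-util">
          <nvpair id="node1-util-A" name="A" value="100"/>
          <nvpair id="node1-util-B" name="B" value="100"/>
        </utilization>
      </node>
    </nodes>
    <resources>
      <primitive id="rsc1" class="ocf" provider="heartbeat" type="Dummy">
        <!-- how much of A and B this resource consumes when placed -->
        <utilization id="rsc1-util">
          <nvpair id="rsc1-util-A" name="A" value="30"/>
          <nvpair id="rsc1-util-B" name="B" value="10"/>
        </utilization>
      </primitive>
    </resources>
  </configuration>
</cib>
```

A node is only eligible for a resource if every declared utilization attribute of the resource fits within the node's remaining capacity for that attribute.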
On Wed, Aug 19, 2020 at 01:10:08AM -0400, Digimer wrote:
> 3. We changed DRBD from v8.4 to 9.0, and this meant a few things had to
> change. We will integrate support for short-throw DR hosts (async "third
> node" in DRBD that is outside pacemaker). We run the resources to only
> allow a single
Hi Ulrich,
Is the problem reproducible consistently? Could you share your
Pacemaker CRM configuration and the versions of your OS, lvm2, and
resource-agents packages?

I suspect the problem was caused by the lvmlockd resource agent
script, which does not handle this corner case correctly.
Thanks
Gang
>>> Gang He wrote on 21.01.2021 at 11:30 in message
<59b543ee-0824-6b91-d0af-48f66922b...@suse.com>:
> Hi Ulrich,
>
> Is the problem reproducible consistently? Could you share your
> Pacemaker CRM configuration and the versions of your OS, lvm2, and
> resource-agents packages?
OK, the problem
Hi!
I have a problem: for testing I had configured lvmlockd. Now that the tests
have ended, no LVM is used for cluster resources anymore, but lvmlockd is
still configured.
Unfortunately I ran into this problem:
One OCFS2 mount was unmounted successfully; another, holding the lockspace for