Sorry for sending what seem to be two messages on the same subject: I thought Outlook 
had "swallowed" the first message when trying to convert from plain text to 
HTML...

-----Original Message-----
From: Users <users-boun...@clusterlabs.org> On Behalf Of Windl, Ulrich
Sent: Wednesday, October 11, 2023 10:35 AM
To: Cluster Labs - All topics related to open-source clustering welcomed 
<users@clusterlabs.org>
Subject: [EXT] Re: [ClusterLabs] Re: Limit the number of resources 
starting/stopping in parallel possible?

Hi!

I wish there were a better mechanism that does not treat all resources as 
having the same weight:
Imagine you could assign a "score of heaviness" to each resource, and you could 
define a limit on the total "heaviness" in progress (either per node or 
cluster-wide, thinking of shared filesystems)... 😉

Kind regards,
Ulrich

-----Original Message-----
From: Users <users-boun...@clusterlabs.org> On Behalf Of Knauf Steffen
Sent: Tuesday, September 19, 2023 10:11 AM
To: Cluster Labs - All topics related to open-source clustering welcomed 
<users@clusterlabs.org>
Subject: [EXT] Re: [ClusterLabs] Limit the number of resources starting/stopping 
in parallel possible?

Hi Ken,

that sounds good. I'll test the option. Perhaps we'll change something on the 
resource type, too. Our systemd resources do some things with Docker containers 
(start, stop, ...). We need the real status of the dockerized application 
(perhaps via a REST endpoint); "up & running" does not correspond to the actual 
state of the dockerized application. But that's another topic 😉

Thanks and greets

Steffen
________________________________

From: Users <users-boun...@clusterlabs.org> on behalf of Ken Gaillot 
<kgail...@redhat.com>
Sent: Monday, September 18, 2023 16:36
To: Cluster Labs - All topics related to open-source clustering welcomed 
<users@clusterlabs.org>
Subject: Re: [ClusterLabs] Limit the number of resources starting/stopping in 
parallel possible? 
 
On Mon, 2023-09-18 at 14:24 +0000, Knauf Steffen wrote:
> Hi,
> 
> we have multiple clusters (2 node + quorum setup) with more than 100
> resources (10 x VIP + 90 microservices) per node.  
> If the resources are stopped/started at the same time, the server is
> under heavy load, which may result in timeouts and an unresponsive
> server. 
> We configured some ordering constraints (VIP --> microservice). Is
> there a way to limit the number of resources starting/stopping in
> parallel?
> Perhaps you have some other tips for handling such a situation.
> 
> Thanks & greets
> 
> Steffen
> 

Hi,

Yes, see the batch-limit cluster option:

https://clusterlabs.org/pacemaker/doc/2.1/Pacemaker_Explained/html/options.html#cluster-options
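For example, the option can be set cluster-wide with pcs, or with crm_attribute directly (the value 5 below is only an illustration; tune it to your hardware):

```shell
# Allow at most 5 actions (starts/stops/etc.) to run in parallel
# across the cluster during a transition.
pcs property set batch-limit=5

# Equivalent, using crm_attribute directly:
crm_attribute --type crm_config --name batch-limit --update 5
```

A batch-limit of 0 means no limit, so any positive value will throttle how many of the 100 resources are acted on at once.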

-- 
Ken Gaillot <kgail...@redhat.com>

_______________________________________________
Manage your subscription:
https://lists.clusterlabs.org/mailman/listinfo/users

ClusterLabs home: https://www.clusterlabs.org/
