>>> Klaus Wenninger <kwenn...@redhat.com> wrote on 06.04.2021 at 14:38 in
message <8dc001d0-c3ed-d9d1-ea91-3a7ebd83b...@redhat.com>:
> On 3/29/21 12:47 PM, Reid Wahl wrote:
>>
>>
>> On Mon, Mar 29, 2021 at 3:35 AM Ulrich Windl
>> <ulrich.wi...@rz.uni-regensburg.de> wrote:
>>
>>     >>> d tbsky <tbs...@gmail.com> wrote on 29.03.2021 at 04:01 in
>>     message
>>     <CAC6SzHLi0ufVhE3RM57e2V=t_moml5ecx8ay3gtcfgmofkd...@mail.gmail.com>:
>>     > Hi:
>>     >    since starting/stopping the VMs all at once will consume disk IO,
>>     > I want to start/stop the VMs one-by-one, with a delay.
>>
>>     I'm surprised that in these days of fast disks and SSDs this is
>>     still an issue.
>>     Maybe don't delay the start, but limit concurrent starts.
>>     Or maybe add some weak ordering between the VMs.
>>
>>
>> kind=Serialize does this. It makes the resources start consecutively,
>> in no particular order. I added the comment about ocf:heartbeat:Delay
>> because D mentioned wanting a delay... but I don't see why it would be
>> necessary if Serialize is used.
> Wasn't the reason behind all of that the fact that VMs claim to be started
> once the hypervisor is running (at least without any additional measures),
> so that as a consequence starting them serialized wouldn't change much?
> Hence the delay (or something more elaborate that actually detects when the
> services in a VM are mostly up and no longer imposing lots of IO or other
> load).
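
(Regarding kind=Serialize: for anyone following along, such an ordering could
look roughly like this with pcs -- vm1..vm3 are just placeholder names for the
VM resources, so adjust to taste:

    pcs constraint order set vm1 vm2 vm3 setoptions kind=Serialize

Note that this only serializes the start/stop operations of the resources; it
says nothing about what is going on inside the guests.)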

As for the VMs claiming to be started too early: that's true. If you boot a
typical PVM, the hypervisor considers the VM started while pvgrub is still
sitting in its boot menu. Then there is some I/O for loading the kernel and
initrd, and only after that does the real I/O begin.
Maybe limiting the I/O bandwidth would be preferable to serializing starts.
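
For example, with libvirt something along these lines should throttle a
guest's disk I/O (domain name "vm1", target device "vda" and the limit are
made-up values, untested here):

    # cap the guest's disk to roughly 50 MB/s, for the running domain and
    # persistently in its configuration
    virsh blkdeviotune vm1 vda --total-bytes-sec 52428800 --live --config

Whether that is nicer than serializing probably depends on how badly the boot
storm actually hurts.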

