** Tags removed: verification-needed
** Tags added: verification-done
--
You received this bug notification because you are a member of Ubuntu
Bugs, which is subscribed to Ubuntu.
https://bugs.launchpad.net/bugs/1978489
Title:
libvirt / cgroups v2: cannot boot instance with more than 16 CPUs
This bug was fixed in the package nova - 3:25.2.1-0ubuntu2~cloud0
---
nova (3:25.2.1-0ubuntu2~cloud0) focal-yoga; urgency=medium
.
* New update for the Ubuntu Cloud Archive.
.
nova (3:25.2.1-0ubuntu2) jammy; urgency=medium
.
*
This bug was fixed in the package nova - 3:25.2.1-0ubuntu2
---
nova (3:25.2.1-0ubuntu2) jammy; urgency=medium
* d/p/libvirt-remove-default-cputune-shares-value.patch:
Enable launch of instances with more than 9 CPUs on Jammy
(LP: #1978489).
-- Corey Bryant Tue, 16 Jan
comment #17 stated that noble and mantic have the patch, so I'm marking
the noble (devel) task as fix released.
** Changed in: nova (Ubuntu)
Status: Confirmed => Fix Released
--
Forgot to add to ^ that, instead of removing the default weight (1024 *
guest.vcpus), might it not have made sense to simply cap it at the max
allowed value? Again, perhaps something that could be proposed to Nova
as a new patch.
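For what it's worth, the cap-instead-of-remove idea sketches out very simply; a minimal shell illustration, assuming the cgroups v2 cpu.weight maximum of 10000 discussed in this thread:

```shell
# Clamp Nova's historic default (1024 * vcpus) to the cgroups v2 ceiling
# instead of dropping the tuning entirely.
vcpus=16
cap=10000                    # cpu.weight maximum under cgroups v2
weight=$((1024 * vcpus))     # 16384, which the kernel would reject
if [ "$weight" -gt "$cap" ]; then
  weight=$cap
fi
echo "$weight"               # 10000
```

Whether Nova upstream would accept a clamp rather than the removal that landed is an open question; this only shows the mechanics.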
--
As a recap, this patch addresses the problem of moving VMs between hosts
running cgroups v1 (e.g. Ubuntu Focal) and cgroups v2 (e.g. Ubuntu
Jammy), which now has a cap of 10K [1] for cpu.weight, resulting in VMs
with > 9 vcpus failing to boot if they use Nova's default of 1024 *
guest.vcpus. The patch
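The arithmetic behind the 9-vcpu boundary is easy to check directly (10000 being the cgroups v2 cpu.weight ceiling cited above):

```shell
# 1024 * vcpus against the cgroups v2 cpu.weight ceiling of 10000:
for vcpus in 9 10; do
  echo "$vcpus vcpus -> weight $((1024 * vcpus))"
done
# 9 vcpus gives 9216 (accepted); 10 vcpus gives 10240 (over the cap)
```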
Re:
> The same patch should also be available on cloud archive cloud:focal-yoga
This will happen alongside the changes being made into 22.04 - the
updates are in the yoga-proposed pocket at the moment.
** Also affects: cloud-archive
Importance: Undecided
Status: New
** Also affects:
I think that the challenge of how to update the cpu tuning for all
existing running instances is solvable.
a) quota:cpu_* is an additional property for a flavor and as such can be
updated (applying to new instances created).
b) Using the virsh tool, it's possible to live-set the scheduling tuning
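For illustration, the two remediation paths above might look like the following. The flavor and domain names are placeholders, and the commands are echoed rather than executed here since they require a live deployment:

```shell
FLAVOR=m1.xlarge            # placeholder flavor name
DOMAIN=instance-00000042    # placeholder libvirt domain name

# a) Cap the shares for instances created from this flavor going forward:
echo "openstack flavor set $FLAVOR --property quota:cpu_shares=10000"

# b) Live-adjust an already-running guest via libvirt's scheduler tuning:
echo "virsh schedinfo $DOMAIN --live --set cpu_shares=10000"
```

Note that (a) only applies to instances created (or resized) after the flavor change, which is why (b) is needed for the existing fleet.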
> This behavior can be recovered by setting the quota:cpu_shares flavor
extra spec.
You are the openstack experts here, but I will point out that it looks
like comment #10 already tried this.
That comment also ends with: "Is there any workaround to rebuilding
hundreds of instances like force