** Package changed: nova (Ubuntu) => nova
--
You received this bug notification because you are a member of Ubuntu
Bugs, which is subscribed to Ubuntu.
https://bugs.launchpad.net/bugs/1950186
Title:
Nova doesn't account for hugepages when scheduling VMs
** Tags added: sts
I guess the only way would be to work with custom extra specs inside
flavors/images, which can be quite a hassle and prone to (human) error,
especially when someone forgets to set them for new flavors. Otherwise I
can't think of any way to control the scheduling for mixed
memory-backend nodes.
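For reference, the per-flavor workaround described above would look roughly like this (a sketch only; the flavor names and page sizes are illustrative, and as noted, every flavor and image has to carry the property for it to help):

```shell
# Force instances of this flavor onto regular 4 KiB pages,
# so the scheduler's NUMA fitting logic is engaged
openstack flavor set m1.large --property hw:mem_page_size=small

# Force instances of this flavor onto explicit 2 MiB hugepages
openstack flavor set m1.large.hugepages --property hw:mem_page_size=2MB
```

Valid values for hw:mem_page_size are small, large, any, or an explicit size such as 2MB or 1GB.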
Status changed to 'Confirmed' because the bug affects multiple users.
** Changed in: nova (Ubuntu)
Status: New => Confirmed
Even using a flavor with hw:mem_page_size='small' I am still able to
request more memory than is physically available.
Meanwhile, the update of max_unit seems to be reverted to the original
value when the compute node refreshes its records, so it can't be used
as a valid workaround.
Another suggestion was to limit the 'max_unit' value for hypervisors
with this memory configuration to the total memory minus the
hugepage-reserved memory; this means the maximum footprint of a single
VM is limited, so one small-page guest can no longer claim the hugepage
region.
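The arithmetic behind that suggestion is simple; a toy calculation (not Nova code, and the sizes are made up for illustration):

```python
# Suggested cap for placement's MEMORY_MB max_unit on a hypervisor
# that reserves part of its RAM as hugepages at boot.
total_mb = 262144            # 256 GiB of physical RAM
hugepage_mb = 32 * 1024      # 32 x 1 GiB hugepages carved out at boot

# Cap any single allocation at the memory NOT reserved for hugepages,
# so a small-page VM cannot be scheduled into the hugepage region.
max_unit = total_mb - hugepage_mb
print(max_unit)  # 229376
```

As noted above, though, the compute node's periodic inventory refresh reverts a manually lowered max_unit, so this only works if Nova itself applies the cap.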
Discussed with the Nova team and this is a known issue at the moment:
mixing instance types with and without NUMA configuration features such
as hugepages will create this type of issue.
The placement API (which is used for scheduling) does not track
different page sizes, so it can't deal with this scenario.
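A toy model of that mismatch (an illustration only, not Nova's actual code; all numbers are invented): placement advertises one undifferentiated MEMORY_MB inventory, so a request that fits on paper can still land on memory that is really reserved for hugepage guests.

```python
# Placement's view: a single MEMORY_MB pool with no notion of page size.
total_mb = 262144
hugepage_reserved_mb = 131072      # carved out at boot for hugepage guests
inventory_mb = total_mb            # placement still advertises the full total

request_mb = 200000                # small-page VM asking for ~195 GiB

fits_per_placement = request_mb <= inventory_mb                    # scheduler says yes
fits_in_reality = request_mb <= total_mb - hugepage_reserved_mb    # host says no
print(fits_per_placement, fits_in_reality)  # True False
```

The gap between the two booleans is exactly the bug: the scheduler admits the VM, and the host then fails or overcommits at boot time.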
This can be reproduced on Focal/Ussuri:
Computes:
$ os resource provider list
(output truncated; the table begins with the uuid column)