Is this still a problem we need to track? Mitaka has long been end of life
upstream at this point, so I'm not even sure this is still a problem on the
upstream stable branches to which we could backport a fix.

** Changed in: nova
     Assignee: Stephen Finucane (stephenfinucane) => (unassigned)

** Changed in: nova
       Status: In Progress => Won't Fix

https://bugs.launchpad.net/bugs/1636338

Title:
  Numa topology not calculated for instance with numa_topology after
  upgrading to Mitaka

Status in OpenStack Compute (nova):
  Won't Fix

Bug description:
  This is related to this bug
  https://bugs.launchpad.net/nova/+bug/1596119

  After upgrading to Mitaka with the above patch, a new bug surfaced. The
  bug is related to InstanceNUMACell having cpu_policy set to None, which
  causes cpu_pinning_requested to always return False:
  https://github.com/openstack/nova/blob/master/nova/objects/instance_numa_topology.py#L112
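
  A minimal sketch of why that check misfires (an illustration, not the
  exact nova source): an InstanceNUMACell deserialized from a pre-upgrade
  record can carry a populated cpu_pinning but a cpu_policy of None, so the
  comparison against CPUAllocationPolicy.DEDICATED always returns False.

    class CPUAllocationPolicy:
        DEDICATED = 'dedicated'
        SHARED = 'shared'

    class InstanceNUMACell:
        def __init__(self, cpu_policy=None, cpu_pinning=None):
            self.cpu_policy = cpu_policy          # None for legacy records
            self.cpu_pinning = cpu_pinning or {}  # e.g. {0: 4, 1: 5}

        @property
        def cpu_pinning_requested(self):
            # False whenever cpu_policy is None, even if pins are present
            return self.cpu_policy == CPUAllocationPolicy.DEDICATED

    legacy_cell = InstanceNUMACell(cpu_policy=None,
                                   cpu_pinning={0: 4, 1: 5})
    assert legacy_cell.cpu_pinning_requested is False  # pins are ignored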

  This then tricks compute hosts running old NUMA instances into thinking
  that nothing is pinned, so new instances with cpu_policy set to
  CPUAllocationPolicy.DEDICATED can be scheduled onto the same NUMA node
  and overlap the already-pinned CPUs.
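
  Continuing the sketch above (the helper below is hypothetical, not nova's
  actual accounting code): if the host only counts pins from cells whose
  cpu_pinning_requested is True, the legacy cell's pinned CPUs become
  invisible, and a new DEDICATED instance can be handed the same host CPUs.

    def pinned_cpus_on_host(instance_cells):
        # Hypothetical accounting: only cells that request pinning count.
        pinned = set()
        for cell in instance_cells:
            if cell.cpu_pinning_requested:     # False for the legacy cell
                pinned.update(cell.cpu_pinning.values())
        return pinned

    # The legacy instance's pins (host CPUs 4 and 5) are not counted,
    # so nothing stops a new DEDICATED instance from reusing them.
    assert pinned_cpus_on_host([legacy_cell]) == set()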

