Public bug reported:

1. Version
kilo 2015.1.0, liberty


This bug is based on the blueprint:
https://blueprints.launchpad.net/nova/+spec/input-output-based-numa-scheduling


In the current implementation:

################################################################################
    def _filter_pools_for_numa_cells(pools, numa_cells):
        # Some systems don't report numa node info for pci devices, in
        # that case None is reported in pci_device.numa_node, by adding None
        # to numa_cells we allow assigning those devices to instances with
        # numa topology
        numa_cells = [None] + [cell.id for cell in numa_cells]
################################################################################

If a compute node does not report NUMA node info for its PCI devices,
those devices are by default treated as belonging to every NUMA node.

This can lead to a problem:
a PCI device may be assigned to an instance even though it is not on
the NUMA node where the instance's CPU and memory are placed.
In that case, the real purpose of I/O (PCIe) based NUMA scheduling is
not achieved.
Worse, the user will wrongly believe the PCI device is on the same
NUMA node as the CPU and memory.

The truth is that many systems still do not report NUMA node info for
PCI devices, so I think this bug needs to be fixed.

** Affects: nova
     Importance: Undecided
     Assignee: jinquanni(ZTE) (ni-jinquan)
         Status: New


** Tags: numa pci

** Changed in: nova
     Assignee: (unassigned) => jinquanni(ZTE) (ni-jinquan)

** Tags added: numa pci

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1551504

Title:
  I/O (PCIe) Based NUMA Scheduling  can't really achieve pci numa
  binding in some cases.

Status in OpenStack Compute (nova):
  New

