[Yahoo-eng-team] [Bug 1749045] [NEW] Used mem in numa_topology do not include mem used by instance which is not fix mem_page_size

2018-02-12 Thread liuxiuli
Public bug reported:

Used memory in numa_topology does not include memory used by instances that do not set mem_page_size.
For example:
An instance that does not set hw:mem_page_size but does set hw:cpu_policy consumes memory backed by 4K pages, yet the host's numa_topology does not count this memory in its used 4K-page total.
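The accounting gap can be illustrated with a toy model (the names and data structures below are illustrative only, not nova's real objects):

```python
# Hypothetical sketch of the reported gap: an instance with a dedicated
# CPU policy but no hw:mem_page_size still consumes 4K host pages, yet
# the host cell's 4K-page usage is only updated when the flavor sets an
# explicit page size.
PAGE_4K = 4  # page size in KiB

def used_small_pages(instances):
    """Sum 4K-page usage, mirroring the buggy behaviour: instances
    without an explicit page size are skipped entirely."""
    used = 0
    for inst in instances:
        if inst.get("mem_page_size") == PAGE_4K:  # bug: None is ignored
            used += inst["memory_kb"] // PAGE_4K
    return used

instances = [
    {"memory_kb": 2048 * 1024, "mem_page_size": 4},     # explicit 4K pages
    {"memory_kb": 2048 * 1024, "mem_page_size": None},  # dedicated CPUs, no page size
]
# Only the first instance's pages are counted; the second instance's
# identical 4K-page consumption is invisible to the host's accounting.
print(used_small_pages(instances))
```

Both instances consume the same number of 4K pages, but the tracked total covers only the first one.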

** Affects: nova
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1749045

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1749045/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1714895] [NEW] Instance failed to create when numa node has enough free vcpus but has not enough sibling vcpus

2017-09-04 Thread liuxiuli
Public bug reported:

Instance creation fails when a NUMA node has enough free vCPUs but not enough sibling vCPUs.
eg:
The host has the following NUMA topology:
node 0: 0-7,16-23
node 1: 8-15,24-31
vcpu_pin_set is 1-7,16-23,7-15,24-31
I want to create an instance with 30 vCPUs using a flavor that has the extra_specs hw:cpu_policy=dedicated and hw:numa_nodes=2. But it fails on the following condition in the _get_pinning function:

    if threads_no * len(sibling_set) < (
            len(instance_cores) + num_cpu_reserved):
        return None, None

because:
threads_no=1, len(sibling_set)=7, len(instance_cores)=15 and threads_no=2, len(sibling_set)=7, len(instance_cores)=15
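Plugging the reported numbers into that guard shows why both thread counts are rejected (a standalone sketch; the condition is copied from the report, the numbers are hard-coded rather than derived from a live host):

```python
# Reproduce the failing check from _get_pinning with the values given
# in this report: per NUMA cell the instance needs 15 dedicated vCPUs,
# but only 7 sibling groups are usable after vcpu_pin_set is applied.
def pinning_rejected(threads_no, sibling_set_len, instance_cores_len,
                     num_cpu_reserved=0):
    # The guard: not enough sibling capacity on this cell -> give up.
    return threads_no * sibling_set_len < (instance_cores_len + num_cpu_reserved)

print(pinning_rejected(1, 7, 15))  # 7 < 15  -> True, rejected
print(pinning_rejected(2, 7, 15))  # 14 < 15 -> True, rejected
```

With either one or two threads per sibling group, 7 groups can never cover 15 requested cores, so the node is rejected even though it has enough free vCPUs overall.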

** Affects: nova
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1714895

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1714895/+subscriptions



[Yahoo-eng-team] [Bug 1712778] [NEW] a typo of host_topology_and_format_from_host doc

2017-08-24 Thread liuxiuli
Public bug reported:

The docstring currently reads:
  "Identify the type received and return either an instance of objects.NUMATopology if host's NUMA topology is available, else None."
It should read:
  "Identify the type received and return either an instance of objects.NUMATopology if host's NUMA topology is available, or None."

** Affects: nova
 Importance: Undecided
 Assignee: liuxiuli (liu-lixiu)
 Status: New

** Changed in: nova
 Assignee: (unassigned) => liuxiuli (liu-lixiu)

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1712778

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1712778/+subscriptions



[Yahoo-eng-team] [Bug 1712763] [NEW] a typo of format_cpu_spec doc

2017-08-24 Thread liuxiuli
Public bug reported:

"It allow_ranges is true" should be "If allow_ranges is true"

** Affects: nova
 Importance: Undecided
     Assignee: liuxiuli (liu-lixiu)
 Status: New

** Changed in: nova
 Assignee: (unassigned) => liuxiuli (liu-lixiu)

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1712763

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1712763/+subscriptions



[Yahoo-eng-team] [Bug 1704293] [NEW] We can not set volume's type when creating a vm from image by creating a volume

2017-07-14 Thread liuxiuli
Public bug reported:

Description
===
We often want to know the type of the volume used by a VM, but we cannot set the volume's type when creating a VM from an image by creating a new volume.

** Affects: nova
 Importance: Undecided
 Assignee: liuxiuli (liu-lixiu)
 Status: New

** Changed in: nova
 Assignee: (unassigned) => liuxiuli (liu-lixiu)

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1704293

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1704293/+subscriptions



[Yahoo-eng-team] [Bug 1658415] [NEW] The name of _get_vm_ref_from_uuid is not consistent with its implementation

2017-01-22 Thread liuxiuli
Public bug reported:

The name _get_vm_ref_from_uuid suggests that it gets a vm_ref by instance UUID, but its implementation gets the vm_ref by instance name. The function name should be changed to be consistent with its implementation.

** Affects: nova
 Importance: Low
 Assignee: liuxiuli (liu-lixiu)
 Status: New

** Changed in: nova
 Assignee: (unassigned) => liuxiuli (liu-lixiu)

** Changed in: nova
   Importance: Undecided => Low

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1658415

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1658415/+subscriptions



[Yahoo-eng-team] [Bug 1599435] [NEW] help of numa_get_reserved_huge_pages is not accurate.

2016-07-06 Thread liuxiuli
Public bug reported:

version: master

problem:
The help text of numa_get_reserved_huge_pages is not accurate. It reads:
  raises: exceptionInvalidReservedMemoryPagesOption is option is not correctly set.
It should read:
  raises: exception InvalidReservedMemoryPagesOption when the reserved_huge_pages option is not correctly set.

** Affects: nova
 Importance: Low
 Assignee: liuxiuli (liu-lixiu)
 Status: New

** Changed in: nova
 Assignee: (unassigned) => liuxiuli (liu-lixiu)

** Changed in: nova
   Importance: Undecided => Low

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1599435

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1599435/+subscriptions



[Yahoo-eng-team] [Bug 1599417] [NEW] NUMATopologyFilter will not choose numa node which has sibling but not request and other vcpus match request when only one node has sibling while host has two node

2016-07-06 Thread liuxiuli
Public bug reported:

version: master
problem:
When cpu_thread_policy is prefer, NUMATopologyFilter will not choose a NUMA node whose sibling vCPUs alone do not satisfy the request, even though its remaining vCPUs do.

for example:
the host has the following CPU info:
NUMA node0 CPU(s): 0-7,16-23
NUMA node1 CPU(s): 8-15,24-31

With vcpu_pin_set set to 0,16-23, node 0's sibling set is [0,16] and node 1's sibling set is [].
When I boot an instance with "hw:cpu_policy": "dedicated", "hw:cpu_thread_policy": "prefer" and 4 vCPUs in the flavor, NUMATopologyFilter does not choose node 0.
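The sibling sets quoted above can be reproduced with a small sketch (the pairing of thread siblings as (c, c+16) is an assumption based on the reported 0-7,16-23 / 8-15,24-31 layout, not read from a real host):

```python
# Model how vcpu_pin_set shrinks each node's usable sibling sets in
# this report: only pairs whose BOTH threads are pinnable survive.
vcpu_pin_set = {0} | set(range(16, 24))  # "0,16-23" from the report

def usable_siblings(node_cpus):
    # Assumed sibling layout: core c pairs with hyperthread c + 16.
    pairs = [(c, c + 16) for c in node_cpus if c < 16]
    return [p for p in pairs if set(p) <= vcpu_pin_set]

node0 = list(range(0, 8))   # physical cores 0-7 (threads 16-23)
node1 = list(range(8, 16))  # physical cores 8-15 (threads 24-31)
print(usable_siblings(node0))  # node 0 keeps only the (0, 16) pair
print(usable_siblings(node1))  # node 1 keeps nothing
```

Node 0 retains one sibling pair plus six unpaired pinnable CPUs (17-23), which is the mix the "prefer" policy is supposed to fall back to.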

** Affects: nova
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1599417

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1599417/+subscriptions



[Yahoo-eng-team] [Bug 1599411] [NEW] NUMATopologyFilter fail when only one node has sibling while host has two node when cpu_thread_policy is require.

2016-07-06 Thread liuxiuli
Public bug reported:

version: master
problem:
NUMATopologyFilter fails when cpu_thread_policy is require and only one of the host's two NUMA nodes has sibling vCPUs.

for example:
the host has the following CPU info:
NUMA node0 CPU(s): 0-3,8-11
NUMA node1 CPU(s): 4-7,12-15

If vcpu_pin_set is 0-3,8-11, we cannot boot an instance with "hw:cpu_policy": "dedicated", "hw:cpu_thread_policy": "require" when NUMATopologyFilter is in default_filters; the filter fails because the second node has no siblings.
The log reads:
host fails CPU policy requirements. Host does not have hyperthreading or hyperthreading is disabled, but 'require' threads policy was requested.

** Affects: nova
     Importance: Undecided
 Assignee: liuxiuli (liu-lixiu)
 Status: New

** Changed in: nova
 Assignee: (unassigned) => liuxiuli (liu-lixiu)

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1599411

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1599411/+subscriptions



[Yahoo-eng-team] [Bug 1598374] [NEW] select an not right host when resizing an instance with hw:numa_nodes=X in flavor.

2016-07-02 Thread liuxiuli
Public bug reported:

version: master
problem:
First, I create an instance with hw:numa_nodes=2 in the flavor. Then I resize it to a flavor with hw:numa_nodes=1, yet NUMATopologyFilter still requires a host with two available NUMA nodes. I think this is an error.

** Affects: nova
 Importance: Undecided
 Assignee: liuxiuli (liu-lixiu)
 Status: New

** Changed in: nova
 Assignee: (unassigned) => liuxiuli (liu-lixiu)

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1598374

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1598374/+subscriptions



[Yahoo-eng-team] [Bug 1598373] [NEW] the result of "nova hypervisor-servers hypervisor-name" is error

2016-07-02 Thread liuxiuli
Public bug reported:

version: master
problem:
The result of "nova hypervisor-servers <hypervisor-name>" is wrong. For example, the result of "nova hypervisor-servers dell-nova-1" will include the servers on dell-nova-11 when both dell-nova-1 and dell-nova-11 hypervisor nodes exist.
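The symptom is consistent with a substring match on the hypervisor hostname. A minimal illustration (the matching logic below is a guess at the effective behaviour, not nova's actual code; the hostnames come from the report):

```python
# Filtering hypervisors by "query in hostname" also catches
# dell-nova-11 when dell-nova-1 is requested; an exact comparison
# returns only the host that was asked for.
hypervisors = ["dell-nova-1", "dell-nova-11"]

def substring_match(query):
    return [h for h in hypervisors if query in h]

def exact_match(query):
    return [h for h in hypervisors if h == query]

print(substring_match("dell-nova-1"))  # both hosts match
print(exact_match("dell-nova-1"))      # only dell-nova-1 matches
```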

** Affects: nova
 Importance: Undecided
     Assignee: liuxiuli (liu-lixiu)
 Status: New

** Changed in: nova
 Assignee: (unassigned) => liuxiuli (liu-lixiu)

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1598373

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1598373/+subscriptions



[Yahoo-eng-team] [Bug 1592241] [NEW] memory_mb_used of compute node do not consider reserved_huge_pages

2016-06-13 Thread liuxiuli
Public bug reported:

version: master
question:
memory_mb_used of a compute node only considers CONF.reserved_host_memory_mb. Currently memory_mb_used equals the sum of the memory_mb used by all instances plus CONF.reserved_host_memory_mb, but it does not consider CONF.reserved_huge_pages.
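The arithmetic of the report's point, with invented numbers (none of these values come from a real deployment):

```python
# If reserved_huge_pages holds back, say, two 1G pages, those 2048 MB
# should also appear in memory_mb_used, but today only
# reserved_host_memory_mb is added on top of instance usage.
reserved_host_memory_mb = 512
reserved_huge_pages_mb = 2 * 1024  # two 1G pages reserved (assumed)
instances_memory_mb = 8192

current = instances_memory_mb + reserved_host_memory_mb       # what nova reports
proposed = current + reserved_huge_pages_mb                   # what the report wants
print(current, proposed)
```

The difference is exactly the reserved hugepage memory, which the scheduler currently believes is still free.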

** Affects: nova
 Importance: Undecided
 Assignee: liuxiuli (liu-lixiu)
 Status: New

** Changed in: nova
 Assignee: (unassigned) => liuxiuli (liu-lixiu)

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1592241

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1592241/+subscriptions



[Yahoo-eng-team] [Bug 1589381] [NEW] There is an error in help info of default_notification_exchange

2016-06-06 Thread liuxiuli
Public bug reported:

version: mitaka master

question:
The help text reads:
# Exchange name for for sending notifications (string value)
#default_notification_exchange = ${control_exchange}_notification

It should be:
# Exchange name for sending notifications (string value)

** Affects: keystone
 Importance: Undecided
 Assignee: liuxiuli (liu-lixiu)
 Status: New

** Changed in: keystone
 Assignee: (unassigned) => liuxiuli (liu-lixiu)

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Identity (keystone).
https://bugs.launchpad.net/bugs/1589381

To manage notifications about this bug go to:
https://bugs.launchpad.net/keystone/+bug/1589381/+subscriptions



[Yahoo-eng-team] [Bug 1586289] [NEW] openstack project list can not list the project which is domain.

2016-05-27 Thread liuxiuli
Public bug reported:

version: mitaka
question:
The project table of the keystone database contains the following projects:

+----------------------------------+--------------------+-------+-----------------------------------------------+---------+-----------+-----------+-----------+
| id                               | name               | extra | description                                   | enabled | domain_id | parent_id | is_domain |
+----------------------------------+--------------------+-------+-----------------------------------------------+---------+-----------+-----------+-----------+
| 1e424dd1844b4a7d81d5b167f188192d | heat               | {}    | Owns users and projects created by heat       | 1       | <>        | NULL      | 1         |
| 388ba5efe7c24cd6b4762b9c6f60a5d5 | magnum             | {}    | Owns users and projects created by magnum     | 1       | <>        | NULL      | 1         |
| 4d6e4e79ea1f4ec392475308e11a895d | admin              | {}    | Bootstrap project for initializing the cloud. | 1       | default   | default   | 0         |
| 749e9ebce1d24c4aa5463382c6d2c526 | demo               | {}    |                                               | 1       | default   | default   | 0         |
| 9314197e00bc43e197995681cff786cc | alt_demo           | {}    |                                               | 1       | default   | default   | 0         |
| <>                               | <>                 | {}    |                                               | 0       | <>        | NULL      | 1         |
| b79ace1760194778916e18cfb053a6d1 | service            | {}    |                                               | 1       | default   | default   | 0         |
| cf443a4f9b9749c9a172915ce48d7989 | project_a          | {}    |                                               | 1       | default   | default   | 0         |
| d076df0e55d24881a61325cd6bb7f6f4 | project_b          | {}    |                                               | 1       | default   | default   | 0         |
| d90353b3872749719e2a5c9343f9acce | invisible_to_admin | {}    |                                               | 1       | default   | default   | 0         |
| default                          | Default            | {}    | The default domain                            | 1       | <>        | NULL      | 1         |
+----------------------------------+--------------------+-------+-----------------------------------------------+---------+-----------+-----------+-----------+

But when I execute "openstack project list", I get the following result:

+----------------------------------+--------------------+
| ID                               | Name               |
+----------------------------------+--------------------+
| 4d6e4e79ea1f4ec392475308e11a895d | admin              |
| 749e9ebce1d24c4aa5463382c6d2c526 | demo               |
| 9314197e00bc43e197995681cff786cc | alt_demo           |
| b79ace1760194778916e18cfb053a6d1 | service            |
| cf443a4f9b9749c9a172915ce48d7989 | project_a          |
| d076df0e55d24881a61325cd6bb7f6f4 | project_b          |
| d90353b3872749719e2a5c9343f9acce | invisible_to_admin |
+----------------------------------+--------------------+

The projects that are domains, such as heat, magnum, <>, and Default, are not listed.
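A minimal model of the listing gap (the filter below is an assumption about the effective behaviour, not keystone's real query; only the names and is_domain flags are taken from the table above):

```python
# Rows with is_domain=1 (heat, magnum, Default, ...) never appear in
# the "openstack project list" output, while is_domain=0 rows do.
projects = [
    {"name": "heat", "is_domain": 1},
    {"name": "magnum", "is_domain": 1},
    {"name": "admin", "is_domain": 0},
    {"name": "demo", "is_domain": 0},
    {"name": "Default", "is_domain": 1},
]

def list_projects(rows):
    # Assumed effective filter: domains are silently excluded.
    return [r["name"] for r in rows if not r["is_domain"]]

print(list_projects(projects))  # domains are hidden from the listing
```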

** Affects: keystone
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Identity (keystone).
https://bugs.launchpad.net/bugs/1586289


[Yahoo-eng-team] [Bug 1579664] [NEW] Nova-compute raise exception when vcpu_pin_set is set to None or"".

2016-05-09 Thread liuxiuli
Public bug reported:

Description
===
In Mitaka, nova-compute raises an exception when vcpu_pin_set is set to None or "", and the nova-compute service fails to start.

Steps to reproduce
==
Set vcpu_pin_set=None or vcpu_pin_set="" in nova.conf, then restart the nova-compute service.

Expected result
===
_get_vcpu_total returns the total number of physical CPUs, and the nova-compute service starts successfully.

Actual result
=
When vcpu_pin_set is set to None, the following exception is raised and the nova-compute service fails to start:
2016-05-09 16:38:51.517 18221 ERROR nova.openstack.common.threadgroup [req-e17708cc-1c77-47cc-9182-2ed072a638a4 - - - - -] Invalid inclusion expression 'None'
2016-05-09 16:38:51.517 18221 TRACE nova.openstack.common.threadgroup Traceback (most recent call last):
2016-05-09 16:38:51.517 18221 TRACE nova.openstack.common.threadgroup   File "/usr/lib/python2.7/site-packages/nova/openstack/common/threadgroup.py", line 145, in wait
2016-05-09 16:38:51.517 18221 TRACE nova.openstack.common.threadgroup     x.wait()
2016-05-09 16:38:51.517 18221 TRACE nova.openstack.common.threadgroup   File "/usr/lib/python2.7/site-packages/nova/openstack/common/threadgroup.py", line 47, in wait
2016-05-09 16:38:51.517 18221 TRACE nova.openstack.common.threadgroup     return self.thread.wait()
2016-05-09 16:38:51.517 18221 TRACE nova.openstack.common.threadgroup   File "/usr/lib/python2.7/site-packages/eventlet/greenthread.py", line 175, in wait
2016-05-09 16:38:51.517 18221 TRACE nova.openstack.common.threadgroup     return self._exit_event.wait()
2016-05-09 16:38:51.517 18221 TRACE nova.openstack.common.threadgroup   File "/usr/lib/python2.7/site-packages/eventlet/event.py", line 121, in wait
2016-05-09 16:38:51.517 18221 TRACE nova.openstack.common.threadgroup     return hubs.get_hub().switch()
2016-05-09 16:38:51.517 18221 TRACE nova.openstack.common.threadgroup   File "/usr/lib/python2.7/site-packages/eventlet/hubs/hub.py", line 294, in switch
2016-05-09 16:38:51.517 18221 TRACE nova.openstack.common.threadgroup     return self.greenlet.switch()
2016-05-09 16:38:51.517 18221 TRACE nova.openstack.common.threadgroup   File "/usr/lib/python2.7/site-packages/eventlet/greenthread.py", line 214, in main
2016-05-09 16:38:51.517 18221 TRACE nova.openstack.common.threadgroup     result = function(*args, **kwargs)
2016-05-09 16:38:51.517 18221 TRACE nova.openstack.common.threadgroup   File "/usr/lib/python2.7/site-packages/nova/openstack/common/service.py", line 502, in run_service
2016-05-09 16:38:51.517 18221 TRACE nova.openstack.common.threadgroup     service.start()
2016-05-09 16:38:51.517 18221 TRACE nova.openstack.common.threadgroup   File "/usr/lib/python2.7/site-packages/nova/service.py", line 201, in start
2016-05-09 16:38:51.517 18221 TRACE nova.openstack.common.threadgroup     self.manager.pre_start_hook()
2016-05-09 16:38:51.517 18221 TRACE nova.openstack.common.threadgroup   File "/usr/lib/python2.7/site-packages/nova/compute/manager.py", line 1575, in pre_start_hook
2016-05-09 16:38:51.517 18221 TRACE nova.openstack.common.threadgroup     self.update_available_resource(nova.context.get_admin_context())
2016-05-09 16:38:51.517 18221 TRACE nova.openstack.common.threadgroup   File "/usr/lib/python2.7/site-packages/nova/compute/manager.py", line 7489, in update_available_resource
2016-05-09 16:38:51.517 18221 TRACE nova.openstack.common.threadgroup     rt.update_available_resource(context)
2016-05-09 16:38:51.517 18221 TRACE nova.openstack.common.threadgroup   File "/usr/lib/python2.7/site-packages/nova/compute/resource_tracker.py", line 521, in update_available_resource
2016-05-09 16:38:51.517 18221 TRACE nova.openstack.common.threadgroup     resources = self.driver.get_available_resource(self.nodename)
2016-05-09 16:38:51.517 18221 TRACE nova.openstack.common.threadgroup   File "/usr/lib/python2.7/site-packages/nova/virt/libvirt/driver.py", line 5518, in get_available_resource
2016-05-09 16:38:51.517 18221 TRACE nova.openstack.common.threadgroup     data["vcpus"] = self._get_vcpu_total()
2016-05-09 16:38:51.517 18221 TRACE nova.openstack.common.threadgroup   File "/usr/lib/python2.7/site-packages/nova/virt/libvirt/driver.py", line 5064, in _get_vcpu_total
2016-05-09 16:38:51.517 18221 TRACE nova.openstack.common.threadgroup     available_ids = hardware.get_vcpu_pin_set()
2016-05-09 16:38:51.517 18221 TRACE nova.openstack.common.threadgroup   File "/usr/lib/python2.7/site-packages/nova/virt/hardware.py", line 62, in get_vcpu_pin_set
2016-05-09 16:38:51.517 18221 TRACE nova.openstack.common.threadgroup     cpuset_ids = parse_cpu_spec(CONF.vcpu_pin_set)
2016-05-09 16:38:51.517 18221 TRACE nova.openstack.common.threadgroup   File "/usr/lib/python2.7/site-packages/nova/virt/hardware.py", line 117, in parse_cpu_spec
2016-05-09 16:38:51.517 18221 TRACE nova.openstack.common.threadgroup     "expression %r") % rule)
2016-05-09 16:38:51.517 18221 TRACE
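The expected behaviour can be sketched defensively (this is an illustration, not nova's actual fix, and parse_cpu_spec below is a tiny stand-in for nova's real parser):

```python
# Treat an unset or empty vcpu_pin_set as "no restriction" instead of
# handing the literal string "None" or "" to the CPU-spec parser,
# which is what raises the Invalid exception in the traceback above.
def parse_cpu_spec(spec):
    """Tiny stand-in for nova's parser: comma-separated ids and ranges."""
    ids = set()
    for rule in spec.split(","):
        if "-" in rule:
            lo, hi = rule.split("-")
            ids.update(range(int(lo), int(hi) + 1))
        else:
            ids.add(int(rule))
    return ids

def get_vcpu_pin_set(raw):
    if raw in (None, "", "None"):
        return None  # caller falls back to all physical CPUs
    return parse_cpu_spec(raw)

print(get_vcpu_pin_set("None"))   # no restriction -> use total pCPUs
print(get_vcpu_pin_set("0-3,8"))  # a normal pin set parses as usual
```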

[Yahoo-eng-team] [Bug 1578879] [NEW] Used hugepage of compute node does not include the mem which is used by system process

2016-05-05 Thread liuxiuli
Public bug reported:

Description
===
Some system processes already use hugepages when the compute node starts, but when we report numa_topology resources we do not account for them, and we do not account for them by reserving hugepages either.


Expected result
===
The used NUMAPagesTopology information on the compute node covers all hugepages used by the system, or the system usage is covered by reserving hugepage memory.


Actual result
=
The used hugepage information on the compute node is incorrect. This can cause the scheduler to select a host that cannot build an instance using hugepages.


Environment
===
An OS that supports KVM and hugepages, building instances that use hugepages.


Logs & Configs
==
The flavor used by the instance includes the extra_spec hw:mem_page_size=1048576.

** Affects: nova
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1578879

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1578879/+subscriptions



[Yahoo-eng-team] [Bug 1473308] [NEW] NUMATopologyFilter raise exception and not continue filter next node when there is no wanted pagesize in current filtered host

2015-07-10 Thread liuxiuli
Public bug reported:

version:
2015.1.0

question:
NUMATopologyFilter raises an exception and does not continue filtering the next node when the wanted page size is not available on the host currently being filtered.

Reproduce steps:
There are two compute nodes, Node1 and Node2.
Node1 has 2M and 4K page sizes, and Node2 has 1G and 4K page sizes.
Set hw:mem_page_size=1048576 in a flavor and create an instance using this flavor.

Expected result:
NUMATopologyFilter returns Node2 to build the instance.

Actual result:
NUMATopologyFilter raises the following exception while filtering Node1 and does not continue to filter Node2:

Exception during message handling: Page size 1048576 is not supported by the host.
2015-07-09 13:08:56.771 10446 TRACE oslo_messaging.rpc.dispatcher Traceback (most recent call last):
2015-07-09 13:08:56.771 10446 TRACE oslo_messaging.rpc.dispatcher   File "/usr/lib/python2.7/site-packages/oslo_messaging/rpc/dispatcher.py", line 142, in _dispatch_and_reply
2015-07-09 13:08:56.771 10446 TRACE oslo_messaging.rpc.dispatcher     executor_callback))
2015-07-09 13:08:56.771 10446 TRACE oslo_messaging.rpc.dispatcher   File "/usr/lib/python2.7/site-packages/oslo_messaging/rpc/dispatcher.py", line 186, in _dispatch
2015-07-09 13:08:56.771 10446 TRACE oslo_messaging.rpc.dispatcher     executor_callback)
2015-07-09 13:08:56.771 10446 TRACE oslo_messaging.rpc.dispatcher   File "/usr/lib/python2.7/site-packages/oslo_messaging/rpc/dispatcher.py", line 130, in _do_dispatch
2015-07-09 13:08:56.771 10446 TRACE oslo_messaging.rpc.dispatcher     result = func(ctxt, **new_args)
2015-07-09 13:08:56.771 10446 TRACE oslo_messaging.rpc.dispatcher   File "/usr/lib/python2.7/site-packages/oslo_messaging/rpc/server.py", line 142, in inner
2015-07-09 13:08:56.771 10446 TRACE oslo_messaging.rpc.dispatcher     return func(*args, **kwargs)
2015-07-09 13:08:56.771 10446 TRACE oslo_messaging.rpc.dispatcher   File "/usr/lib/python2.7/site-packages/nova/scheduler/manager.py", line 86, in select_destinations
2015-07-09 13:08:56.771 10446 TRACE oslo_messaging.rpc.dispatcher     filter_properties)
2015-07-09 13:08:56.771 10446 TRACE oslo_messaging.rpc.dispatcher   File "/usr/lib/python2.7/site-packages/nova/scheduler/filter_scheduler.py", line 67, in select_destinations
2015-07-09 13:08:56.771 10446 TRACE oslo_messaging.rpc.dispatcher     filter_properties)
2015-07-09 13:08:56.771 10446 TRACE oslo_messaging.rpc.dispatcher   File "/usr/lib/python2.7/site-packages/nova/scheduler/filter_scheduler.py", line 138, in _schedule
2015-07-09 13:08:56.771 10446 TRACE oslo_messaging.rpc.dispatcher     filter_properties, index=num)
2015-07-09 13:08:56.771 10446 TRACE oslo_messaging.rpc.dispatcher   File "/usr/lib/python2.7/site-packages/nova/scheduler/host_manager.py", line 524, in get_filtered_hosts
2015-07-09 13:08:56.771 10446 TRACE oslo_messaging.rpc.dispatcher     hosts, filter_properties, index)
2015-07-09 13:08:56.771 10446 TRACE oslo_messaging.rpc.dispatcher   File 
/usr/lib/python2.7/site-packages/nova/filters.py, line 78, in 
get_filtered_objects
2015-07-09 13:08:56.771 10446 TRACE oslo_messaging.rpc.dispatcher list_objs 
= list(objs)
2015-07-09 13:08:56.771 10446 TRACE oslo_messaging.rpc.dispatcher   File 
/usr/lib/python2.7/site-packages/nova/filters.py, line 44, in filter_all
2015-07-09 13:08:56.771 10446 TRACE oslo_messaging.rpc.dispatcher if 
self._filter_one(obj, filter_properties):
2015-07-09 13:08:56.771 10446 TRACE oslo_messaging.rpc.dispatcher   File 
/usr/lib/python2.7/site-packages/nova/scheduler/filters/__init__.py, line 27, 
in _filter_one
2015-07-09 13:08:56.771 10446 TRACE oslo_messaging.rpc.dispatcher return 
self.host_passes(obj, filter_properties)
2015-07-09 13:08:56.771 10446 TRACE oslo_messaging.rpc.dispatcher   File 
/usr/lib/python2.7/site-packages/nova/scheduler/filters/numa_topology_filter.py,
 line 54, in host_passes
2015-07-09 13:08:56.771 10446 TRACE oslo_messaging.rpc.dispatcher 
pci_stats=host_state.pci_stats))
2015-07-09 13:08:56.771 10446 TRACE oslo_messaging.rpc.dispatcher   File 
/usr/lib/python2.7/site-packages/nova/virt/hardware.py, line 1048, in 
numa_fit_instance_to_host
2015-07-09 13:08:56.771 10446 TRACE oslo_messaging.rpc.dispatcher 
host_cell, instance_cell, limits)
2015-07-09 13:08:56.771 10446 TRACE oslo_messaging.rpc.dispatcher   File 
/usr/lib/python2.7/site-packages/nova/virt/hardware.py, line 778, in 
_numa_fit_instance_cell
2015-07-09 13:08:56.771 10446 TRACE oslo_messaging.rpc.dispatcher 
host_cell, instance_cell)
2015-07-09 13:08:56.771 10446 TRACE oslo_messaging.rpc.dispatcher   File 
/usr/lib/python2.7/site-packages/nova/virt/hardware.py, line 631, in 
_numa_cell_supports_pagesize_request
2015-07-09 13:08:56.771 10446 TRACE oslo_messaging.rpc.dispatcher return 
verify_pagesizes(host_cell, inst_cell, [inst_cell.pagesize])
2015-07-09 13:08:56.771 10446 TRACE oslo_messaging.rpc.dispatcher   File 
/usr/lib/python2.7/site-packages/nova/virt/hardware.py, line 621, in