[Yahoo-eng-team] [Bug 1622538] [NEW] Wrong "can_host" field of compute node resource providers

2016-09-12 Thread Yingxin
Public bug reported:

The "can_host" field of compute node records should be 1. However,
according to the latest placement implementation, it is 0.

mysql> select * from resource_providers;
+---------------------+---------------------+----+--------------------------------------+-------+------------+----------+
| created_at          | updated_at          | id | uuid                                 | name  | generation | can_host |
+---------------------+---------------------+----+--------------------------------------+-------+------------+----------+
| 2016-09-12 08:54:19 | 2016-09-12 09:33:41 |  1 | 508f3973-8e1a-4241-afec-ee3e21be0611 | host1 |         80 |        0 |
+---------------------+---------------------+----+--------------------------------------+-------+------------+----------+
1 row in set (0.00 sec)
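As an illustration, the invariant the report describes can be sketched as follows; the function name and record layout are hypothetical, not nova's actual code:

```python
# Hedged sketch (illustrative names) of the invariant this bug is about:
# a resource provider that represents a compute node should carry
# can_host=1, while the reported placement behaviour leaves it at 0.

def make_compute_node_provider(uuid, name, generation=0):
    """Build a resource-provider record for a compute node.

    A compute node can host instances, so can_host must be 1.
    """
    return {
        "uuid": uuid,
        "name": name,
        "generation": generation,
        "can_host": 1,  # expected value; the reported behaviour yields 0
    }

provider = make_compute_node_provider(
    "508f3973-8e1a-4241-afec-ee3e21be0611", "host1", generation=80)
assert provider["can_host"] == 1
```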

** Affects: nova
 Importance: Undecided
     Assignee: Yingxin (cyx1231st)
 Status: New


** Tags: placement

** Changed in: nova
 Assignee: (unassigned) => Yingxin (cyx1231st)

** Tags added: placement


-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1622538

Title:
  Wrong "can_host" field of compute node resource providers

Status in OpenStack Compute (nova):
  New

Bug description:
  The "can_host" field of compute node records should be 1. However,
  according to the latest placement implementation, it is 0.

  mysql> select * from resource_providers;
  +---------------------+---------------------+----+--------------------------------------+-------+------------+----------+
  | created_at          | updated_at          | id | uuid                                 | name  | generation | can_host |
  +---------------------+---------------------+----+--------------------------------------+-------+------------+----------+
  | 2016-09-12 08:54:19 | 2016-09-12 09:33:41 |  1 | 508f3973-8e1a-4241-afec-ee3e21be0611 | host1 |         80 |        0 |
  +---------------------+---------------------+----+--------------------------------------+-------+------------+----------+
  1 row in set (0.00 sec)

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1622538/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1599400] [NEW] nova boot has unexpected API error

2016-07-06 Thread Yingxin
Public bug reported:

Description:
============

Nova allows users to set the free-form flavor extra specs "hw:cpu_policy"
and "hw:cpu_thread_policy". But when booting an instance with such a
flavor, nova raises a ValueError, resulting in an HTTP 500.

Reproduce:
==========

# 1. create flavor 11 with an illegal extra_spec "hw:cpu_thread_policy=shared"
$ nova flavor-create test 11 128 1 3
$ nova flavor-key 11 set hw:cpu_policy=dedicated
$ nova flavor-key 11 set hw:cpu_thread_policy=shared

# 2. boot an instance from that malformed flavor 11
$ nova boot --image  --flavor 11 test

ERROR (ClientException): Unexpected API Error. Please report this at 
http://bugs.launchpad.net/nova/ and attach the Nova API log if possible.
 (HTTP 500) (Request-ID: 
req-a26ad5f3-7982-4361-8817-0ab111ac9ab1)
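A fix along the lines the report implies would validate the extra spec up front, turning the uncaught ValueError into a clear user-facing error (HTTP 400 rather than 500). This is a hedged sketch with illustrative names; the valid policy values assumed here are "prefer", "isolate", and "require":

```python
# Hedged sketch: validate flavor extra specs before boot so an invalid
# value yields a clear user error instead of an uncaught ValueError.
# Function and exception names are illustrative, not nova's API.

VALID_CPU_THREAD_POLICIES = ("prefer", "isolate", "require")

class InvalidExtraSpec(Exception):
    """Would map to a 400 response in the API layer."""

def validate_cpu_thread_policy(extra_specs):
    policy = extra_specs.get("hw:cpu_thread_policy")
    if policy is not None and policy not in VALID_CPU_THREAD_POLICIES:
        raise InvalidExtraSpec(
            "hw:cpu_thread_policy must be one of %s, got %r"
            % (", ".join(VALID_CPU_THREAD_POLICIES), policy))

# The reproduction's flavor would now be rejected at validation time:
try:
    validate_cpu_thread_policy({"hw:cpu_policy": "dedicated",
                                "hw:cpu_thread_policy": "shared"})
except InvalidExtraSpec as exc:
    print(exc)
```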

** Affects: nova
 Importance: Undecided
 Status: New


** Summary changed:

- nova boot unexpected API error
+ nova boot has unexpected API error

-- 
https://bugs.launchpad.net/bugs/1599400

Title:
  nova boot has unexpected API error

Status in OpenStack Compute (nova):
  New

Bug description:
  Description:
  ============

  Nova allows users to set the free-form flavor extra specs "hw:cpu_policy"
  and "hw:cpu_thread_policy". But when booting an instance with such a
  flavor, nova raises a ValueError, resulting in an HTTP 500.

  Reproduce:
  ==========

  # 1. create flavor 11 with an illegal extra_spec "hw:cpu_thread_policy=shared"
  $ nova flavor-create test 11 128 1 3
  $ nova flavor-key 11 set hw:cpu_policy=dedicated
  $ nova flavor-key 11 set hw:cpu_thread_policy=shared

  # 2. boot an instance from that malformed flavor 11
  $ nova boot --image  --flavor 11 test

  ERROR (ClientException): Unexpected API Error. Please report this at 
http://bugs.launchpad.net/nova/ and attach the Nova API log if possible.
   (HTTP 500) (Request-ID: 
req-a26ad5f3-7982-4361-8817-0ab111ac9ab1)

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1599400/+subscriptions



[Yahoo-eng-team] [Bug 1523506] Re: hosts within two availability zones

2016-03-06 Thread Yingxin
** Changed in: nova
   Status: Incomplete => Invalid

-- 
https://bugs.launchpad.net/bugs/1523506

Title:
  hosts within two availability zones

Status in OpenStack Compute (nova):
  Invalid

Bug description:
  There have been many bug fixes related to this topic, but the problem still 
persists. Some previously fix-released bugs, for example:
  https://bugs.launchpad.net/nova/+bug/1200479
  https://bugs.launchpad.net/nova/+bug/1196893
  https://bugs.launchpad.net/nova/+bug/1277230

  The mailing list has already decided not to allow hosts in different
  AZs (http://lists.openstack.org/pipermail/openstack-
  dev/2014-April/031803.html), but it can still be reproduced by
  following 3 steps:

   start repro 

  1) create two host aggregates "foo", "bar" to the default AZ:
  $ nova aggregate-create foo
  +----+------+-------------------+-------+----------+
  | Id | Name | Availability Zone | Hosts | Metadata |
  +----+------+-------------------+-------+----------+
  | 58 | foo  | -                 |       |          |
  +----+------+-------------------+-------+----------+
  $ nova aggregate-create bar
  +----+------+-------------------+-------+----------+
  | Id | Name | Availability Zone | Hosts | Metadata |
  +----+------+-------------------+-------+----------+
  | 59 | bar  | -                 |       |          |
  +----+------+-------------------+-------+----------+

  
  2) assign a host "node2" to both aggregates
  $ nova aggregate-add-host foo node2
  +----+------+-------------------+---------+----------+
  | Id | Name | Availability Zone | Hosts   | Metadata |
  +----+------+-------------------+---------+----------+
  | 58 | foo  | -                 | 'node2' |          |
  +----+------+-------------------+---------+----------+
  $ nova aggregate-add-host bar node2
  +----+------+-------------------+---------+----------+
  | Id | Name | Availability Zone | Hosts   | Metadata |
  +----+------+-------------------+---------+----------+
  | 59 | bar  | -                 | 'node2' |          |
  +----+------+-------------------+---------+----------+

  
  3) change "foo" to a named AZ called "az"
  $ nova aggregate-update foo foo az
  Aggregate 58 has been successfully updated.
  +----+------+-------------------+---------+------------------------+
  | Id | Name | Availability Zone | Hosts   | Metadata               |
  +----+------+-------------------+---------+------------------------+
  | 58 | foo  | az                | 'node2' | 'availability_zone=az' |
  +----+------+-------------------+---------+------------------------+

  
   end repro 

  The third step should NOT happen, because it logically places "node2" in 
both the default AZ and the "az" AZ:
  $ nova aggregate-details foo
  +----+------+-------------------+---------+------------------------+
  | Id | Name | Availability Zone | Hosts   | Metadata               |
  +----+------+-------------------+---------+------------------------+
  | 58 | foo  | az                | 'node2' | 'availability_zone=az' |
  +----+------+-------------------+---------+------------------------+
  $ nova aggregate-details bar
  +----+------+-------------------+---------+----------+
  | Id | Name | Availability Zone | Hosts   | Metadata |
  +----+------+-------------------+---------+----------+
  | 59 | bar  | -                 | 'node2' |          |
  +----+------+-------------------+---------+----------+

  
  Interestingly, "node2" actually belongs only to the availability zone 
"az" if we list all the AZs, thanks to the previous bug fixes:
  $ nova availability-zone-list
  +-----------------------+------------------------------------+
  | Name                  | Status                             |
  +-----------------------+------------------------------------+
  | internal              | available                          |
  | |- node1              |                                    |
  | | |- nova-conductor   | enabled :-) 2015-12-07T13:45:59.00 |
  | | |- nova-consoleauth | enabled :-) 2015-12-07T13:45:59.00 |
  | | |- nova-scheduler   | enabled :-) 2015-12-07T13:46:02.00 |
  | | |- nova-cert        | enabled :-) 2015-12-07T13:46:01.00 |
  | az                    | available                          |
  | |- node2              |                                    |
  | | |- nova-compute     | enabled :-) 2015-12-07T13:46:04.00 |
  +-----------------------+------------------------------------+
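One strict variant of the guard the third step lacks could look like this sketch; all names are illustrative, not nova's actual API:

```python
# Hedged sketch: refuse to give an aggregate a named AZ when one of its
# hosts already has a different named AZ via another aggregate. This is a
# strict illustrative rule, not nova's implementation.

def set_aggregate_az(aggregates, name, new_az):
    """aggregates: list of dicts with 'name', 'az' (None = unset), 'hosts'."""
    target = next(a for a in aggregates if a["name"] == name)
    for other in aggregates:
        if other is target or other.get("az") is None:
            continue
        shared = set(target["hosts"]) & set(other["hosts"])
        if shared and other["az"] != new_az:
            raise ValueError("host(s) %s already in AZ %r via aggregate %r"
                             % (sorted(shared), other["az"], other["name"]))
    target["az"] = new_az

aggs = [{"name": "foo", "az": None, "hosts": ["node2"]},
        {"name": "bar", "az": "other-az", "hosts": ["node2"]}]
try:
    set_aggregate_az(aggs, "foo", "az")   # node2 is already in "other-az"
except ValueError as exc:
    print(exc)
```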

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1523506/+subscriptions



[Yahoo-eng-team] [Bug 1550098] [NEW] Disk resource consumption is inconsistent between scheduler and resource tracker

2016-02-25 Thread Yingxin
Public bug reported:

The way the scheduler consumes disk resources in its host state is
inconsistent with the resource tracker's (RT's) in the compute service.

The scheduler consumes "free_disk_mb" in its host state.
https://github.com/openstack/nova/blob/master/nova/scheduler/host_manager.py#L266

It comes from the min value of "free_disk_gb" and "disk_available_least" in 
ComputeNode object.
https://github.com/openstack/nova/blob/master/nova/scheduler/host_manager.py#L189-L201

But compute node changes "local_gb_used" instead in consuming resources
from a request.
https://github.com/openstack/nova/blob/master/nova/compute/resource_tracker.py#L709-L710


There is a confirmed window of inconsistency in compute node state between
the scheduler and the resource tracker, although the virt driver's periodic
update brings the compute node back to a consistent state after roughly 10
seconds.
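The mismatch can be shown with made-up numbers: the scheduler's free-disk view starts from min(free_disk_gb, disk_available_least), while the resource tracker only bumps local_gb_used.

```python
# Numeric illustration of the inconsistency; all values are made up.

local_gb = 100             # total disk reported by the hypervisor (GB)
local_gb_used = 20
free_disk_gb = local_gb - local_gb_used          # 80 GB
disk_available_least = 60  # accounts for overcommitted qcow2 images

# Scheduler host state, then consuming a 10 GB request:
free_disk_mb = min(free_disk_gb, disk_available_least) * 1024  # 60 GB
scheduler_free_after = free_disk_mb - 10 * 1024                # 50 GB

# Resource tracker after the same request only bumps local_gb_used:
local_gb_used += 10
rt_free_after = (local_gb - local_gb_used) * 1024              # 70 GB

assert scheduler_free_after != rt_free_after  # the inconsistency window
```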

** Affects: nova
     Importance: Undecided
 Assignee: Yingxin (cyx1231st)
 Status: In Progress

** Changed in: nova
 Assignee: (unassigned) => Yingxin (cyx1231st)

** Changed in: nova
   Status: New => In Progress

-- 
https://bugs.launchpad.net/bugs/1550098

Title:
  Disk resource consumption is inconsistent between scheduler and
  resource tracker

Status in OpenStack Compute (nova):
  In Progress

Bug description:
  The way the scheduler consumes disk resources in its host state is
  inconsistent with the resource tracker's (RT's) in the compute service.

  The scheduler consumes "free_disk_mb" in its host state.
  
https://github.com/openstack/nova/blob/master/nova/scheduler/host_manager.py#L266

  It comes from the min value of "free_disk_gb" and "disk_available_least" in 
ComputeNode object.
  
https://github.com/openstack/nova/blob/master/nova/scheduler/host_manager.py#L189-L201

  But compute node changes "local_gb_used" instead in consuming
  resources from a request.
  
https://github.com/openstack/nova/blob/master/nova/compute/resource_tracker.py#L709-L710


  There is a confirmed window of inconsistency in compute node state
  between the scheduler and the resource tracker, although the virt
  driver's periodic update brings the compute node back to a consistent
  state after roughly 10 seconds.

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1550098/+subscriptions



[Yahoo-eng-team] [Bug 1523506] [NEW] hosts within two availability zones

2015-12-07 Thread Yingxin
Public bug reported:

There have been many bug fixes related to this topic, but the problem still 
persists. Some previously fix-released bugs, for example:
https://bugs.launchpad.net/nova/+bug/1200479
https://bugs.launchpad.net/nova/+bug/1196893
https://bugs.launchpad.net/nova/+bug/1277230

The mailing list has already decided not to allow hosts in different AZs
(http://lists.openstack.org/pipermail/openstack-
dev/2014-April/031803.html), but it can still be reproduced by following
3 steps:

 start repro 

1) create two host aggregates "foo", "bar" to the default AZ:
$ nova aggregate-create foo
+----+------+-------------------+-------+----------+
| Id | Name | Availability Zone | Hosts | Metadata |
+----+------+-------------------+-------+----------+
| 58 | foo  | -                 |       |          |
+----+------+-------------------+-------+----------+
$ nova aggregate-create bar
+----+------+-------------------+-------+----------+
| Id | Name | Availability Zone | Hosts | Metadata |
+----+------+-------------------+-------+----------+
| 59 | bar  | -                 |       |          |
+----+------+-------------------+-------+----------+


2) assign a host "node2" to both aggregates
$ nova aggregate-add-host foo node2
+----+------+-------------------+---------+----------+
| Id | Name | Availability Zone | Hosts   | Metadata |
+----+------+-------------------+---------+----------+
| 58 | foo  | -                 | 'node2' |          |
+----+------+-------------------+---------+----------+
$ nova aggregate-add-host bar node2
+----+------+-------------------+---------+----------+
| Id | Name | Availability Zone | Hosts   | Metadata |
+----+------+-------------------+---------+----------+
| 59 | bar  | -                 | 'node2' |          |
+----+------+-------------------+---------+----------+


3) change "foo" to a named AZ called "az"
$ nova aggregate-update foo foo az
Aggregate 58 has been successfully updated.
+----+------+-------------------+---------+------------------------+
| Id | Name | Availability Zone | Hosts   | Metadata               |
+----+------+-------------------+---------+------------------------+
| 58 | foo  | az                | 'node2' | 'availability_zone=az' |
+----+------+-------------------+---------+------------------------+


 end repro 

The third step should NOT happen, because it logically places "node2" in 
both the default AZ and the "az" AZ:
$ nova aggregate-details foo
+----+------+-------------------+---------+------------------------+
| Id | Name | Availability Zone | Hosts   | Metadata               |
+----+------+-------------------+---------+------------------------+
| 58 | foo  | az                | 'node2' | 'availability_zone=az' |
+----+------+-------------------+---------+------------------------+
$ nova aggregate-details bar
+----+------+-------------------+---------+----------+
| Id | Name | Availability Zone | Hosts   | Metadata |
+----+------+-------------------+---------+----------+
| 59 | bar  | -                 | 'node2' |          |
+----+------+-------------------+---------+----------+


Interestingly, "node2" actually belongs only to the availability zone "az" 
if we list all the AZs, thanks to the previous bug fixes:
$ nova availability-zone-list
+-----------------------+------------------------------------+
| Name                  | Status                             |
+-----------------------+------------------------------------+
| internal              | available                          |
| |- node1              |                                    |
| | |- nova-conductor   | enabled :-) 2015-12-07T13:45:59.00 |
| | |- nova-consoleauth | enabled :-) 2015-12-07T13:45:59.00 |
| | |- nova-scheduler   | enabled :-) 2015-12-07T13:46:02.00 |
| | |- nova-cert        | enabled :-) 2015-12-07T13:46:01.00 |
| az                    | available                          |
| |- node2              |                                    |
| | |- nova-compute     | enabled :-) 2015-12-07T13:46:04.00 |
+-----------------------+------------------------------------+

** Affects: nova
 Importance: Undecided
 Assignee: Yingxin (cyx1231st)
 Status: New


** Tags: scheduler

** Changed in: nova
 Assignee: (unassigned) => Yingxin (cyx1231st)

** Tags added: scheduler

-- 
https://bugs.launchpad.net/bugs/1523506

Title:
  hosts within two availability zones

Status in OpenStack Compute (nova):
  New


[Yahoo-eng-team] [Bug 1523450] [NEW] Empty-named AZ is accepted using aggregate-set-metadata

2015-12-07 Thread Yingxin
Public bug reported:

An empty-named AZ can be set using the following command if there exists an 
aggregate 'foo':
$ nova aggregate-set-metadata foo availability_zone=
+----+------+-------------------+---------+----------------------+
| Id | Name | Availability Zone | Hosts   | Metadata             |
+----+------+-------------------+---------+----------------------+
| 55 | foo  |                   | 'node2' | 'availability_zone=' |
+----+------+-------------------+---------+----------------------+

This empty-named AZ is meaningless and confusing, because it ISN'T the default 
AZ. For example, if we list the AZs there will be an empty entry:
$ nova availability-zone-list
+-----------------------+------------------------------------+
| Name                  | Status                             |
+-----------------------+------------------------------------+
| internal              | available                          |
| |- node1              |                                    |
| | |- nova-conductor   | enabled :-) 2015-12-07T08:15:49.00 |
| | |- nova-consoleauth | enabled :-) 2015-12-07T08:15:50.00 |
| | |- nova-scheduler   | enabled :-) 2015-12-07T08:15:50.00 |
| | |- nova-cert        | enabled :-) 2015-12-07T08:15:51.00 |
|                       | available                          |
| |- node2              |                                    |
| | |- nova-compute     | enabled :-) 2015-12-07T08:15:49.00 |
| nova                  | available                          |
| |- node3              |                                    |
| | |- nova-compute     | enabled :-) 2015-12-07T08:15:50.00 |
+-----------------------+------------------------------------+

However, the nova scheduler CANNOT distinguish this empty-named AZ from the 
default AZ. For example:
$ nova boot --flavor 42 --image  --availability-zone "" test
The scheduler treats "" as the default AZ, so the 'test' instance will be 
booted into either "" or the default "nova" AZ.
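A sketch of the missing validation follows; the function name and error message are illustrative, not nova's actual API:

```python
# Hedged sketch: reject an empty (or whitespace-only) availability_zone
# value in aggregate metadata instead of storing it.

def validate_aggregate_metadata(metadata):
    az = metadata.get("availability_zone")
    if az is not None and not az.strip():
        raise ValueError(
            "availability_zone must be non-empty; omit the key to keep "
            "the aggregate out of any named AZ")

validate_aggregate_metadata({"availability_zone": "az1"})   # accepted
try:
    validate_aggregate_metadata({"availability_zone": ""})  # the bug's input
except ValueError as exc:
    print(exc)
```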

** Affects: nova
     Importance: Undecided
 Assignee: Yingxin (cyx1231st)
 Status: New


** Tags: scheduler

** Changed in: nova
 Assignee: (unassigned) => Yingxin (cyx1231st)


** Tags added: scheduler

-- 
https://bugs.launchpad.net/bugs/1523450

Title:
  Empty-named AZ is accepted using aggregate-set-metadata

Status in OpenStack Compute (nova):
  New


[Yahoo-eng-team] [Bug 1523459] [NEW] Instance can be booted into the "internal" availability zone

2015-12-07 Thread Yingxin
Public bug reported:

Currently, only the nova-compute service has its own availability zone. 
Services such as nova-scheduler, nova-network, and nova-conductor appear in 
the AZ named "internal". (ref: 
http://docs.openstack.org/openstack-ops/content/scaling.html) For example:
$ nova availability-zone-list
+-----------------------+------------------------------------+
| Name                  | Status                             |
+-----------------------+------------------------------------+
| internal              | available                          |
| |- node1              |                                    |
| | |- nova-conductor   | enabled :-) 2015-12-07T11:38:09.00 |
| | |- nova-consoleauth | enabled :-) 2015-12-07T11:38:05.00 |
| | |- nova-scheduler   | enabled :-) 2015-12-07T11:38:12.00 |
| | |- nova-cert        | enabled :-) 2015-12-07T11:38:07.00 |
| nova                  | available                          |
| |- node2              |                                    |
| | |- nova-compute     | enabled :-) 2015-12-07T11:38:12.00 |
| |- node3              |                                    |
| | |- nova-compute     | enabled :-) 2015-12-07T11:38:12.00 |
+-----------------------+------------------------------------+


However, we can schedule an instance to the "internal" AZ using the following 
command:
$ nova boot --flavor 42 --image  --availability-zone "internal" test
It succeeds with no error message!

But this "test" instance will end up in ERROR status because there is no 
compute node in the "internal" AZ.
$ nova list
+--------------------------------------+------+--------+------------+-------------+----------+
| ID                                   | Name | Status | Task State | Power State | Networks |
+--------------------------------------+------+--------+------------+-------------+----------+
| eca73033-15cf-402a-b39a-a91e497e3e07 | test | ERROR  | -          | NOSTATE     |          |
+--------------------------------------+------+--------+------------+-------------+----------+
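A guard that would reject such a request up front can be sketched as follows; the function and data shapes are illustrative, not nova's API:

```python
# Hedged sketch: refuse a requested AZ that contains no nova-compute
# service, which covers the "internal" AZ from this report.

def check_requested_az(requested_az, services):
    """services: iterable of (az, binary) tuples from the service list."""
    compute_azs = {az for az, binary in services if binary == "nova-compute"}
    if requested_az not in compute_azs:
        raise ValueError("availability zone %r has no compute hosts"
                         % requested_az)

services = [("internal", "nova-conductor"), ("internal", "nova-scheduler"),
            ("nova", "nova-compute")]
check_requested_az("nova", services)          # accepted
try:
    check_requested_az("internal", services)  # would have booted to ERROR
except ValueError as exc:
    print(exc)
```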

** Affects: nova
 Importance: Undecided
 Assignee: Yingxin (cyx1231st)
 Status: New


** Tags: scheduler

** Changed in: nova
 Assignee: (unassigned) => Yingxin (cyx1231st)

** Tags added: scheduler


[Yahoo-eng-team] [Bug 1493680] [NEW] HostState.metrics is no longer a dict

2015-09-09 Thread Yingxin
Public bug reported:

The commit ae7dab9975bcbe3bb40cb9723b0deaed985b904c changed
'HostState.metrics' from a simple dictionary to
'objects.MonitorMetricList', but it did not update the related behavior
in 'weights.MetricsWeigher'. MetricsWeigher still treats
'host_state.metrics' as a dictionary of 'host_manager.MetricItem', which
makes it fail during weighing.
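The adaptation the weigher needs can be sketched as follows; MonitorMetric here is a simplified stand-in for the entries of objects.MonitorMetricList, and the weighing function is illustrative:

```python
# Hedged sketch: weigh a list of metric objects (name/value attributes)
# rather than a dict of MetricItem, matching the new HostState.metrics type.

from collections import namedtuple

# Stand-in for an objects.MonitorMetricList entry.
MonitorMetric = namedtuple("MonitorMetric", ["name", "value"])

def weigh_metrics(metrics, weight_settings):
    """metrics: iterable of MonitorMetric; weight_settings: {name: ratio}."""
    return sum(weight_settings.get(m.name, 0.0) * m.value for m in metrics)

metrics = [MonitorMetric("cpu.percent", 40.0),
           MonitorMetric("ram.percent", 70.0)]
score = weigh_metrics(metrics, {"cpu.percent": -1.0})
assert score == -40.0  # only cpu.percent carries a configured weight
```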

** Affects: nova
 Importance: Undecided
 Status: New

-- 
https://bugs.launchpad.net/bugs/1493680

Title:
  HostState.metrics is no longer a dict

Status in OpenStack Compute (nova):
  New

Bug description:
  The commit ae7dab9975bcbe3bb40cb9723b0deaed985b904c changed
  'HostState.metrics' from a simple dictionary to
  'objects.MonitorMetricList', but it did not update the related
  behavior in 'weights.MetricsWeigher'. MetricsWeigher still treats
  'host_state.metrics' as a dictionary of 'host_manager.MetricItem',
  which makes it fail during weighing.

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1493680/+subscriptions
