[Yahoo-eng-team] [Bug 1651989] Re: domain admin token will be treated as cloud admin

2017-01-04 Thread Frode Nordahl
** Also affects: keystone (Juju Charms Collection)
   Importance: Undecided
   Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Identity (keystone).
https://bugs.launchpad.net/bugs/1651989

Title:
  domain admin token will be treated as cloud admin

Status in OpenStack Identity (keystone):
  Fix Released
Status in keystone package in Juju Charms Collection:
  New

Bug description:
  The new capability of is_admin_project is currently only supported for
  projects. However, the existing code for token models will return
  is_admin_project as True if the attribute has not been set. Hence
  admin domain tokens might get interpreted as cloud admin tokens. This
  is currently masked by a bug in our policy samples that do not
  correctly check for is_admin_project.
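
  A minimal sketch of the unsafe default described above (class and
  attribute names are stand-ins, not keystone's actual token-model code):

    class TokenModel(object):
        @property
        def is_admin_project(self):
            # Defaulting to True when the payload never set the flag makes a
            # domain-scoped admin token look like a cloud-admin token.
            return getattr(self, '_is_admin_project', True)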

To manage notifications about this bug go to:
https://bugs.launchpad.net/keystone/+bug/1651989/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1654152] [NEW] [RFE]Configuration structure a mess

2017-01-04 Thread Thomas Bechtold
Public bug reported:

Neutron's current configuration structure is difficult to understand:

- it is not clear which services need which configuration file(s) during 
startup.
- config files end with .ini instead of .conf (which would be automatically 
recognized by oslo.config)

oslo.config already supports so-called 'default-config-files' and recently 
(v3.20.0) gained support for 'default-config-dirs'[1]. With that, it would be 
possible to start the different services without any --config-file/--config-dir 
switches if the configuration filenames were adjusted to the defaults.
For example, for neutron-metadata-agent the configuration file that is 
automatically read would be /etc/neutron/neutron-metadata-agent.conf, so 
metadata_agent.ini needs to be renamed to neutron-metadata-agent.conf. The same 
is true for other services.

[1]
http://docs.openstack.org/releasenotes/oslo.config/unreleased.html#id3
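
oslo.config can already discover such files by project and program name. A
minimal sketch of the lookup this RFE wants to rely on (assuming the renamed
file exists on the host):

  from oslo_config import cfg

  # Searches ~/.neutron/, ~/, /etc/neutron/ and /etc/ for
  # neutron-metadata-agent.conf and returns whichever exist.
  files = cfg.find_config_files(project='neutron',
                                prog='neutron-metadata-agent')
  print(files)  # e.g. ['/etc/neutron/neutron-metadata-agent.conf']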

** Affects: neutron
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1654152

Title:
  [RFE]Configuration structure a mess

Status in neutron:
  New

Bug description:
  Neutron's current configuration structure is difficult to understand:

  - it is not clear which services need which configuration file(s) during 
startup.
  - config files end with .ini instead of .conf (which would be automatically 
recognized by oslo.config)

  oslo.config already supports so-called 'default-config-files' and recently 
(v3.20.0) gained support for 'default-config-dirs'[1]. With that, it would be 
possible to start the different services without any --config-file/--config-dir 
switches if the configuration filenames were adjusted to the defaults.
  For example, for neutron-metadata-agent the configuration file that is 
automatically read would be /etc/neutron/neutron-metadata-agent.conf, so 
metadata_agent.ini needs to be renamed to neutron-metadata-agent.conf. The 
same is true for other services.

  [1]
  http://docs.openstack.org/releasenotes/oslo.config/unreleased.html#id3

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1654152/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1520159] Re: HTTP response codes should be extracted to constants

2017-01-04 Thread Pooja Jadhav
** Also affects: cinder
   Importance: Undecided
   Status: New

** Changed in: cinder
 Assignee: (unassigned) => Pooja Jadhav (poojajadhav)

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to Glance.
https://bugs.launchpad.net/bugs/1520159

Title:
  HTTP response codes should be extracted to constants

Status in Cinder:
  New
Status in Glance:
  Fix Released
Status in masakari:
  In Progress

Bug description:
  There are several places in the source code where HTTP response codes
  are used as numeric values. These values should be extracted to a
  common file and the numeric values should be replaced by constants.

  For example:
  common/auth.py:186
elif resp.status == 404: --> elif resp.status == HTTP_NOT_FOUND:
  api/middleware/cache.py:261
if method == 'GET' and status_code == 204: --> if method == 'GET' and 
status_code == HTTP_NO_CONTENT:
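
  One lightweight way to get such names (a sketch only; not necessarily the
  convention these projects adopted, which at the time commonly went through
  six.moves.http_client on Python 2) is the stdlib's http.client constants.
  resp, status_code and the handlers are stand-ins from the example above:

    import http.client

    # http.client exposes plain-int constants for every status code.
    if resp.status == http.client.NOT_FOUND:   # instead of == 404
        handle_missing()
    if status_code == http.client.NO_CONTENT:  # instead of == 204
        skip_caching()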

To manage notifications about this bug go to:
https://bugs.launchpad.net/cinder/+bug/1520159/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1652494] Re: we should make evacuate target_host not required

2017-01-04 Thread OpenStack Infra
Reviewed:  https://review.openstack.org/414743
Committed: 
https://git.openstack.org/cgit/openstack/horizon/commit/?id=4c81bc2993c446f187f9e636229e669d5fc974c7
Submitter: Jenkins
Branch:master

commit 4c81bc2993c446f187f9e636229e669d5fc974c7
Author: zhurong 
Date:   Sun Dec 25 15:57:03 2016 +0800

Make evacuate target_host not required

The Nova API can auto-select the target_host to evacuate to,
so we should make target_host required=False.

Change-Id: Iff649b0b27a859fa85a985dfd2575e192806c291
Closes-Bug: #1652494


** Changed in: horizon
   Status: In Progress => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1652494

Title:
  we should make evacuate target_host not required

Status in OpenStack Dashboard (Horizon):
  Fix Released

Bug description:
  The Nova API can auto-select the target_host to evacuate to,
  so we should make target_host required=False.
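
  For illustration only (the form and field names below are stand-ins, not
  Horizon's actual code), the change amounts to a Django form field like:

    from django import forms

    class EvacuateHostForm(forms.Form):
        # Leaving this empty now means "let the Nova API pick the host".
        target_host = forms.ChoiceField(label="Target Host", required=False)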

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1652494/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1654084] Re: Listing users with non-existent filter returns all users

2017-01-04 Thread Steve Martinelli
According to the HTTP spec, query args are part of the URL; these are
invalid and should result in a 4xx error. However, even the great Google
doesn't adhere to that. Try the following:

https://www.google.com/#q=search+for+something&invalid=param&more=stuff

With that said, I tried looking up what the API working group has to say
about this:

http://specs.openstack.org/openstack/api-
wg/guidelines/pagination_filter_sort.html

But no luck. Let's bounce this bug off of them and see what they have to
say.
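
For illustration, a strict server could reject unknown filters along these
lines (a sketch only: the allowed set is a made-up subset, and the API-WG may
well end up recommending different behaviour):

  ALLOWED_FILTERS = {'name', 'domain_id', 'enabled', 'password_expires_at'}

  def check_filters(query_params):
      unknown = set(query_params) - ALLOWED_FILTERS
      if unknown:
          # would surface as a 400 Bad Request at the API layer
          raise ValueError('unknown filter(s): %s'
                           % ', '.join(sorted(unknown)))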

** Also affects: openstack-api-wg
   Importance: Undecided
   Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Identity (keystone).
https://bugs.launchpad.net/bugs/1654084

Title:
  Listing users with non-existent filter returns all users

Status in OpenStack Identity (keystone):
  Confirmed
Status in openstack-api-wg:
  New

Bug description:
  Using devstack all in one and running the get request:

  curl -X GET 'localhost:35357/v3/users/?cake=isgood'

  gives me back the entire user list instead of an error.
  I also ran this query with httpie and this is what the neat
  version of the output looks like:
  http://paste.openstack.org/show/593916/

To manage notifications about this bug go to:
https://bugs.launchpad.net/keystone/+bug/1654084/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1560226] Re: No notifications on Neutron tag operations

2017-01-04 Thread OpenStack Infra
Reviewed:  https://review.openstack.org/298133
Committed: 
https://git.openstack.org/cgit/openstack/neutron/commit/?id=0daed9ebcab7d46d066fd9d8af576d9aca0a7205
Submitter: Jenkins
Branch:master

commit 0daed9ebcab7d46d066fd9d8af576d9aca0a7205
Author: Hirofumi Ichihara 
Date:   Tue Nov 29 14:26:24 2016 +0900

Add notify for tag operations

When a tag's added to (or removed from) a resource, no notification is
generated indicating that the network (or port or whatever) has changed.
This patch adds notifications for tag operations.

Change-Id: I4373b2220f87751a4d89462bef37d04bf9a71fe7
Closes-Bug: #1560226
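
For a sense of what these notifications look like, a sketch with
oslo.messaging (the event-type string and payload keys here are assumptions,
not necessarily what the patch emits):

  import oslo_messaging
  from oslo_config import cfg

  transport = oslo_messaging.get_notification_transport(cfg.CONF)
  notifier = oslo_messaging.Notifier(transport, publisher_id='network.host1')
  # Consumers such as Searchlight can now re-index the tagged resource.
  notifier.info({}, 'tag.create.end',
                {'parent_resource': 'network',
                 'parent_resource_id': 'NET_ID',
                 'tag': 'production'})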


** Changed in: neutron
   Status: In Progress => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1560226

Title:
  No notifications on Neutron tag operations

Status in neutron:
  Fix Released
Status in OpenStack Search (Searchlight):
  New

Bug description:
  When a tag's added to (or removed from) a resource, no notification is
  generated indicating that the network (or port or whatever) has
  changed, although tags *are* included in notification and API data for
  those resources. It'd be more consistent if attaching a tag to a
  network generated a notification in the same way as if it were
  renamed.

  My use case is that Searchlight would really like to index tags
  attached to networks, routers, etc since it's a very powerful feature
  but we can't provide up to date information unless a notification's
  sent.

  Tested on neutron mitaka rc1.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1560226/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1653480] Re: Unable to perform role assignments for an ldap user with special characters in the name

2017-01-04 Thread Steve Martinelli
I agree that this is likely a python-openstackclient problem (if it were
to be one).

Can you provide more detail about the environment? and what version of
OSC you are using?

There are some tips documented for working with languages here:
http://docs.openstack.org/developer/python-
openstackclient/configuration.html#locale-and-language-support

Marking as invalid for keystone and incomplete for OSC.

** Changed in: keystone
   Status: New => Invalid

** Changed in: python-openstackclient
   Status: New => Incomplete

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Identity (keystone).
https://bugs.launchpad.net/bugs/1653480

Title:
  Unable to perform role assignments for an ldap user with special
  characters in the name

Status in OpenStack Identity (keystone):
  Invalid
Status in python-openstackclient:
  Incomplete

Bug description:
  The user named VM-JLÃW$ is an LDAP user. When I run the openstack role add
  command, it fails as follows.

  [root@ip9-114-192-140 ~]# openstack role add admin --project project1 --user 
VM-JLÃW$
  Traceback (most recent call last):
  File "/usr/bin/openstack", line 10, in 
  sys.exit(main())
  File "/usr/lib/python2.7/site-packages/openstackclient/shell.py", line 177, 
in main
  argv = map(lambda arg: arg.decode(encoding), argv)
  File "/usr/lib/python2.7/site-packages/openstackclient/shell.py", line 177, 
in <lambda>
  argv = map(lambda arg: arg.decode(encoding), argv)
  File "/usr/lib64/python2.7/encodings/utf_8.py", line 16, in decode
  return codecs.utf_8_decode(input, errors, True)
  UnicodeDecodeError: 'utf8' codec can't decode byte 0xc3 in position 5: 
invalid continuation byte 
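
  The byte and position line up with argv arriving as Latin-1 bytes and being
  decoded as UTF-8 (a minimal reproduction sketch, not necessarily this exact
  environment's locale setup):

    # 'Ã' is 0xc3 in Latin-1. In UTF-8, 0xc3 must start a two-byte sequence,
    # but the following 'W' (0x57) is not a valid continuation byte.
    arg = u'VM-JL\xc3W$'.encode('latin-1')
    arg.decode('utf-8')
    # UnicodeDecodeError: 'utf8' codec can't decode byte 0xc3 in position 5:
    # invalid continuation byte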

  
  [root@ip9-114-192-140 ~]# openstack user list
  +------------------------------------------------------------------+----------+
  | ID                                                               | Name     |
  +------------------------------------------------------------------+----------+
  | c297dffba4301deed982828c011c7314ce36f29db82451a911cd4898f3135837 | VM-JLÃW$ |
  +------------------------------------------------------------------+----------+


  When I run the command for the same user using the user ID, it works.

  openstack role add admin --project project1 --user
  c297dffba4301deed982828c011c7314ce36f29db82451a911cd4898f3135837

To manage notifications about this bug go to:
https://bugs.launchpad.net/keystone/+bug/1653480/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1634568] Re: [api] Inconsistency between v3 API and keystone token timestamps

2017-01-04 Thread OpenStack Infra
Reviewed:  https://review.openstack.org/416372
Committed: 
https://git.openstack.org/cgit/openstack/keystone/commit/?id=ec4d0551c0cd3af355a9a64bd2b82c34d538552e
Submitter: Jenkins
Branch:master

commit ec4d0551c0cd3af355a9a64bd2b82c34d538552e
Author: Brant Knudson 
Date:   Tue Jan 3 16:51:25 2017 -0600

Correct timestamp format in token responses

The token issue response has timestamps like this:

  "issued_at": "2017-01-03T22:42:55.00Z"
  "expires_at": "2017-01-03T23:42:55.00Z"

These didn't match the format documented in the API spec (the
response has subsecond precision and a literal Z rather than a ±hh:mm offset).

Change-Id: I1deeac1776a7716ee66d187d1c1c7c1f5b02235f
Closes-Bug: 1634568


** Changed in: keystone
   Status: In Progress => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Identity (keystone).
https://bugs.launchpad.net/bugs/1634568

Title:
  [api] Inconsistency between v3 API and keystone token timestamps

Status in OpenStack Identity (keystone):
  Fix Released

Bug description:
  The v3 API spec for tokens documents the format of timestamps[1]. It
  says the format is like "CCYY-MM-DDThh:mm:ss±hh:mm".

  By this, the timestamps returned by keystone should be like
  2016-10-17T15:17:03+00:00. But they actually show up like this:

  V3:
  "issued_at": "2016-10-17T15:17:03.00Z",
  "expires_at": "2016-10-17T16:17:03.00Z",

  V2:
  "issued_at": "2016-10-17T15:17:56.00Z",
  "expires": "2016-10-17T16:17:56Z",

  Tempest has checks that the timestamp ends in Z.
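
  For illustration, the two formats side by side in plain Python (this is not
  keystone's actual formatting code):

    from datetime import datetime, timezone

    dt = datetime(2016, 10, 17, 15, 17, 3, tzinfo=timezone.utc)
    dt.isoformat()
    # '2016-10-17T15:17:03+00:00'  <- what the spec documents
    dt.strftime('%Y-%m-%dT%H:%M:%S.%f')[:-4] + 'Z'
    # '2016-10-17T15:17:03.00Z'    <- what keystone returned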

  [1] http://developer.openstack.org/api-ref/identity/v3/?expanded
  =validate-and-show-information-for-token-detail#id19

To manage notifications about this bug go to:
https://bugs.launchpad.net/keystone/+bug/1634568/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1654128] [NEW] l3_agent_scheduler AZLeastRoutersScheduler TypeError

2017-01-04 Thread LIU Yulong
Public bug reported:

This bug was reported for mitaka/liberty release.

There is already a fixed bug here:
https://bugs.launchpad.net/neutron/+bug/1641879

However, the original patch that caused this bug was backported to mitaka and 
liberty.
https://review.openstack.org/#/q/9f30df85fe78d830331a43fa29fc2d83708c861d


Traces:
http://paste.openstack.org/show/593939/

How to reproduce:
1. Upgrade neutron to 8.3.0 (current stable/mitaka version)
2. router_scheduler_driver = 
neutron.scheduler.l3_agent_scheduler.AZLeastRoutersScheduler

** Affects: neutron
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1654128

Title:
  l3_agent_scheduler AZLeastRoutersScheduler TypeError

Status in neutron:
  New

Bug description:
  This bug was reported for mitaka/liberty release.

  There is already a fixed bug here:
  https://bugs.launchpad.net/neutron/+bug/1641879

  However, the original patch that caused this bug was backported to mitaka and 
liberty.
  https://review.openstack.org/#/q/9f30df85fe78d830331a43fa29fc2d83708c861d

  
  Traces:
  http://paste.openstack.org/show/593939/

  How to reproduce:
  1. Upgrade neutron to 8.3.0 (current stable/mitaka version)
  2. router_scheduler_driver = 
neutron.scheduler.l3_agent_scheduler.AZLeastRoutersScheduler

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1654128/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1654102] Re: Ironic: TypeError: unsupported operand type(s) for *: 'NoneType' and 'int' - during select_destinations()

2017-01-04 Thread Matt Riedemann
** Changed in: nova
 Assignee: (unassigned) => Matt Riedemann (mriedem)

** Changed in: nova
   Importance: Low => Medium

** Also affects: nova/newton
   Importance: Undecided
   Status: New

** Changed in: nova/newton
   Status: New => Confirmed

** Changed in: nova/newton
   Importance: Undecided => Medium

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1654102

Title:
  Ironic: TypeError: unsupported operand type(s) for *: 'NoneType' and
  'int' - during select_destinations()

Status in OpenStack Compute (nova):
  In Progress
Status in OpenStack Compute (nova) newton series:
  Confirmed

Bug description:
  This looks like a duplicate of bug 1610679, but that one is for the non-
  Ironic scheduler host manager; this is for the Ironic host manager.

  Seen here:

  http://logs.openstack.org/20/407220/2/check/gate-tempest-dsvm-ironic-
  ipa-wholedisk-agent_ipmitool-tinyipa-multinode-ubuntu-xenial-
  nv/0779f94/logs/screen-n-sch.txt.gz#_2017-01-04_19_05_16_398

  2017-01-04 19:05:16.398 20709 DEBUG oslo_concurrency.lockutils 
[req-6ce318a2-53e5-4b80-84a5-f896ab48d627 
tempest-BaremetalMultitenancy-1820943769 
tempest-BaremetalMultitenancy-1820943769] Lock 
"(u'ubuntu-xenial-2-node-rax-ord-6478210', 
u'21da4933-a128-45f6-a765-7e6bc071e0f3')" released by 
"nova.scheduler.host_manager._locked_update" :: held 0.002s inner 
/usr/local/lib/python2.7/dist-packages/oslo_concurrency/lockutils.py:282
  2017-01-04 19:05:16.398 20709 DEBUG oslo_concurrency.lockutils 
[req-6ce318a2-53e5-4b80-84a5-f896ab48d627 
tempest-BaremetalMultitenancy-1820943769 
tempest-BaremetalMultitenancy-1820943769] Lock 
"(u'ubuntu-xenial-2-node-rax-ord-6478210-359401', 
u'00d62acd-6f3b-4cb8-9668-12517c84b3b9')" acquired by 
"nova.scheduler.host_manager._locked_update" :: waited 0.000s inner 
/usr/local/lib/python2.7/dist-packages/oslo_concurrency/lockutils.py:270
  2017-01-04 19:05:16.398 20709 DEBUG nova.scheduler.host_manager 
[req-6ce318a2-53e5-4b80-84a5-f896ab48d627 
tempest-BaremetalMultitenancy-1820943769 
tempest-BaremetalMultitenancy-1820943769] Update host state from compute node: 
ComputeNode(cpu_allocation_ratio=16.0,cpu_info='',created_at=2017-01-04T19:02:54Z,current_workload=None,deleted=False,deleted_at=None,disk_allocation_ratio=1.0,disk_available_least=10,free_disk_gb=None,free_ram_mb=None,host='ubuntu-xenial-2-node-rax-ord-6478210-359401',host_ip=10.210.37.31,hypervisor_hostname='00d62acd-6f3b-4cb8-9668-12517c84b3b9',hypervisor_type='ironic',hypervisor_version=1,id=8,local_gb=10,local_gb_used=0,memory_mb=384,memory_mb_used=0,metrics=None,numa_topology=None,pci_device_pools=PciDevicePoolList,ram_allocation_ratio=1.0,running_vms=None,service_id=None,stats={cpu_arch='x86_64'},supported_hv_specs=[HVSpec],updated_at=2017-01-04T19:04:51Z,uuid=521cf775-5a16-4111-bf26-c30fb6725716,vcpus=1,vcpus_used=0)
 _locked_update /opt/stack/new/nova/
 nova/scheduler/host_manager.py:168
  2017-01-04 19:05:16.400 20709 DEBUG oslo_concurrency.lockutils 
[req-6ce318a2-53e5-4b80-84a5-f896ab48d627 
tempest-BaremetalMultitenancy-1820943769 
tempest-BaremetalMultitenancy-1820943769] Lock 
"(u'ubuntu-xenial-2-node-rax-ord-6478210-359401', 
u'00d62acd-6f3b-4cb8-9668-12517c84b3b9')" released by 
"nova.scheduler.host_manager._locked_update" :: held 0.002s inner 
/usr/local/lib/python2.7/dist-packages/oslo_concurrency/lockutils.py:282
  2017-01-04 19:05:16.400 20709 ERROR oslo_messaging.rpc.server 
[req-6ce318a2-53e5-4b80-84a5-f896ab48d627 
tempest-BaremetalMultitenancy-1820943769 
tempest-BaremetalMultitenancy-1820943769] Exception during message handling
  2017-01-04 19:05:16.400 20709 ERROR oslo_messaging.rpc.server Traceback (most 
recent call last):
  2017-01-04 19:05:16.400 20709 ERROR oslo_messaging.rpc.server   File 
"/usr/local/lib/python2.7/dist-packages/oslo_messaging/rpc/server.py", line 
155, in _process_incoming
  2017-01-04 19:05:16.400 20709 ERROR oslo_messaging.rpc.server res = 
self.dispatcher.dispatch(message)
  2017-01-04 19:05:16.400 20709 ERROR oslo_messaging.rpc.server   File 
"/usr/local/lib/python2.7/dist-packages/oslo_messaging/rpc/dispatcher.py", line 
222, in dispatch
  2017-01-04 19:05:16.400 20709 ERROR oslo_messaging.rpc.server return 
self._do_dispatch(endpoint, method, ctxt, args)
  2017-01-04 19:05:16.400 20709 ERROR oslo_messaging.rpc.server   File 
"/usr/local/lib/python2.7/dist-packages/oslo_messaging/rpc/dispatcher.py", line 
192, in _do_dispatch
  2017-01-04 19:05:16.400 20709 ERROR oslo_messaging.rpc.server result = 
func(ctxt, **new_args)
  2017-01-04 19:05:16.400 20709 ERROR oslo_messaging.rpc.server   File 
"/usr/local/lib/python2.7/dist-packages/oslo_messaging/rpc/server.py", line 
218, in inner
  2017-01-04 19:05:16.400 20709 ERROR oslo_messaging.rpc.server return 
func(*args, **kwargs)
  2017-01-04 19:05:16.400 20709 ERROR oslo_messaging.rpc.server   File 
"/opt/sta

[Yahoo-eng-team] [Bug 1654107] [NEW] resource tracking fails with Unauthorized from placement API (keystone v3)

2017-01-04 Thread Matt Riedemann
Public bug reported:

Seen here:

http://logs.openstack.org/65/416765/1/check/gate-tempest-dsvm-neutron-
identity-v3-only-full-ubuntu-xenial-
nv/faf1363/logs/screen-n-cpu.txt.gz#_2017-01-04_23_15_06_674

2017-01-04 23:15:06.674 7096 ERROR nova.compute.manager 
[req-832f822c-02b3-453b-b80b-61ad7f06f401 - -] Error updating resources for 
node ubuntu-xenial-osic-cloud1-disk-6483179.
2017-01-04 23:15:06.674 7096 ERROR nova.compute.manager Traceback (most recent 
call last):
2017-01-04 23:15:06.674 7096 ERROR nova.compute.manager   File 
"/opt/stack/new/nova/nova/compute/manager.py", line 6537, in 
update_available_resource_for_node
2017-01-04 23:15:06.674 7096 ERROR nova.compute.manager 
rt.update_available_resource(context, nodename)
2017-01-04 23:15:06.674 7096 ERROR nova.compute.manager   File 
"/opt/stack/new/nova/nova/compute/resource_tracker.py", line 540, in 
update_available_resource
2017-01-04 23:15:06.674 7096 ERROR nova.compute.manager 
self._update_available_resource(context, resources)
2017-01-04 23:15:06.674 7096 ERROR nova.compute.manager   File 
"/usr/local/lib/python2.7/dist-packages/oslo_concurrency/lockutils.py", line 
271, in inner
2017-01-04 23:15:06.674 7096 ERROR nova.compute.manager return f(*args, 
**kwargs)
2017-01-04 23:15:06.674 7096 ERROR nova.compute.manager   File 
"/opt/stack/new/nova/nova/compute/resource_tracker.py", line 564, in 
_update_available_resource
2017-01-04 23:15:06.674 7096 ERROR nova.compute.manager 
self._init_compute_node(context, resources)
2017-01-04 23:15:06.674 7096 ERROR nova.compute.manager   File 
"/opt/stack/new/nova/nova/compute/resource_tracker.py", line 451, in 
_init_compute_node
2017-01-04 23:15:06.674 7096 ERROR nova.compute.manager 
self.scheduler_client.update_resource_stats(self.compute_node)
2017-01-04 23:15:06.674 7096 ERROR nova.compute.manager   File 
"/opt/stack/new/nova/nova/scheduler/client/__init__.py", line 60, in 
update_resource_stats
2017-01-04 23:15:06.674 7096 ERROR nova.compute.manager 
self.reportclient.update_resource_stats(compute_node)
2017-01-04 23:15:06.674 7096 ERROR nova.compute.manager   File 
"/opt/stack/new/nova/nova/scheduler/client/__init__.py", line 37, in 
__run_method
2017-01-04 23:15:06.674 7096 ERROR nova.compute.manager return 
getattr(self.instance, __name)(*args, **kwargs)
2017-01-04 23:15:06.674 7096 ERROR nova.compute.manager   File 
"/opt/stack/new/nova/nova/scheduler/client/report.py", line 462, in 
update_resource_stats
2017-01-04 23:15:06.674 7096 ERROR nova.compute.manager 
compute_node.hypervisor_hostname)
2017-01-04 23:15:06.674 7096 ERROR nova.compute.manager   File 
"/opt/stack/new/nova/nova/scheduler/client/report.py", line 282, in 
_ensure_resource_provider
2017-01-04 23:15:06.674 7096 ERROR nova.compute.manager rp = 
self._get_resource_provider(uuid)
2017-01-04 23:15:06.674 7096 ERROR nova.compute.manager   File 
"/opt/stack/new/nova/nova/scheduler/client/report.py", line 47, in wrapper
2017-01-04 23:15:06.674 7096 ERROR nova.compute.manager return f(self, *a, 
**k)
2017-01-04 23:15:06.674 7096 ERROR nova.compute.manager   File 
"/opt/stack/new/nova/nova/scheduler/client/report.py", line 195, in 
_get_resource_provider
2017-01-04 23:15:06.674 7096 ERROR nova.compute.manager resp = 
self.get("/resource_providers/%s" % uuid)
2017-01-04 23:15:06.674 7096 ERROR nova.compute.manager   File 
"/opt/stack/new/nova/nova/scheduler/client/report.py", line 160, in get
2017-01-04 23:15:06.674 7096 ERROR nova.compute.manager 
endpoint_filter=self.ks_filter, raise_exc=False)
2017-01-04 23:15:06.674 7096 ERROR nova.compute.manager   File 
"/usr/local/lib/python2.7/dist-packages/keystoneauth1/session.py", line 710, in 
get
2017-01-04 23:15:06.674 7096 ERROR nova.compute.manager return 
self.request(url, 'GET', **kwargs)
2017-01-04 23:15:06.674 7096 ERROR nova.compute.manager   File 
"/usr/local/lib/python2.7/dist-packages/positional/__init__.py", line 101, in 
inner
2017-01-04 23:15:06.674 7096 ERROR nova.compute.manager return 
wrapped(*args, **kwargs)
2017-01-04 23:15:06.674 7096 ERROR nova.compute.manager   File 
"/usr/local/lib/python2.7/dist-packages/keystoneauth1/session.py", line 467, in 
request
2017-01-04 23:15:06.674 7096 ERROR nova.compute.manager auth_headers = 
self.get_auth_headers(auth)
2017-01-04 23:15:06.674 7096 ERROR nova.compute.manager   File 
"/usr/local/lib/python2.7/dist-packages/keystoneauth1/session.py", line 770, in 
get_auth_headers
2017-01-04 23:15:06.674 7096 ERROR nova.compute.manager return 
auth.get_headers(self, **kwargs)
2017-01-04 23:15:06.674 7096 ERROR nova.compute.manager   File 
"/usr/local/lib/python2.7/dist-packages/keystoneauth1/plugin.py", line 90, in 
get_headers
2017-01-04 23:15:06.674 7096 ERROR nova.compute.manager token = 
self.get_token(session)
2017-01-04 23:15:06.674 7096 ERROR nova.compute.manager   File 
"/usr/local/lib/python2.7/dist-packages/keystoneauth1/identity/base.py", line 
90, in get_

[Yahoo-eng-team] [Bug 1654102] [NEW] Ironic: TypeError: unsupported operand type(s) for *: 'NoneType' and 'int' - during select_destinations()

2017-01-04 Thread Matt Riedemann
Public bug reported:

This looks like a duplicate of bug 1610679, but that one is for the non-Ironic
scheduler host manager; this is for the Ironic host manager.

Seen here:

http://logs.openstack.org/20/407220/2/check/gate-tempest-dsvm-ironic-
ipa-wholedisk-agent_ipmitool-tinyipa-multinode-ubuntu-xenial-
nv/0779f94/logs/screen-n-sch.txt.gz#_2017-01-04_19_05_16_398

2017-01-04 19:05:16.398 20709 DEBUG oslo_concurrency.lockutils 
[req-6ce318a2-53e5-4b80-84a5-f896ab48d627 
tempest-BaremetalMultitenancy-1820943769 
tempest-BaremetalMultitenancy-1820943769] Lock 
"(u'ubuntu-xenial-2-node-rax-ord-6478210', 
u'21da4933-a128-45f6-a765-7e6bc071e0f3')" released by 
"nova.scheduler.host_manager._locked_update" :: held 0.002s inner 
/usr/local/lib/python2.7/dist-packages/oslo_concurrency/lockutils.py:282
2017-01-04 19:05:16.398 20709 DEBUG oslo_concurrency.lockutils 
[req-6ce318a2-53e5-4b80-84a5-f896ab48d627 
tempest-BaremetalMultitenancy-1820943769 
tempest-BaremetalMultitenancy-1820943769] Lock 
"(u'ubuntu-xenial-2-node-rax-ord-6478210-359401', 
u'00d62acd-6f3b-4cb8-9668-12517c84b3b9')" acquired by 
"nova.scheduler.host_manager._locked_update" :: waited 0.000s inner 
/usr/local/lib/python2.7/dist-packages/oslo_concurrency/lockutils.py:270
2017-01-04 19:05:16.398 20709 DEBUG nova.scheduler.host_manager 
[req-6ce318a2-53e5-4b80-84a5-f896ab48d627 
tempest-BaremetalMultitenancy-1820943769 
tempest-BaremetalMultitenancy-1820943769] Update host state from compute node: 
ComputeNode(cpu_allocation_ratio=16.0,cpu_info='',created_at=2017-01-04T19:02:54Z,current_workload=None,deleted=False,deleted_at=None,disk_allocation_ratio=1.0,disk_available_least=10,free_disk_gb=None,free_ram_mb=None,host='ubuntu-xenial-2-node-rax-ord-6478210-359401',host_ip=10.210.37.31,hypervisor_hostname='00d62acd-6f3b-4cb8-9668-12517c84b3b9',hypervisor_type='ironic',hypervisor_version=1,id=8,local_gb=10,local_gb_used=0,memory_mb=384,memory_mb_used=0,metrics=None,numa_topology=None,pci_device_pools=PciDevicePoolList,ram_allocation_ratio=1.0,running_vms=None,service_id=None,stats={cpu_arch='x86_64'},supported_hv_specs=[HVSpec],updated_at=2017-01-04T19:04:51Z,uuid=521cf775-5a16-4111-bf26-c30fb6725716,vcpus=1,vcpus_used=0)
 _locked_update /opt/stack/new/nova/no
 va/scheduler/host_manager.py:168
2017-01-04 19:05:16.400 20709 DEBUG oslo_concurrency.lockutils 
[req-6ce318a2-53e5-4b80-84a5-f896ab48d627 
tempest-BaremetalMultitenancy-1820943769 
tempest-BaremetalMultitenancy-1820943769] Lock 
"(u'ubuntu-xenial-2-node-rax-ord-6478210-359401', 
u'00d62acd-6f3b-4cb8-9668-12517c84b3b9')" released by 
"nova.scheduler.host_manager._locked_update" :: held 0.002s inner 
/usr/local/lib/python2.7/dist-packages/oslo_concurrency/lockutils.py:282
2017-01-04 19:05:16.400 20709 ERROR oslo_messaging.rpc.server 
[req-6ce318a2-53e5-4b80-84a5-f896ab48d627 
tempest-BaremetalMultitenancy-1820943769 
tempest-BaremetalMultitenancy-1820943769] Exception during message handling
2017-01-04 19:05:16.400 20709 ERROR oslo_messaging.rpc.server Traceback (most 
recent call last):
2017-01-04 19:05:16.400 20709 ERROR oslo_messaging.rpc.server   File 
"/usr/local/lib/python2.7/dist-packages/oslo_messaging/rpc/server.py", line 
155, in _process_incoming
2017-01-04 19:05:16.400 20709 ERROR oslo_messaging.rpc.server res = 
self.dispatcher.dispatch(message)
2017-01-04 19:05:16.400 20709 ERROR oslo_messaging.rpc.server   File 
"/usr/local/lib/python2.7/dist-packages/oslo_messaging/rpc/dispatcher.py", line 
222, in dispatch
2017-01-04 19:05:16.400 20709 ERROR oslo_messaging.rpc.server return 
self._do_dispatch(endpoint, method, ctxt, args)
2017-01-04 19:05:16.400 20709 ERROR oslo_messaging.rpc.server   File 
"/usr/local/lib/python2.7/dist-packages/oslo_messaging/rpc/dispatcher.py", line 
192, in _do_dispatch
2017-01-04 19:05:16.400 20709 ERROR oslo_messaging.rpc.server result = 
func(ctxt, **new_args)
2017-01-04 19:05:16.400 20709 ERROR oslo_messaging.rpc.server   File 
"/usr/local/lib/python2.7/dist-packages/oslo_messaging/rpc/server.py", line 
218, in inner
2017-01-04 19:05:16.400 20709 ERROR oslo_messaging.rpc.server return 
func(*args, **kwargs)
2017-01-04 19:05:16.400 20709 ERROR oslo_messaging.rpc.server   File 
"/opt/stack/new/nova/nova/scheduler/manager.py", line 84, in select_destinations
2017-01-04 19:05:16.400 20709 ERROR oslo_messaging.rpc.server dests = 
self.driver.select_destinations(ctxt, spec_obj)
2017-01-04 19:05:16.400 20709 ERROR oslo_messaging.rpc.server   File 
"/opt/stack/new/nova/nova/scheduler/filter_scheduler.py", line 51, in 
select_destinations
2017-01-04 19:05:16.400 20709 ERROR oslo_messaging.rpc.server 
selected_hosts = self._schedule(context, spec_obj)
2017-01-04 19:05:16.400 20709 ERROR oslo_messaging.rpc.server   File 
"/opt/stack/new/nova/nova/scheduler/filter_scheduler.py", line 96, in _schedule
2017-01-04 19:05:16.400 20709 ERROR oslo_messaging.rpc.server hosts = 
self._get_all_host_states(elevated)
2017-01-04 19:05:16.400 20709 ERROR oslo

[Yahoo-eng-team] [Bug 1515870] Re: server cannot launch when a new nova compute node failed to register

2017-01-04 Thread Matt Riedemann
*** This bug is a duplicate of bug 1610679 ***
https://bugs.launchpad.net/bugs/1610679

For the non-Ironic case that this bug was originally reported against,
it might be a duplicate of bug 1610679 which was fixed in Ocata and
backported to Newton:

https://review.openstack.org/#/q/Ia68298a3f01d89bbf302ac734389f7282176c553,n,z

** This bug has been marked a duplicate of bug 1610679
   race conditions between compute and scheduler disk report

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1515870

Title:
  server cannot launch when a new nova compute node failed to register

Status in OpenStack Compute (nova):
  Confirmed

Bug description:
  1. Exact version of Nova/OpenStack you are running:
  kilo 2015.1.0
  2. Relevant log files:
  2015-11-03 16:00:29.990 3568 ERROR oslo_messaging.rpc.dispatcher 
[req-ce8d5d3d-6a79-4827-b472-02940be546bc 60ca5cf0e1bf44b985ee5ceae440fcfc 
b2a5638f40fd43a59a9be1e9c12f7d89 - - -] Exception during message handling: 
unsupported operand type(s) for *: 'NoneType' and 'int'
  2015-11-03 16:00:29.990 3568 TRACE oslo_messaging.rpc.dispatcher Traceback 
(most recent call last):
  2015-11-03 16:00:29.990 3568 TRACE oslo_messaging.rpc.dispatcher   File 
"/usr/lib/python2.7/site-packages/oslo_messaging/rpc/dispatcher.py", line 142, 
in _dispatch_and_reply
  2015-11-03 16:00:29.990 3568 TRACE oslo_messaging.rpc.dispatcher 
executor_callback))
  2015-11-03 16:00:29.990 3568 TRACE oslo_messaging.rpc.dispatcher   File 
"/usr/lib/python2.7/site-packages/oslo_messaging/rpc/dispatcher.py", line 186, 
in _dispatch
  2015-11-03 16:00:29.990 3568 TRACE oslo_messaging.rpc.dispatcher 
executor_callback)
  2015-11-03 16:00:29.990 3568 TRACE oslo_messaging.rpc.dispatcher   File 
"/usr/lib/python2.7/site-packages/oslo_messaging/rpc/dispatcher.py", line 130, 
in _do_dispatch
  2015-11-03 16:00:29.990 3568 TRACE oslo_messaging.rpc.dispatcher result = 
func(ctxt, **new_args)
  2015-11-03 16:00:29.990 3568 TRACE oslo_messaging.rpc.dispatcher   File 
"/usr/lib/python2.7/site-packages/oslo_messaging/rpc/server.py", line 142, in 
inner
  2015-11-03 16:00:29.990 3568 TRACE oslo_messaging.rpc.dispatcher return 
func(*args, **kwargs)
  2015-11-03 16:00:29.990 3568 TRACE oslo_messaging.rpc.dispatcher   File 
"/usr/lib/python2.7/site-packages/nova/scheduler/manager.py", line 86, in 
select_destinations
  2015-11-03 16:00:29.990 3568 TRACE oslo_messaging.rpc.dispatcher 
filter_properties)
  2015-11-03 16:00:29.990 3568 TRACE oslo_messaging.rpc.dispatcher   File 
"/usr/lib/python2.7/site-packages/nova/scheduler/filter_scheduler.py", line 67, 
in select_destinations
  2015-11-03 16:00:29.990 3568 TRACE oslo_messaging.rpc.dispatcher 
filter_properties)
  2015-11-03 16:00:29.990 3568 TRACE oslo_messaging.rpc.dispatcher   File 
"/usr/lib/python2.7/site-packages/nova/scheduler/filter_scheduler.py", line 
131, in _schedule
  2015-11-03 16:00:29.990 3568 TRACE oslo_messaging.rpc.dispatcher hosts = 
self._get_all_host_states(elevated)
  2015-11-03 16:00:29.990 3568 TRACE oslo_messaging.rpc.dispatcher   File 
"/usr/lib/python2.7/site-packages/nova/scheduler/filter_scheduler.py", line 
176, in _get_all_host_states
  2015-11-03 16:00:29.990 3568 TRACE oslo_messaging.rpc.dispatcher return 
self.host_manager.get_all_host_states(context)
  2015-11-03 16:00:29.990 3568 TRACE oslo_messaging.rpc.dispatcher   File 
"/usr/lib/python2.7/site-packages/nova/scheduler/host_manager.py", line 561, in 
get_all_host_states
  2015-11-03 16:00:29.990 3568 TRACE oslo_messaging.rpc.dispatcher 
host_state = self.host_state_cls(host, node, compute=compute)
  2015-11-03 16:00:29.990 3568 TRACE oslo_messaging.rpc.dispatcher   File 
"/usr/lib/python2.7/site-packages/nova/scheduler/host_manager.py", line 318, in 
host_state_cls
  2015-11-03 16:00:29.990 3568 TRACE oslo_messaging.rpc.dispatcher return 
HostState(host, node, **kwargs)
  2015-11-03 16:00:29.990 3568 TRACE oslo_messaging.rpc.dispatcher   File 
"/usr/lib/python2.7/site-packages/nova/scheduler/host_manager.py", line 163, in 
__init__
  2015-11-03 16:00:29.990 3568 TRACE oslo_messaging.rpc.dispatcher 
self.update_from_compute_node(compute)
  2015-11-03 16:00:29.990 3568 TRACE oslo_messaging.rpc.dispatcher   File 
"/usr/lib/python2.7/site-packages/nova/scheduler/host_manager.py", line 208, in 
update_from_compute_node
  2015-11-03 16:00:29.990 3568 TRACE oslo_messaging.rpc.dispatcher 
free_disk_mb = free_gb * 1024
  2015-11-03 16:00:29.990 3568 TRACE oslo_messaging.rpc.dispatcher TypeError: 
unsupported operand type(s) for *: 'NoneType' and 'int'
  2015-11-03 16:00:29.990 3568 TRACE oslo_messaging.rpc.dispatcher
  2015-11-03 16:00:29.991 3568 ERROR oslo_messaging._drivers.common 
[req-ce8d5d3d-6a79-4827-b472-02940be546bc 60ca5cf0e1bf44b985ee5ceae440fcfc 
b2a5638f40fd43a59a9be1e9c12f7d89 - - -] Returning exc

[Yahoo-eng-team] [Bug 1517770] Re: NULL free_disk_gb causes scheduler failure

2017-01-04 Thread Matt Riedemann
*** This bug is a duplicate of bug 1610679 ***
https://bugs.launchpad.net/bugs/1610679

** This bug is no longer a duplicate of bug 1515870
   server cannot launch when a new nova compute node failed to register
** This bug has been marked a duplicate of bug 1610679
   race conditions between compute and scheduler disk report

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1517770

Title:
  NULL free_disk_gb causes scheduler failure

Status in OpenStack Compute (nova):
  Confirmed

Bug description:
  It appears a race exists between nova-scheduler and the compute
  manager when a ComputeNode entry is created for the first time.

  The following log messages were noticed after multiple transient
  failures to create VM on a newly deployed single node system.

  2015-11-03 18:41:27.886 13735 WARNING nova.scheduler.host_manager 
[req-dd2b0758-78a4-4a67-90c8-9586d4d55489 db30a70a389548ed916f52d2f5c25544 
617c3194750f44cfa1e9a747b2ac36f5 - - -] Host zs-zhost1 has more disk space than 
database expected (13119gb > Nonegb)
  2015-11-03 18:41:27.904 13783 WARNING nova.scheduler.utils 
[req-dd2b0758-78a4-4a67-90c8-9586d4d55489 db30a70a389548ed916f52d2f5c25544 
617c3194750f44cfa1e9a747b2ac36f5 - - -] Failed to compute_task_build_instances: 
unsupported operand type(s) for *: 'NoneType' and 'int'
  Traceback (most recent call last):
  File "/usr/lib/python2.7/dist-packages/oslo_messaging/rpc/dispatcher.py", 
line 142, in _dispatch_and_reply
  executor_callback))
  File "/usr/lib/python2.7/dist-packages/oslo_messaging/rpc/dispatcher.py", 
line 186, in _dispatch
  executor_callback)
  File "/usr/lib/python2.7/dist-packages/oslo_messaging/rpc/dispatcher.py", 
line 130, in _do_dispatch
  result = func(ctxt, **new_args)
  File "/usr/lib/python2.7/dist-packages/oslo_messaging/rpc/server.py", line 
142, in inner
  return func(*args, **kwargs)
  File "/usr/lib/python2.7/dist-packages/nova/scheduler/manager.py", line 86, 
in select_destinations
  filter_properties)
  File "/usr/lib/python2.7/dist-packages/nova/scheduler/filter_scheduler.py", 
line 67, in select_destinations
  filter_properties)
  File "/usr/lib/python2.7/dist-packages/nova/scheduler/filter_scheduler.py", 
line 131, in _schedule
  hosts = self._get_all_host_states(elevated)
  File "/usr/lib/python2.7/dist-packages/nova/scheduler/filter_scheduler.py", 
line 176, in _get_all_host_states
  return self.host_manager.get_all_host_states(context)
  File "/usr/lib/python2.7/dist-packages/nova/scheduler/host_manager.py", line 
552, in get_all_host_states
  host_state = self.host_state_cls(host, node, compute=compute)
  File "/usr/lib/python2.7/dist-packages/nova/scheduler/host_manager.py", line 
309, in host_state_cls
  return HostState(host, node, **kwargs)
  File "/usr/lib/python2.7/dist-packages/nova/scheduler/host_manager.py", line 
157, in __init__
  self.update_from_compute_node(compute)
  File "/usr/lib/python2.7/dist-packages/nova/scheduler/host_manager.py", line 
202, in update_from_compute_node
  free_disk_mb = free_gb * 1024
  TypeError: unsupported operand type(s) for *: 'NoneType' and 'int'
  2015-11-03 18:41:27.907 13783 WARNING nova.scheduler.utils 
[req-dd2b0758-78a4-4a67-90c8-9586d4d55489 db30a70a389548ed916f52d2f5c25544 
617c3194750f44cfa1e9a747b2ac36f5 - - -] [instance: 
bd6bb6a7-e917-4ce7-b207-817144ac7853] Setting instance to ERROR state.

  I believe that during the execution of
  resource_tracker._update_available_resource() for a new node, the
  period between the initial insert of the ComputeNode entry in
  _init_compute_node() and the call to _update() leaves a ComputeNode
  with a NULL free_disk_gb for a small window of time.

  Commit 6aa36ab seems likely to have exposed this more widely.
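
  A minimal sketch of the failure mode, with a defensive guard for contrast
  (illustration only; the merged fix may handle the race differently):

    class FakeComputeNode(object):      # stand-in for the new DB record
        free_disk_gb = None             # still NULL before the first _update()

    compute = FakeComputeNode()
    free_gb = compute.free_disk_gb
    # free_disk_mb = free_gb * 1024    # TypeError: 'NoneType' * 'int'
    free_disk_mb = free_gb * 1024 if free_gb is not None else 0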

  Versions (Kilo):
  ii  nova-common  1:2015.1.1-0ubuntu1~cloud2   
 all  OpenStack Compute - common files
  ii  nova-compute 1:2015.1.1-0ubuntu1~cloud2   
 all  OpenStack Compute - compute node base
  ii  nova-compute-kvm 1:2015.1.1-0ubuntu1~cloud2   
 all  OpenStack Compute - compute node (KVM)
  ii  nova-compute-libvirt 1:2015.1.1-0ubuntu1~cloud2   
 all  OpenStack Compute - compute node libvirt support
  ii  python-nova  1:2015.1.1-0ubuntu1~cloud2   
 all  OpenStack Compute Python libraries
  ii  python-novaclient1:2.22.0-0ubuntu1~cloud0 
 all  client library for OpenStack Compute API

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1517770/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1647316] Re: scheduler report client sends allocations with value of zero, violating min_unit

2017-01-04 Thread OpenStack Infra
Fix proposed to branch: stable/newton
Review: https://review.openstack.org/416764

** Changed in: nova/newton
   Status: New => In Progress

** Changed in: nova/newton
 Assignee: (unassigned) => Matt Riedemann (mriedem)

** Changed in: nova
   Status: In Progress => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1647316

Title:
  scheduler report client sends allocations with value of zero,
  violating min_unit

Status in OpenStack Compute (nova):
  Fix Released
Status in OpenStack Compute (nova) newton series:
  In Progress

Bug description:
  
  When a VM boots using non-local disk, it tries to send an allocation of 
'DISK_GB': 0. This violates the default min_unit of 1 and causes an error that 
looks like this:

  [req-858cbed4-c113-45e8-94e3-1d8ee64f9de0 488c2b05a66b441199f4c1dca7accd5b 
3fa5b55ecc154427b636119f0920d252 - default default] Bad inventory
  Traceback (most recent call last):
File 
"/usr/lib/python2.7/site-packages/nova/api/openstack/placement/handlers/allocation.py",
 line 253, in set_allocations
  allocations.create_all()
File "/usr/lib/python2.7/site-packages/oslo_versionedobjects/base.py", line 
226, in wrapper
  return fn(self, *args, **kwargs)
File "/usr/lib/python2.7/site-packages/nova/objects/resource_provider.py", 
line 1050, in create_all
  self._set_allocations(self._context, self.objects)
File "/usr/lib/python2.7/site-packages/oslo_db/sqlalchemy/enginefacade.py", 
line 894, in wrapper
  return fn(*args, **kwargs)
File "/usr/lib/python2.7/site-packages/nova/objects/resource_provider.py", 
line 1011, in _set_allocations
  before_gens = _check_capacity_exceeded(conn, allocs)
File "/usr/lib/python2.7/site-packages/nova/objects/resource_provider.py", 
line 921, in _check_capacity_exceeded
  resource_provider=rp_uuid)
  InvalidAllocationConstraintsViolated: Unable to create allocation for 
'DISK_GB' on resource provider 'f9398126-d0e8-4cf8-ae45-9103a88aa13d'. The 
requested amount would violate inventory constraints.

  The code causing this is at
  
https://github.com/openstack/nova/blob/474c2ef28234dacc658e9a78762cac66ef7fe334/nova/scheduler/client/report.py#L105

  The correct fix is probably to omit a resource class from the dict
  whenever its value is zero.
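
  A minimal sketch of that suggestion (the resource dict is hypothetical, not
  the merged patch):

    resources = {'VCPU': 1, 'MEMORY_MB': 512, 'DISK_GB': 0}
    # Drop zero-valued resource classes so min_unit=1 is never violated.
    allocation = {rc: amount for rc, amount in resources.items() if amount}
    # -> {'VCPU': 1, 'MEMORY_MB': 512}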

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1647316/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1650174] Re: api-ref: project_id/tenant_id in request/response body are shown as "path"

2017-01-04 Thread OpenStack Infra
Reviewed:  https://review.openstack.org/411170
Committed: 
https://git.openstack.org/cgit/openstack/neutron-lib/commit/?id=318a6b606b4b39e4749691bd020affcd2533499f
Submitter: Jenkins
Branch:master

commit 318a6b606b4b39e4749691bd020affcd2533499f
Author: Akihiro Motoki 
Date:   Thu Dec 15 17:38:38 2016 +0900

api-ref: project_id in req/resp body should be "body"

project_id and tenant_id field in request/response body are
marked as "path" now. It should be "body".

- "project_id-path" already exists and it can be used
  for "project_id" in URL path.
- "project_id" is now used for body.
- "project_id-body" is now duplicated, so it was removed.
  fwaas.inc is the only user of project_id-body and
  it is updated accordingly.
- quotas.inc is updated to use 'project_id-path'.
  Also project_id and tenant_id in response body of a quotas operation
  have been dropped as they do not exist.

Note that project_id/tenant_id in request body should be marked
as "optional" in most resources but this patch does not touch them
to avoid unnecessary merge conflicts.
They will be fixed in separate patches.

Change-Id: Ic5a4f55b837ee0a51b7186c3342a94c8c00f6c97
Closes-Bug: #1650174


** Changed in: neutron
   Status: In Progress => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1650174

Title:
  api-ref: project_id/tenant_id in request/response body are shown as
  "path"

Status in neutron:
  Fix Released

Bug description:
  project_id/tenant_id in request/response body are shown as "path". They 
should be "body".
  The parameter definition itself is wrong and it affects all resources.
  For example, 
http://developer.openstack.org/api-ref/networking/v2/index.html?expanded=list-routers-detail#list-routers
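
  For reference, os-api-ref parameter definitions live in a parameters.yaml
  file; the distinction at issue looks roughly like this (a sketch of the
  format, not the exact neutron-lib entries):

    project_id:
      in: body
      type: string
    project_id-path:
      in: path
      type: string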

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1650174/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1654083] [NEW] Invalid Get request with non-existent filter returns all users

2017-01-04 Thread Richard
*** This bug is a duplicate of bug 1654084 ***
https://bugs.launchpad.net/bugs/1654084

Public bug reported:

Using devstack all in one and running the get request:

curl -X GET 'localhost:35357/v3/users/?cake=isgood'

gives me back the entire user list instead of an error.
I also ran this query with httpie and this is what the neat
version of the output looks like:
http://paste.openstack.org/show/593916/

** Affects: keystone
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Identity (keystone).
https://bugs.launchpad.net/bugs/1654083

Title:
  Invalid Get request with non-existent filter returns all users

Status in OpenStack Identity (keystone):
  New

Bug description:
  Using devstack all in one and running the get request:

  curl -X GET 'localhost:35357/v3/users/?cake=isgood'

  gives me back the entire user list instead of an error.
  I also ran this query with httpie and this is what the neat
  version of the output looks like:
  http://paste.openstack.org/show/593916/

To manage notifications about this bug go to:
https://bugs.launchpad.net/keystone/+bug/1654083/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1548774] Re: LBaas V2: operating_status of 'dead' member is always online with Healthmonitor

2017-01-04 Thread Armando Migliaccio
We should reassess whether a neutron-lbaas fix is worth pursuing.

** Also affects: octavia
   Importance: Undecided
   Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1548774

Title:
  LBaas V2: operating_status of 'dead' member is always online with
  Healthmonitor

Status in neutron:
  Won't Fix
Status in octavia:
  New
Status in senlin:
  New

Bug description:
  Expectation:
  The LBaaS v2 healthmonitor should update the status of a "bad" member just as 
it does with v1. In practice, however, the operating_status of pool members 
never changes, whether the member is healthy or not.

  ENV:
  My devstack runs on a single Ubuntu 14.04 node and uses master branch code, 
MySQL and RabbitMQ. Tenant name is 'demo', username is 'demo'. I am using 
private-subnet for the loadbalancer and the member VM, with the octavia 
provider.

  Steps to reproduce:
  Create a VM from the cirros-0.3.4-x86_64-uec image and add it as a member to 
a loadbalancer pool with a healthmonitor. Then curl to get the statuses of the 
loadbalancer; the member status is online. Then nova stop the member's VM and 
curl again and again. The member's operating_status stays 'online' instead of 
'error'.

  The curl response is below. There is no difference before and after the pool
  member VM turns into SHUTOFF, since no status change ever happens.

  {"statuses": {"loadbalancer": {"name": "", "listeners": [{"pools":
  [{"name": "", "provisioning_status": "ACTIVE", "healthmonitor":
  {"type": "PING", "id": "cb41b4e4-7008-479f-a6d9-4751ac7a1ee4", "name":
  "", "provisioning_status": "ACTIVE"}, "members": [{"name": "",
  "provisioning_status": "ACTIVE", "address": "10.0.0.13",
  "protocol_port": 80, "id": "6d682536-e9fe-4456-ad24-df8521857ee0",
  "operating_status": "ONLINE"}], "id": "eaef79a9-d5e0-4582-b45b-
  cd460beea4fc", "operating_status": "ONLINE"}], "name": "", "id":
  "4e3a7d98-3ab9-4a39-b915-a9651fcada65", "operating_status": "ONLINE",
  "provisioning_status": "ACTIVE"}], "id":
  "ef45be96-15e0-42d9-af34-34608dafdb6c", "operating_status": "ONLINE",
  "provisioning_status": "ACTIVE"}}}

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1548774/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1654084] [NEW] Listing users with non-existent filter returns all users

2017-01-04 Thread Richard
Public bug reported:

Using devstack all in one and running the get request:

curl -X GET 'localhost:35357/v3/users/?cake=isgood'

gives me back the entire user list instead of an error.
I also ran this query with httpie and this is what the neat
version of the output looks like:
http://paste.openstack.org/show/593916/

** Affects: keystone
 Importance: Low
 Status: Confirmed


** Tags: user-experience

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Identity (keystone).
https://bugs.launchpad.net/bugs/1654084

Title:
  Listing users with non-existent filter returns all users

Status in OpenStack Identity (keystone):
  Confirmed

Bug description:
  Using devstack all in one and running the get request:

  curl -X GET 'localhost:35357/v3/users/?cake=isgood'

  gives me back the entire user list instead of an error.
  I also ran this query with httpie and this is what the neat
  version of the output looks like:
  http://paste.openstack.org/show/593916/

To manage notifications about this bug go to:
https://bugs.launchpad.net/keystone/+bug/1654084/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1632247] Re: nova list --all-tenants fetches all instance faults but uses only latest

2017-01-04 Thread OpenStack Infra
Reviewed:  https://review.openstack.org/409943
Committed: 
https://git.openstack.org/cgit/openstack/nova/commit/?id=176c5c8a65efbde01020bc69a97bb7d05720589e
Submitter: Jenkins
Branch:master

commit 176c5c8a65efbde01020bc69a97bb7d05720589e
Author: Jay Pipes 
Date:   Mon Dec 12 16:39:23 2016 -0500

Only return latest instance fault for instances

This patch addresses slowness that can occur when doing a list servers
API operation when there are many thousands of records in the
instance_faults table.

Previously, in the Instance.fill_faults() method, we were getting all
instance fault records for a set of instances having one of a set of
supplied instance UUIDs and then iterating over those faults and
returning a dict of instance UUID to the first fault returned (which
happened to be the latest fault because of ordering the SQL query by
created_at).

This patch adds a new InstanceFaultList.get_latest_by_instance_uuids()
method that does some SQL-fu to only return the latest fault records for
each instance being inspected.

Closes-Bug: #1632247

Co-Authored-By: Roman Podoliaka 
Change-Id: I8f2227b3969791ebb2d04d74a316b9d97a4b1571
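
The "SQL-fu" is a standard greatest-per-group query. A sketch of the idea in
SQLAlchemy (InstanceFault below is a stand-in model, not nova's real one, and
the merged patch's SQL may differ):

  from sqlalchemy import Column, DateTime, Integer, String, and_, func
  from sqlalchemy.ext.declarative import declarative_base

  Base = declarative_base()

  class InstanceFault(Base):
      __tablename__ = 'instance_faults'
      id = Column(Integer, primary_key=True)
      instance_uuid = Column(String(36), index=True)
      created_at = Column(DateTime)

  def latest_faults(session, uuids):
      # Subquery: newest created_at per instance of interest.
      latest = (session.query(InstanceFault.instance_uuid,
                              func.max(InstanceFault.created_at)
                                  .label('latest'))
                .filter(InstanceFault.instance_uuid.in_(uuids))
                .group_by(InstanceFault.instance_uuid)
                .subquery())
      # Join back so only the single latest fault row per instance is fetched.
      return (session.query(InstanceFault)
              .join(latest,
                    and_(InstanceFault.instance_uuid == latest.c.instance_uuid,
                         InstanceFault.created_at == latest.c.latest))
              .all())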


** Changed in: nova
   Status: In Progress => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1632247

Title:
  nova list --all-tenants fetches all instance faults but uses only
  latest

Status in OpenStack Compute (nova):
  Fix Released

Bug description:
  Description
  ===
  There are 15 instances. I execute the command: nova --debug list
  --all-tenants, but it takes more than 40 seconds. I read the nova-api code;
  it sends a GET request and reads the instance_faults table for detail
  information. The instance_faults table has tens of thousands of records,
  and each instance has many records.

 GET /v2/433288e1244046a9bd306658b732dded/servers/detail

  I think the instance_faults table needs to be optimized. A large number of
  records in the instance_faults table are useless; only the last three
  records per instance need to be kept, the others can be deleted.

  Is there any other way to optimize this?

  Steps to reproduce
  ==
  A chronological list of steps which will bring off the
  issue you noticed:
  * I execute the command: nova --debug list --all-tenants

  A list of openstack client commands (with correct argument value)
  $ nova --debug list --all-tenants

  
  Expected result
  ===
  I expect the result to come back within 10 seconds.

  Actual result
  =
  But the query took more than 40 seconds.

  Environment
  ===
  1. version 
  Mitaka

  2. Which hypervisor did you use?
      Libvirt + KVM

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1632247/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1648314] Re: create/update resource_class APIs raises HTTP 500 Internal Server Error when name is greater than 255 characters

2017-01-04 Thread OpenStack Infra
Reviewed:  https://review.openstack.org/409002
Committed: 
https://git.openstack.org/cgit/openstack/nova/commit/?id=874e666e7a53ad59b820791491584ba0f8276fd1
Submitter: Jenkins
Branch:master

commit 874e666e7a53ad59b820791491584ba0f8276fd1
Author: bhagyashris 
Date:   Wed Dec 7 18:28:48 2016 +0530

Return 400 when name is more than 255 characters

The APIs listed below return a 500 error if you pass a name longer than
255 characters.
1. create resource_classes
2. update resource_classes

Added a maxLength check to the schema to ensure the name cannot be longer
than 255 characters.

Closes-Bug: #1648314
Change-Id: I4ae54f3061fe43d87a656088db1d2ae454eb8237

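For reference, the fix is a standard JSON-Schema maxLength constraint; a
minimal sketch of the idea (field layout illustrative, not nova's exact
schema module):

  import jsonschema

  PUT_RC_SCHEMA = {
      'type': 'object',
      'properties': {
          'name': {'type': 'string', 'maxLength': 255},
      },
      'required': ['name'],
      'additionalProperties': False,
  }

  # A 256-character name now fails validation (-> HTTP 400) instead of
  # reaching the database layer and erroring out (-> HTTP 500).
  try:
      jsonschema.validate({'name': 'CUSTOM_' + 'X' * 249}, PUT_RC_SCHEMA)
  except jsonschema.ValidationError as exc:
      print(exc.message)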

** Changed in: nova
   Status: In Progress => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1648314

Title:
  create/update resource_class APIs raises HTTP 500 Internal Server
  Error when name is greater than 255 characters

Status in OpenStack Compute (nova):
  Fix Released

Bug description:
  The create/update resource_class APIs raise an HTTP 500 Internal Server
  Error if you pass a name greater than 255 characters.

  Steps to reproduce:

  Create a resource class, passing a name parameter greater than 255
  characters:

  $ curl -g -i -X POST http://10.232.48.200/placement/resource_classes
  -H "OpenStack-API-Version: placement 1.2" -H "Content-Type:
  application/json" -H "X-Auth-Token: cc6e31c316a24820a6d9257cdb9d802f"
  -d '{"name":
  
"CUSTOM_TEST
 
"}'

  Output:

  HTTP/1.1 500 Internal Server Error
  Date: Wed, 07 Dec 2016 08:02:05 GMT
  Server: Apache/2.4.7 (Ubuntu)
  x-openstack-request-id: req-a836bf46-e51d-4bfe-814d-59c2aa62fd82
  Content-Length: 128
  Connection: close
  Content-Type: application/json; charset=UTF-8

  {"computeFault": {"message": "The server has either erred or is
  incapable of performing the requested operation.", "code": 500}}

  Error Logs:

  2016-12-08 12:04:13.122 TRACE nova.api.openstack Traceback (most recent call last):
  2016-12-08 12:04:13.122 TRACE nova.api.openstack   File "/opt/stack/nova/nova/api/openstack/__init__.py", line 88, in __call__
  2016-12-08 12:04:13.122 TRACE nova.api.openstack     return req.get_response(self.application)
  2016-12-08 12:04:13.122 TRACE nova.api.openstack   File "/usr/local/lib/python2.7/dist-packages/webob/request.py", line 1299, in send
  2016-12-08 12:04:13.122 TRACE nova.api.openstack     application, catch_exc_info=False)
  2016-12-08 12:04:13.122 TRACE nova.api.openstack   File "/usr/local/lib/python2.7/dist-packages/webob/request.py", line 1263, in call_application
  2016-12-08 12:04:13.122 TRACE nova.api.openstack     app_iter = application(self.environ, start_response)
  2016-12-08 12:04:13.122 TRACE nova.api.openstack   File "/usr/local/lib/python2.7/dist-packages/webob/dec.py", line 130, in __call__
  2016-12-08 12:04:13.122 TRACE nova.api.openstack     resp = self.call_func(req, *args, **self.kwargs)
  2016-12-08 12:04:13.122 TRACE nova.api.openstack   File "/usr/local/lib/python2.7/dist-packages/webob/dec.py", line 195, in call_func
  2016-12-08 12:04:13.122 TRACE nova.api.openstack     return self.func(req, *args, **kwargs)
  2016-12-08 12:04:13.122 TRACE nova.api.openstack   File "/opt/stack/nova/nova/api/openstack/placement/microversion.py", line 104, in __call__
  2016-12-08 12:04:13.122 TRACE nova.api.openstack     response = req.get_response(self.application)
  2016-12-08 12:04:13.122 TRACE nova.api.openstack   File "/usr/local/lib/python2.7/dist-packages/webob/request.py", line 1299, in send
  2016-12-08 12:04:13.122 TRACE nova.api.openstack     application, catch_exc_info=False)
  2016-12-08 12:04:13.122 TRACE nova.api.openstack   File "/usr/local/lib/python2.7/dist-packages/webo

[Yahoo-eng-team] [Bug 1589993] Re: Murano cannot deploy with federated user

2017-01-04 Thread Serg Melikyan
** No longer affects: murano

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Identity (keystone).
https://bugs.launchpad.net/bugs/1589993

Title:
  Murano cannot deploy with federated user

Status in OpenStack Identity (keystone):
  In Progress

Bug description:
  Deploying with a federated user throws an exception in murano-engine
  with:

  Exception Could not find role: 9fe2ff9ee4384b1894a90878d3e92bab (HTTP
  404)

  The mentioned role is _member_

  The full trace:

  2016-06-07 15:08:05.732 8194 ERROR murano.common.engine   File "/usr/lib/python2.7/dist-packages/murano/common/engine.py", line 159, in execute
  2016-06-07 15:08:05.732 8194 ERROR murano.common.engine     self._create_trust()
  2016-06-07 15:08:05.732 8194 ERROR murano.common.engine   File "/usr/lib/python2.7/dist-packages/murano/common/engine.py", line 282, in _create_trust
  2016-06-07 15:08:05.732 8194 ERROR murano.common.engine     self._session.token, self._session.project_id)
  2016-06-07 15:08:05.732 8194 ERROR murano.common.engine   File "/usr/lib/python2.7/dist-packages/murano/common/auth_utils.py", line 98, in create_trust
  2016-06-07 15:08:05.732 8194 ERROR murano.common.engine     project=project)
  2016-06-07 15:08:05.732 8194 ERROR murano.common.engine   File "/usr/lib/python2.7/dist-packages/keystoneclient/v3/contrib/trusts.py", line 75, in create
  2016-06-07 15:08:05.732 8194 ERROR murano.common.engine     **kwargs)
  2016-06-07 15:08:05.732 8194 ERROR murano.common.engine   File "/usr/lib/python2.7/dist-packages/keystoneclient/base.py", line 75, in func
  2016-06-07 15:08:05.732 8194 ERROR murano.common.engine     return f(*args, **new_kwargs)
  2016-06-07 15:08:05.732 8194 ERROR murano.common.engine   File "/usr/lib/python2.7/dist-packages/keystoneclient/base.py", line 339, in create
  2016-06-07 15:08:05.732 8194 ERROR murano.common.engine     self.key)
  2016-06-07 15:08:05.732 8194 ERROR murano.common.engine   File "/usr/lib/python2.7/dist-packages/keystoneclient/base.py", line 171, in _post
  2016-06-07 15:08:05.732 8194 ERROR murano.common.engine     resp, body = self.client.post(url, body=body, **kwargs)
  2016-06-07 15:08:05.732 8194 ERROR murano.common.engine   File "/usr/lib/python2.7/dist-packages/keystoneauth1/adapter.py", line 179, in post
  2016-06-07 15:08:05.732 8194 ERROR murano.common.engine     return self.request(url, 'POST', **kwargs)
  2016-06-07 15:08:05.732 8194 ERROR murano.common.engine   File "/usr/lib/python2.7/dist-packages/keystoneauth1/adapter.py", line 331, in request
  2016-06-07 15:08:05.732 8194 ERROR murano.common.engine     resp = super(LegacyJsonAdapter, self).request(*args, **kwargs)
  2016-06-07 15:08:05.732 8194 ERROR murano.common.engine   File "/usr/lib/python2.7/dist-packages/keystoneauth1/adapter.py", line 98, in request
  2016-06-07 15:08:05.732 8194 ERROR murano.common.engine     return self.session.request(url, method, **kwargs)
  2016-06-07 15:08:05.732 8194 ERROR murano.common.engine   File "/usr/lib/python2.7/dist-packages/positional/__init__.py", line 94, in inner
  2016-06-07 15:08:05.732 8194 ERROR murano.common.engine     return func(*args, **kwargs)
  2016-06-07 15:08:05.732 8194 ERROR murano.common.engine   File "/usr/lib/python2.7/dist-packages/keystoneclient/session.py", line 420, in request
  2016-06-07 15:08:05.732 8194 ERROR murano.common.engine     raise exceptions.from_response(resp, method, url)
  2016-06-07 15:08:05.732 8194 ERROR murano.common.engine NotFound: Could not find role: 9fe2ff9ee4384b1894a90878d3e92bab (HTTP 404) (Request-ID: req-760d033b-e456-4915-b197-e450d4c8a405)

  
  Something seems to be wrong with creating the trust.

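  For anyone reproducing this outside murano, the failing call is
  keystoneclient's trust creation; a rough, hedged sketch of the
  equivalent call (identifiers illustrative, and it needs a live cloud,
  hence commented out):

    # from keystoneauth1 import session   # plus an auth plugin
    # from keystoneclient.v3 import client
    # keystone = client.Client(session=session.Session(auth=...))
    # trust = keystone.trusts.create(
    #     trustor_user=trustor_id,
    #     trustee_user=trustee_id,
    #     project=project_id,
    #     role_names=['_member_'],   # this role lookup 404s here
    #     impersonation=True,
    # )
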
To manage notifications about this bug go to:
https://bugs.launchpad.net/keystone/+bug/1589993/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1647316] Re: scheduler report client sends allocations with value of zero, violating min_unit

2017-01-04 Thread Matt Riedemann
Marking this high as it's causing issues in the ceph jobs:

http://logs.openstack.org/14/414214/12/check/gate-tempest-dsvm-full-devstack-plugin-ceph-ubuntu-xenial/511c30e/logs/screen-n-cpu.txt.gz#_2017-01-03_23_17_44_657

2017-01-03 23:17:44.657 4254 DEBUG nova.scheduler.client.report [req-eced32b1-1b58-446a-bf57-02280b9f2b4d tempest-TenantUsagesTestJSON-1895317078 tempest-TenantUsagesTestJSON-1895317078] [instance: f83871cb-c77e-4c99-833f-186db97b38b1] Sending allocation for instance {'allocations': [{'resource_provider': {'uuid': 'f8ce7899-0b48-4294-8df2-057ac325f4d2'}, 'resources': {'MEMORY_MB': 64, 'VCPU': 1, 'DISK_GB': 0}}]} _allocate_for_instance /opt/stack/new/nova/nova/scheduler/client/report.py:505
2017-01-03 23:17:44.750 4254 WARNING nova.scheduler.client.report [req-eced32b1-1b58-446a-bf57-02280b9f2b4d tempest-TenantUsagesTestJSON-1895317078 tempest-TenantUsagesTestJSON-1895317078] Unable to submit allocation for instance f83871cb-c77e-4c99-833f-186db97b38b1 (409 409 Conflict

There was a conflict when trying to complete your request.

 Unable to allocate inventory: Unable to create allocation for 'DISK_GB' on resource provider 'f8ce7899-0b48-4294-8df2-057ac325f4d2'. The requested amount would violate inventory constraints.  )

** Changed in: nova
   Importance: Medium => High

** Also affects: nova/newton
   Importance: Undecided
   Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1647316

Title:
  scheduler report client sends allocations with value of zero,
  violating min_unit

Status in OpenStack Compute (nova):
  In Progress
Status in OpenStack Compute (nova) newton series:
  New

Bug description:
  
  When a VM boots using non-local disk, it tries to send an allocation of 'DISK_GB': 0. This violates the default min_unit of 1 and causes an error that looks like this:

  [req-858cbed4-c113-45e8-94e3-1d8ee64f9de0 488c2b05a66b441199f4c1dca7accd5b 3fa5b55ecc154427b636119f0920d252 - default default] Bad inventory
  Traceback (most recent call last):
    File "/usr/lib/python2.7/site-packages/nova/api/openstack/placement/handlers/allocation.py", line 253, in set_allocations
      allocations.create_all()
    File "/usr/lib/python2.7/site-packages/oslo_versionedobjects/base.py", line 226, in wrapper
      return fn(self, *args, **kwargs)
    File "/usr/lib/python2.7/site-packages/nova/objects/resource_provider.py", line 1050, in create_all
      self._set_allocations(self._context, self.objects)
    File "/usr/lib/python2.7/site-packages/oslo_db/sqlalchemy/enginefacade.py", line 894, in wrapper
      return fn(*args, **kwargs)
    File "/usr/lib/python2.7/site-packages/nova/objects/resource_provider.py", line 1011, in _set_allocations
      before_gens = _check_capacity_exceeded(conn, allocs)
    File "/usr/lib/python2.7/site-packages/nova/objects/resource_provider.py", line 921, in _check_capacity_exceeded
      resource_provider=rp_uuid)
  InvalidAllocationConstraintsViolated: Unable to create allocation for 'DISK_GB' on resource provider 'f9398126-d0e8-4cf8-ae45-9103a88aa13d'. The requested amount would violate inventory constraints.

  The causing code is at
  https://github.com/openstack/nova/blob/474c2ef28234dacc658e9a78762cac66ef7fe334/nova/scheduler/client/report.py#L105

  The correct fix is probably that whenever the value of any resource
  class is zero, don't send that resource class in the dict.

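  A minimal sketch of that proposed fix, using the allocation payload
  shape from the report above (illustrative, not the merged nova code):

    # Drop zero-valued resource classes so min_unit=1 is never violated.
    allocation = {'MEMORY_MB': 64, 'VCPU': 1, 'DISK_GB': 0}
    allocation = {rc: amt for rc, amt in allocation.items() if amt > 0}
    print(allocation)  # {'MEMORY_MB': 64, 'VCPU': 1}
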
To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1647316/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1651678] Re: boot server request randomly hanging at n-cpu side, and didn't get to Ironic

2017-01-04 Thread Matt Riedemann
** Also affects: nova/newton
   Importance: Undecided
   Status: New

** Changed in: nova/newton
   Importance: Undecided => Medium

** Changed in: nova/newton
   Status: New => Confirmed

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1651678

Title:
  boot server request randomly hanging at n-cpu side, and didn't get to
  Ironic

Status in Ironic:
  Invalid
Status in OpenStack Compute (nova):
  Fix Released
Status in OpenStack Compute (nova) newton series:
  Confirmed

Bug description:
  Ironic gate jobs have been randomly timing out during the last few weeks:

  
  An example is: 
http://logs.openstack.org/46/327046/36/check/gate-tempest-dsvm-ironic-ipa-partition-pxe_ipmitool-tinyipa-ubuntu-xenial/48db3ea/console.html

  2016-12-20 23:30:24.418214 | Traceback (most recent call last):
  2016-12-20 23:30:24.418231 |   File "tempest/test.py", line 99, in wrapper
  2016-12-20 23:30:24.418248 |     return f(self, *func_args, **func_kwargs)
  2016-12-20 23:30:24.418296 |   File "/opt/stack/new/tempest/.tox/tempest/local/lib/python2.7/site-packages/ironic_tempest_plugin/tests/scenario/test_baremetal_basic_ops.py", line 111, in test_baremetal_server_ops
  2016-12-20 23:30:24.418316 |     self.instance, self.node = self.boot_instance()
  2016-12-20 23:30:24.418361 |   File "/opt/stack/new/tempest/.tox/tempest/local/lib/python2.7/site-packages/ironic_tempest_plugin/tests/scenario/baremetal_manager.py", line 173, in boot_instance
  2016-12-20 23:30:24.418375 |     self.wait_node(instance['id'])
  2016-12-20 23:30:24.418417 |   File "/opt/stack/new/tempest/.tox/tempest/local/lib/python2.7/site-packages/ironic_tempest_plugin/tests/scenario/baremetal_manager.py", line 117, in wait_node
  2016-12-20 23:30:24.418441 |     raise lib_exc.TimeoutException(msg)
  2016-12-20 23:30:24.418464 | tempest.lib.exceptions.TimeoutException: Request timed out
  2016-12-20 23:30:24.418494 | Details: Timed out waiting to get Ironic node by instance id 50e23a00-5b92-49b7-8dd0-5b8715ba7e26

  Nova compute seems stuck at "_do_build_and_run_instance
  /opt/stack/new/nova/nova/compute/manager.py:1754"

  2016-12-21 13:24:24.307 21735 DEBUG oslo_messaging._drivers.amqpdriver [-] received message with unique_id: 3b9dab54da604a8cadc6c854588a1a5d __call__ /usr/local/lib/python2.7/dist-packages/oslo_messaging/_drivers/amqpdriver.py:196
  2016-12-21 13:24:24.312 21735 DEBUG oslo_concurrency.lockutils [req-7b291e0c-c5b3-4a8a-b4db-e7cef3150b03 tempest-BaremetalBasicOps-1775111554 tempest-BaremetalBasicOps-1775111554] Lock "6376a75b-2970-42f5-9f1b-b34db22a23e4" acquired by "nova.compute.manager._locked_do_build_and_run_instance" :: waited 0.000s inner /usr/local/lib/python2.7/dist-packages/oslo_concurrency/lockutils.py:270
  2016-12-21 13:24:24.313 21735 DEBUG oslo_messaging._drivers.amqpdriver [req-7b291e0c-c5b3-4a8a-b4db-e7cef3150b03 tempest-BaremetalBasicOps-1775111554 tempest-BaremetalBasicOps-1775111554] CALL msg_id: 92cc73436d164feab727c5b7c81ec179 exchange 'nova' topic 'conductor' _send /usr/local/lib/python2.7/dist-packages/oslo_messaging/_drivers/amqpdriver.py:442
  2016-12-21 13:24:24.326 21735 DEBUG oslo_messaging._drivers.amqpdriver [-] received reply msg_id: 92cc73436d164feab727c5b7c81ec179 __call__ /usr/local/lib/python2.7/dist-packages/oslo_messaging/_drivers/amqpdriver.py:299
  2016-12-21 13:24:24.327 21735 DEBUG nova.compute.manager [req-7b291e0c-c5b3-4a8a-b4db-e7cef3150b03 tempest-BaremetalBasicOps-1775111554 tempest-BaremetalBasicOps-1775111554] [instance: 6376a75b-2970-42f5-9f1b-b34db22a23e4] Starting instance... _do_build_and_run_instance /opt/stack/new/nova/nova/compute/manager.py:1754
  2016-12-21 13:24:24.330 21735 DEBUG oslo_messaging._drivers.amqpdriver [req-7b291e0c-c5b3-4a8a-b4db-e7cef3150b03 tempest-BaremetalBasicOps-1775111554 tempest-BaremetalBasicOps-1775111554] CALL msg_id: 15898ce761a143c690ea51c6af5d4f23 exchange 'nova' topic 'conductor' _send /usr/local/lib/python2.7/dist-packages/oslo_messaging/_drivers/amqpdriver.py:442
  2016-12-21 13:24:24.367 21735 DEBUG nova.compute.resource_tracker [req-f3cfc8fa-df45-4da4-adf2-83688458fa16 - -] Compute_service record updated for ubuntu-xenial-osic-cloud1-s3500-6327285:039bbc98-5123-470c-8e09-74e8f35a1391 _update_available_resource /opt/stack/new/nova/nova/compute/resource_tracker.py:601
  2016-12-21 13:24:24.367 21735 DEBUG oslo_concurrency.lockutils [req-f3cfc8fa-df45-4da4-adf2-83688458fa16 - -] Lock "compute_resources" released by "nova.compute.resource_tracker._update_available_resource" :: held 6.935s inner /usr/local/lib/python2.7/dist-packages/oslo_concurrency/lockutils.py:282

  full log available: http://logs.openstack.org/39/404239/14/check/gate-tempest-dsvm-ironic-ipa-wholedisk-pxe_snmp-tinyipa-ubuntu-xenial-nv/8f98498/logs/screen-n-cpu.txt.gz#_2016-12-2

[Yahoo-eng-team] [Bug 1653960] Re: Modal header should default to page_header

2017-01-04 Thread OpenStack Infra
Reviewed:  https://review.openstack.org/416552
Committed: 
https://git.openstack.org/cgit/openstack/horizon/commit/?id=83e3ff88ec2f3b776c282bdb73fdf23cb2ab3468
Submitter: Jenkins
Branch:master

commit 83e3ff88ec2f3b776c282bdb73fdf23cb2ab3468
Author: Rob Cresswell 
Date:   Wed Jan 4 13:31:36 2017 +

Remove duplicated modal_header statements

Across the codebase we've been using modal_header values that are
identical (or very close to) the page_title values. This patch sets the
modal_header to default to the page_title value, and cleans up a few
inconsistencies. Also deleted a couple of redundant submit_label values
that were identical to the default ("Submit").

Change-Id: I88815c3801c29b3fbc41e0cb426a50653255595f
Closes-Bug: 1653960


** Changed in: horizon
   Status: In Progress => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1653960

Title:
  Modal header should default to page_header

Status in OpenStack Dashboard (Horizon):
  Fix Released

Bug description:
  Many modals have a modal_header value that is identical to
  page_header; we could just default to page_header and remove the
  duplication.

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1653960/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1639930] Re: initramfs network configuration ignored if only ip6= on kernel command line

2017-01-04 Thread LaMont Jones
No maas changes were required here, since it always specifies both ip=
and ip6=

** Changed in: maas
   Status: New => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to cloud-init.
https://bugs.launchpad.net/bugs/1639930

Title:
  initramfs network configuration ignored if only ip6= on kernel command
  line

Status in cloud-init:
  Fix Released
Status in MAAS:
  Fix Released
Status in cloud-init package in Ubuntu:
  Fix Released
Status in cloud-init source package in Xenial:
  Fix Released
Status in cloud-init source package in Yakkety:
  In Progress

Bug description:
  === Begin SRU Template ===
  [Impact]
  On a system booted with both ip6= and ip= on the kernel command line,
  cloud-init will raise an exception, failing to process user-data and
  to have its normal effect on boot.

  That is because cloud-init previously raised an exception when more
  than one file in /run/net*.conf declared the same DEVICE.  Changes to
  isc-dhcp and initramfs-tools have changed their behavior and cloud-init
  has to adjust to allow DEVICE6= and DEVICE= in separate files.

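  A minimal sketch of the relaxed check described above, assuming the
  /run/net*.conf files have been parsed into dicts (names illustrative,
  not cloud-init's actual code):

    def merge_initramfs_net_configs(parsed_files):
        # One DEVICE= (IPv4) and one DEVICE6= (IPv6) entry per NIC is
        # fine; only a repeated key for the same NIC is a conflict.
        merged = {}
        for cfg in parsed_files:
            for key in ('DEVICE', 'DEVICE6'):
                nic = cfg.get(key)
                if nic is None:
                    continue
                if (nic, key) in merged:
                    raise ValueError('duplicate %s for %s' % (key, nic))
                merged[(nic, key)] = cfg
        return merged

    # eth0 declared once for IPv4 and once for IPv6 no longer raises:
    merge_initramfs_net_configs([{'DEVICE': 'eth0', 'PROTO': 'dhcp'},
                                 {'DEVICE6': 'eth0', 'PROTO': 'dhcp6'}])
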
  [Test Case]
  Boot a system on a network with both ipv4 and ipv6 dhcp servers,
  and pass kernel command line with:
    ip=dhcp ip6=dhcp

  [Regression Potential]
  Regression seems unlikely as this is relaxing a check.  Where previously
  an exception would have been raised, cloud-init will now go on.

  So it seems most likely, something that didn't work before (due to raised
  exception) would now still not work, but with failures.  That is not
  expected, but that would likely be where regressions were found.

  === End SRU Template ===

  In changes made under bug 1621615 (specifically a1cdebdea), we now
  expect that there may be an 'ip6=' argument on the kernel command line.
  The changes made did not test the case where there is 'ip6=' and no
  'ip='.

  The code currently will return with no network configuration found if
  there is only ip6=...

  Related bugs:
   * bug 1621615: network not configured when ipv6 netbooted into cloud-init
   * bug 1621507: initramfs-tools configure_networking() fails to dhcp IPv6 
addresses
   * bug 1635716: Can't bring up a machine on a dual network (ipv4 and ipv6)

To manage notifications about this bug go to:
https://bugs.launchpad.net/cloud-init/+bug/1639930/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1632109] Re: UX: Alert message in "Create Network" Modal does not have col-* class

2017-01-04 Thread OpenStack Infra
Reviewed:  https://review.openstack.org/398328
Committed: 
https://git.openstack.org/cgit/openstack/horizon/commit/?id=46c2ad601ce20827b02c3e0c4a1ad4f1400a1f0b
Submitter: Jenkins
Branch:master

commit 46c2ad601ce20827b02c3e0c4a1ad4f1400a1f0b
Author: braveliuchina 
Date:   Wed Nov 16 20:20:41 2016 +0800

Add col-sm-12 to network modal error

This patch adds a missing bootstrap col class to the errors
that show in the create/update network and subnet modals.

Change-Id: I82460db791c7da579f0f0379ea2e44599e000d0e
Closes-Bug: 1632109


** Changed in: horizon
   Status: In Progress => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1632109

Title:
  UX: Alert message in "Create Network" Modal does not have col-* class

Status in OpenStack Dashboard (Horizon):
  Fix Released

Bug description:
  How to reproduce:
  1. Go to Project->Networks
  2. Click on "Create Network" button.
  3. Do not fill in the form
  4. Go to tab "Subnet" then click on tab "Subnet details"
  5. See how the alert message fills 100% of the width of the modal, when
  it should have a margin on both the left and the right.

  The alert box should be inside a col-*-12 class.

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1632109/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1577753] Re: Cloud-init fails of stage init

2017-01-04 Thread Narinder Gupta
** Changed in: opnfv
   Status: New => Invalid

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to cloud-init.
https://bugs.launchpad.net/bugs/1577753

Title:
  Cloud-init fails of stage init

Status in cloud-init:
  New
Status in OPNFV:
  Invalid

Bug description:
  Cloud-init 0.7.7 on Ubuntu 16.04 seems unable to connect to the
  OpenStack metadata service and fails, although the metadata service
  itself is available; see the log here:

  root@ubuntu:/etc/network# cloud-init --debug init
  2016-05-03 12:25:04,855 - handlers.py[DEBUG]: start: init-network: searching for network datasources
  2016-05-03 12:25:04,855 - util.py[DEBUG]: Reading from /proc/uptime (quiet=False)
  2016-05-03 12:25:04,855 - util.py[DEBUG]: Read 16 bytes from /proc/uptime
  2016-05-03 12:25:04,856 - util.py[DEBUG]: Reading from /var/lib/cloud/data/status.json (quiet=False)
  2016-05-03 12:25:04,856 - util.py[DEBUG]: Read 548 bytes from /var/lib/cloud/data/status.json
  2016-05-03 12:25:04,857 - util.py[DEBUG]: Creating symbolic link from '/run/cloud-init/status.json' => '../../var/lib/cloud/data/status.json'
  2016-05-03 12:25:04,858 - util.py[DEBUG]: Attempting to remove /run/cloud-init/status.json
  2016-05-03 12:25:04,858 - util.py[DEBUG]: Running command ['systemd-detect-virt', '--quiet', '--container'] with allowed return codes [0] (shell=False, capture=True)
  2016-05-03 12:25:04,861 - util.py[DEBUG]: Running command ['running-in-container'] with allowed return codes [0] (shell=False, capture=True)
  2016-05-03 12:25:04,862 - util.py[DEBUG]: Running command ['lxc-is-container'] with allowed return codes [0] (shell=False, capture=True)
  2016-05-03 12:25:04,864 - util.py[DEBUG]: Reading from /proc/1/environ (quiet=False)
  2016-05-03 12:25:04,864 - util.py[DEBUG]: Read 110 bytes from /proc/1/environ
  2016-05-03 12:25:04,864 - util.py[DEBUG]: Reading from /proc/self/status (quiet=False)
  2016-05-03 12:25:04,865 - util.py[DEBUG]: Read 896 bytes from /proc/self/status
  2016-05-03 12:25:04,865 - util.py[DEBUG]: Reading from /proc/cmdline (quiet=False)
  2016-05-03 12:25:04,865 - util.py[DEBUG]: Read 64 bytes from /proc/cmdline
  2016-05-03 12:25:04,866 - util.py[DEBUG]: Reading from /proc/uptime (quiet=False)
  2016-05-03 12:25:04,866 - util.py[DEBUG]: Read 16 bytes from /proc/uptime
  2016-05-03 12:25:04,866 - templater.py[WARNING]: Cheetah not available as the default renderer for unknown template, reverting to the basic renderer.
  2016-05-03 12:25:04,867 - util.py[DEBUG]: Reading from /etc/cloud/cloud.cfg (quiet=False)
  2016-05-03 12:25:04,867 - util.py[DEBUG]: Read 3011 bytes from /etc/cloud/cloud.cfg
  2016-05-03 12:25:04,867 - util.py[DEBUG]: Attempting to load yaml from string of length 3011 with allowed root types (,)
  2016-05-03 12:25:04,887 - util.py[DEBUG]: Reading from /etc/cloud/cloud.cfg.d/90_dpkg.cfg (quiet=False)
  2016-05-03 12:25:04,887 - util.py[DEBUG]: Read 197 bytes from /etc/cloud/cloud.cfg.d/90_dpkg.cfg
  2016-05-03 12:25:04,887 - util.py[DEBUG]: Attempting to load yaml from string of length 197 with allowed root types (,)
  2016-05-03 12:25:04,889 - util.py[DEBUG]: Reading from /etc/cloud/cloud.cfg.d/05_logging.cfg (quiet=False)
  2016-05-03 12:25:04,890 - util.py[DEBUG]: Read 1910 bytes from /etc/cloud/cloud.cfg.d/05_logging.cfg
  2016-05-03 12:25:04,890 - util.py[DEBUG]: Attempting to load yaml from string of length 1910 with allowed root types (,)
  2016-05-03 12:25:04,897 - cloud-init[DEBUG]: Closing stdin
  2016-05-03 12:25:04,898 - util.py[DEBUG]: Redirecting <_io.TextIOWrapper name='' mode='w' encoding='UTF-8'> to | tee -a /var/log/cloud-init-output.log
  2016-05-03 12:25:04,899 - util.py[DEBUG]: Redirecting <_io.TextIOWrapper name='' mode='w' encoding='UTF-8'> to | tee -a /var/log/cloud-init-output.log
  2016-05-03 12:25:04,900 - cloud-init[DEBUG]: Logging being reset, this logger may no longer be active shortly
  Cloud-init v. 0.7.7 running 'init' at Tue, 03 May 2016 12:25:04 +0000. Up 4339.24 seconds.
  ci-info: +++++++++++++++++++++++++++++Net device info+++++++++++++++++++++++++++++
  ci-info: +--------+------+------------------------------+---------------+-------+-------------------+
  ci-info: | Device |  Up  |           Address            |      Mask     | Scope |     Hw-Address    |
  ci-info: +--------+------+------------------------------+---------------+-------+-------------------+
  ci-info: |   lo   | True |          127.0.0.1           |   255.0.0.0   |   .   |         .         |
  ci-info: |   lo   | True |           ::1/128            |       .       |  host |         .         |
  ci-info: | ens32  | True |         192.168.0.15         | 255.255.255.0 |   .   | fa:16:3e:40:92:3f |
  ci-info: | ens32  | True | fe80::f816:3eff:fe40:923f/64 |       .       |  link | fa:16:3e:40:92:3f |
  ci-info: +--------+------+------------------------------+---------------+-------+-------------------+

[Yahoo-eng-team] [Bug 1650466] Re: Remove iptables nat and mangle rules for security group

2017-01-04 Thread OpenStack Infra
Reviewed:  https://review.openstack.org/411699
Committed: 
https://git.openstack.org/cgit/openstack/neutron/commit/?id=22352f5d4c595e373bb73c8bc590e6d3e621dac0
Submitter: Jenkins
Branch:master

commit 22352f5d4c595e373bb73c8bc590e6d3e621dac0
Author: Jesse 
Date:   Fri Dec 16 15:13:13 2016 +0800

Remove iptables nat and mangle rules for security group

There is no need to add iptables nat and mangle rules for security
group, these rules will slow down network performance especially
when using 6wind Virtual Accelerator.

Change-Id: I1d5748394665535d114e8d942a68d5bd43927058
Closes-Bug: #1650466


** Changed in: neutron
   Status: In Progress => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1650466

Title:
  Remove iptables nat and mangle rules for security group

Status in neutron:
  Fix Released

Bug description:
  It seems there is no need to add iptables nat and mangle rules for
  security groups; these rules slow down network performance, especially
  when using 6wind Virtual Accelerator.

  When we enable security groups, the OVSHybridIptablesFirewallDriver or
  IptablesFirewallDriver sets rules in the iptables nat and mangle tables.

  These rules are useless to security groups and consume CPU. When using
  6wind Virtual Accelerator on compute nodes, the rules in the nat and
  mangle tables dramatically slow down network performance, so we can
  remove them.

  The rules in iptables nat:
  [root@node-4 ~]# iptables -t nat -nvL
  Chain PREROUTING (policy ACCEPT 42 packets, 2520 bytes)
   pkts bytes target                       prot opt in  out  source     destination
     42  2520 neutron-openvswi-PREROUTING  all  --  *   *    0.0.0.0/0  0.0.0.0/0

  Chain INPUT (policy ACCEPT 42 packets, 2520 bytes)
   pkts bytes target                       prot opt in  out  source     destination

  Chain OUTPUT (policy ACCEPT 3 packets, 180 bytes)
   pkts bytes target                       prot opt in  out  source     destination
      3   180 neutron-openvswi-OUTPUT      all  --  *   *    0.0.0.0/0  0.0.0.0/0

  Chain POSTROUTING (policy ACCEPT 3 packets, 180 bytes)
   pkts bytes target                       prot opt in  out  source     destination
      3   180 neutron-openvswi-POSTROUTING all  --  *   *    0.0.0.0/0  0.0.0.0/0
      3   180 neutron-postrouting-bottom   all  --  *   *    0.0.0.0/0  0.0.0.0/0

  Chain neutron-openvswi-OUTPUT (1 references)
   pkts bytes target                       prot opt in  out  source     destination

  Chain neutron-openvswi-POSTROUTING (1 references)
   pkts bytes target                       prot opt in  out  source     destination

  Chain neutron-openvswi-PREROUTING (1 references)
   pkts bytes target                       prot opt in  out  source     destination

  Chain neutron-openvswi-float-snat (1 references)
   pkts bytes target                       prot opt in  out  source     destination

  Chain neutron-openvswi-snat (1 references)
   pkts bytes target                       prot opt in  out  source     destination
      3   180 neutron-openvswi-float-snat  all  --  *   *    0.0.0.0/0  0.0.0.0/0

  Chain neutron-postrouting-bottom (1 references)
   pkts bytes target                       prot opt in  out  source     destination
      3   180 neutron-openvswi-snat        all  --  *   *    0.0.0.0/0  0.0.0.0/0  /* Perform source NAT on outgoing traffic. */

  The rules in mangle table:
  [root@node-4 ~]# iptables -t mangle -nvL
  Chain PREROUTING (policy ACCEPT 10485 packets, 1130K bytes)
   pkts bytes target                       prot opt in  out  source     destination
  10485 1130K neutron-openvswi-PREROUTING  all  --  *   *    0.0.0.0/0  0.0.0.0/0

  Chain INPUT (policy ACCEPT 10473 packets, 1127K bytes)
   pkts bytes target                       prot opt in  out  source     destination
  10473 1127K neutron-openvswi-INPUT       all  --  *   *    0.0.0.0/0  0.0.0.0/0

  Chain FORWARD (policy ACCEPT 0 packets, 0 bytes)
   pkts bytes target                       prot opt in  out  source     destination
      0     0 neutron-openvswi-FORWARD     all  --  *   *    0.0.0.0/0  0.0.0.0/0

  Chain OUTPUT (policy ACCEPT 11083 packets, 1416K bytes)
   pkts bytes target                       prot opt in  out  source     destination
  11083 1416K neutron-openvswi-OUTPUT      all  --  *   *    0.0.0.0/0  0.0.0.0/0

  Chain POSTROUTING (policy ACCEPT 11083 packets, 1416K bytes)
   pkts bytes target                       prot opt in

[Yahoo-eng-team] [Bug 1651678] Re: boot server request randomly hanging at n-cpu side, and didn't get to Ironic

2017-01-04 Thread OpenStack Infra
Reviewed:  https://review.openstack.org/414214
Committed: 
https://git.openstack.org/cgit/openstack/nova/commit/?id=3c217acb9c55d647ca362320d697e80d7cfa5ceb
Submitter: Jenkins
Branch:master

commit 3c217acb9c55d647ca362320d697e80d7cfa5ceb
Author: Jay Pipes 
Date:   Thu Dec 22 11:09:15 2016 -0500

placement: Do not save 0-valued inventory

Ironic nodes that are not available or operable have 0 values for vcpus,
memory_mb, and local_gb in the returned dict from the Ironic virt driver's
get_available_resource() call. Don't try to save these 0 values in the
placement API inventory records, since the placement REST API will return an
error. Instead, attempt to delete any inventory records for that Ironic node
resource provider by PUT'ing an empty set of inventory records to the 
placement
API.

Closes-bug: #1651678

Change-Id: I10b22606f704abcb970939fb2cd77f026d4d6322

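The "PUT an empty set of inventory records" step looks roughly like the
following against the placement REST API (payload shape as I understand
the API; treat it as illustrative):

  generation = 42  # the provider's current generation, read beforehand
  payload = {
      'resource_provider_generation': generation,
      'inventories': {},  # replacing with an empty set removes them all
  }
  # PUT /placement/resource_providers/<rp_uuid>/inventories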

** Changed in: nova
   Status: In Progress => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1651678

Title:
  boot server request randomly hanging at n-cpu side, and didn't get to
  Ironic

Status in Ironic:
  Invalid
Status in OpenStack Compute (nova):
  Fix Released

Bug description:
  Ironic gate jobs have been randomly timing out during the last few weeks:

  
  An example is: 
http://logs.openstack.org/46/327046/36/check/gate-tempest-dsvm-ironic-ipa-partition-pxe_ipmitool-tinyipa-ubuntu-xenial/48db3ea/console.html

  2016-12-20 23:30:24.418214 | Traceback (most recent call last):
  2016-12-20 23:30:24.418231 |   File "tempest/test.py", line 99, in wrapper
  2016-12-20 23:30:24.418248 |     return f(self, *func_args, **func_kwargs)
  2016-12-20 23:30:24.418296 |   File "/opt/stack/new/tempest/.tox/tempest/local/lib/python2.7/site-packages/ironic_tempest_plugin/tests/scenario/test_baremetal_basic_ops.py", line 111, in test_baremetal_server_ops
  2016-12-20 23:30:24.418316 |     self.instance, self.node = self.boot_instance()
  2016-12-20 23:30:24.418361 |   File "/opt/stack/new/tempest/.tox/tempest/local/lib/python2.7/site-packages/ironic_tempest_plugin/tests/scenario/baremetal_manager.py", line 173, in boot_instance
  2016-12-20 23:30:24.418375 |     self.wait_node(instance['id'])
  2016-12-20 23:30:24.418417 |   File "/opt/stack/new/tempest/.tox/tempest/local/lib/python2.7/site-packages/ironic_tempest_plugin/tests/scenario/baremetal_manager.py", line 117, in wait_node
  2016-12-20 23:30:24.418441 |     raise lib_exc.TimeoutException(msg)
  2016-12-20 23:30:24.418464 | tempest.lib.exceptions.TimeoutException: Request timed out
  2016-12-20 23:30:24.418494 | Details: Timed out waiting to get Ironic node by instance id 50e23a00-5b92-49b7-8dd0-5b8715ba7e26

  Nova compute seems stuck at "_do_build_and_run_instance
  /opt/stack/new/nova/nova/compute/manager.py:1754"

  2016-12-21 13:24:24.307 21735 DEBUG oslo_messaging._drivers.amqpdriver [-] received message with unique_id: 3b9dab54da604a8cadc6c854588a1a5d __call__ /usr/local/lib/python2.7/dist-packages/oslo_messaging/_drivers/amqpdriver.py:196
  2016-12-21 13:24:24.312 21735 DEBUG oslo_concurrency.lockutils [req-7b291e0c-c5b3-4a8a-b4db-e7cef3150b03 tempest-BaremetalBasicOps-1775111554 tempest-BaremetalBasicOps-1775111554] Lock "6376a75b-2970-42f5-9f1b-b34db22a23e4" acquired by "nova.compute.manager._locked_do_build_and_run_instance" :: waited 0.000s inner /usr/local/lib/python2.7/dist-packages/oslo_concurrency/lockutils.py:270
  2016-12-21 13:24:24.313 21735 DEBUG oslo_messaging._drivers.amqpdriver [req-7b291e0c-c5b3-4a8a-b4db-e7cef3150b03 tempest-BaremetalBasicOps-1775111554 tempest-BaremetalBasicOps-1775111554] CALL msg_id: 92cc73436d164feab727c5b7c81ec179 exchange 'nova' topic 'conductor' _send /usr/local/lib/python2.7/dist-packages/oslo_messaging/_drivers/amqpdriver.py:442
  2016-12-21 13:24:24.326 21735 DEBUG oslo_messaging._drivers.amqpdriver [-] received reply msg_id: 92cc73436d164feab727c5b7c81ec179 __call__ /usr/local/lib/python2.7/dist-packages/oslo_messaging/_drivers/amqpdriver.py:299
  2016-12-21 13:24:24.327 21735 DEBUG nova.compute.manager [req-7b291e0c-c5b3-4a8a-b4db-e7cef3150b03 tempest-BaremetalBasicOps-1775111554 tempest-BaremetalBasicOps-1775111554] [instance: 6376a75b-2970-42f5-9f1b-b34db22a23e4] Starting instance... _do_build_and_run_instance /opt/stack/new/nova/nova/compute/manager.py:1754
  2016-12-21 13:24:24.330 21735 DEBUG oslo_messaging._drivers.amqpdriver [req-7b291e0c-c5b3-4a8a-b4db-e7cef3150b03 tempest-BaremetalBasicOps-1775111554 tempest-BaremetalBasicOps-1775111554] CALL msg_id: 15898ce761a143c690ea51c6af5d4f23 exchange 'nova' topic 'conductor' _send /usr/local/lib/python2.7/dist-packages/oslo_messaging/_drivers/amqpdriver.py:442
  2016-12-21 13:24:24.367 21735 DEBUG nova.compute.resource_tracker [req

[Yahoo-eng-team] [Bug 1653986] [NEW] Many views are using identical table templates

2017-01-04 Thread Rob Cresswell
Public bug reported:

Many of our table views are using:

{% extends 'base.html' %}
{% block title %}{{ page_title }}{% endblock %}
{% block main %}{{ table.render }}{% endblock %}

as a template. We should make a common template and remove these.

** Affects: horizon
 Importance: Wishlist
 Assignee: Rob Cresswell (robcresswell)
 Status: New

** Changed in: horizon
 Assignee: (unassigned) => Rob Cresswell (robcresswell)

** Changed in: horizon
Milestone: None => ocata-3

** Changed in: horizon
   Importance: Undecided => Wishlist

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1653986

Title:
  Many views are using identical table templates

Status in OpenStack Dashboard (Horizon):
  New

Bug description:
  Many of our table views are using:

  {% extends 'base.html' %}
  {% block title %}{{ page_title }}{% endblock %}
  {% block main %}{{ table.render }}{% endblock %}

  as a template. We should make a common template and remove these.

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1653986/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1573073] Re: [SRU] When router has no ports _process_updated_router fails because the namespace does not exist

2017-01-04 Thread James Page
This bug was fixed in the package neutron - 2:7.2.0-0ubuntu1~cloud1
---

 neutron (2:7.2.0-0ubuntu1~cloud1) trusty-liberty; urgency=medium
 .
   * Fix router namespace cleanup (LP: #1573073)
 - d/p/ns-exists-before-get-devices.patch


** Changed in: cloud-archive/liberty
   Status: Fix Committed => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1573073

Title:
  [SRU] When router has no ports _process_updated_router fails because
  the namespace does not exist

Status in Ubuntu Cloud Archive:
  Fix Committed
Status in Ubuntu Cloud Archive liberty series:
  Fix Released
Status in Ubuntu Cloud Archive mitaka series:
  Fix Committed
Status in Ubuntu Cloud Archive newton series:
  Fix Committed
Status in neutron:
  In Progress
Status in neutron package in Ubuntu:
  Fix Released
Status in neutron source package in Xenial:
  Fix Committed
Status in neutron source package in Yakkety:
  Fix Committed
Status in neutron source package in Zesty:
  Fix Released

Bug description:
  [Description]
  [Testcase]
  Happens in Kilo. Cannot test on other releases.

  Steps to reproduce:

  1) create a router and set at least a port, also the gateway is fine
  2) check that the namespace exists with
     ip netns show | grep qrouter-
  3) check the ports are there
     ip netns exec qrouter- ip addr show
  4) delete all ports from the router
  5) check that only loopback interface is present
     ip netns exec qrouter- ip addr show
  6) run the cronjob task that is installed in the file
     /etc/cron.d/neutron-l3-agent-netns-cleanup
  so basically run this command:
     /usr/bin/neutron-netns-cleanup --config-file=/etc/neutron/neutron.conf --config-file=/etc/neutron/l3_agent.ini
  7) the namespace should be gone:
     ip netns show | grep qrouter-
  8) delete the neutron router.
  9) check log file /var/log/neutron/vpn-agent.log

  When the router has no ports, the namespace is deleted from the network
  node by the cronjob. However, this breaks the router updates, and the
  vpn-agent.log file is flooded with these traces:

  2016-04-21 16:22:17.771 23382 TRACE neutron.agent.l3.router_info Traceback (most recent call last):
  2016-04-21 16:22:17.771 23382 TRACE neutron.agent.l3.router_info   File "/usr/lib/python2.7/dist-packages/neutron/common/utils.py", line 343, in call
  2016-04-21 16:22:17.771 23382 TRACE neutron.agent.l3.router_info     return func(*args, **kwargs)
  2016-04-21 16:22:17.771 23382 TRACE neutron.agent.l3.router_info   File "/usr/lib/python2.7/dist-packages/neutron/agent/l3/router_info.py", line 628, in process
  2016-04-21 16:22:17.771 23382 TRACE neutron.agent.l3.router_info     self._process_internal_ports()
  2016-04-21 16:22:17.771 23382 TRACE neutron.agent.l3.router_info   File "/usr/lib/python2.7/dist-packages/neutron/agent/l3/router_info.py", line 404, in _process_internal_ports
  2016-04-21 16:22:17.771 23382 TRACE neutron.agent.l3.router_info     existing_devices = self._get_existing_devices()
  2016-04-21 16:22:17.771 23382 TRACE neutron.agent.l3.router_info   File "/usr/lib/python2.7/dist-packages/neutron/agent/l3/router_info.py", line 328, in _get_existing_devices
  2016-04-21 16:22:17.771 23382 TRACE neutron.agent.l3.router_info     ip_devs = ip_wrapper.get_devices(exclude_loopback=True)
  2016-04-21 16:22:17.771 23382 TRACE neutron.agent.l3.router_info   File "/usr/lib/python2.7/dist-packages/neutron/agent/linux/ip_lib.py", line 102, in get_devices
  2016-04-21 16:22:17.771 23382 TRACE neutron.agent.l3.router_info     log_fail_as_error=self.log_fail_as_error
  2016-04-21 16:22:17.771 23382 TRACE neutron.agent.l3.router_info   File "/usr/lib/python2.7/dist-packages/neutron/agent/linux/utils.py", line 137, in execute
  2016-04-21 16:22:17.771 23382 TRACE neutron.agent.l3.router_info     raise RuntimeError(m)
  2016-04-21 16:22:17.771 23382 TRACE neutron.agent.l3.router_info RuntimeError:
  2016-04-21 16:22:17.771 23382 TRACE neutron.agent.l3.router_info Command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'qrouter-8fc0f640-35bb-4d0b-bbbd-80c22be0e762', 'find', '/sys/class/net', '-maxdepth', '1', '-type', 'l', '-printf', '%f ']
  2016-04-21 16:22:17.771 23382 TRACE neutron.agent.l3.router_info Exit code: 1
  2016-04-21 16:22:17.771 23382 TRACE neutron.agent.l3.router_info Stdin:
  2016-04-21 16:22:17.771 23382 TRACE neutron.agent.l3.router_info Stdout:
  2016-04-21 16:22:17.771 23382 TRACE neutron.agent.l3.router_info Stderr: Cannot open network namespace "qrouter-8fc0f640-35bb-4d0b-bbbd-80c22be0e762": No such file or directory
  2016-04-21 16:22:17.771 23382 TRACE neutron.agent.l3.router_info
  2016-04-21 16:22:17.771 23382 TRACE neutron.agent.l3.router_info
  2016-04-21 16:22:17.774 23382 ERROR neutron.agent.l3.agent [-] Failed to process compatible router '8fc0f640-35bb-4d0b-bbbd-80c22be0e
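
  The patch name above (ns-exists-before-get-devices.patch) suggests the
  shape of the fix; a hedged sketch of the idea, not neutron's exact code:

    from neutron.agent.linux import ip_lib

    def get_existing_devices(namespace):
        ip_wrapper = ip_lib.IPWrapper(namespace=namespace)
        # The cron cleanup may already have removed the namespace, in
        # which case listing devices raises RuntimeError as traced above.
        if not ip_wrapper.netns.exists(namespace):
            return []
        return ip_wrapper.get_devices(exclude_loopback=True)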

[Yahoo-eng-team] [Bug 1653975] [NEW] heatclient spam in test runs

2017-01-04 Thread Rob Cresswell
Public bug reported:

Current test output is full of

WARNING:heatclient.common.base:Two objects are equal when all of the
attributes are equal, if you want to identify whether two objects are
same one with same id, please use is_same_obj() function.

This is identical to https://bugs.launchpad.net/horizon/+bug/1536892.
The solution there no longer works, I believe because of a path change.

We need to update
https://github.com/openstack/horizon/blob/master/openstack_dashboard/test/test_data/heat_data.py#L24
or get heat to fix their client.

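Until either fix lands, one test-suite workaround is to raise the level
of that logger (logger name taken from the warning prefix above; sketch,
not the eventual fix):

  import logging

  # Silence the noisy equality warning from heatclient.common.base.
  logging.getLogger('heatclient.common.base').setLevel(logging.ERROR)
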
** Affects: heat
 Importance: Undecided
 Status: New

** Affects: horizon
 Importance: High
 Assignee: Rob Cresswell (robcresswell)
 Status: New

** Also affects: heat
   Importance: Undecided
   Status: New

** Changed in: horizon
 Assignee: (unassigned) => Rob Cresswell (robcresswell)

** Changed in: horizon
Milestone: None => ocata-3

** Changed in: horizon
   Importance: Undecided => High

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1653975

Title:
  heatclient spam in test runs

Status in heat:
  New
Status in OpenStack Dashboard (Horizon):
  New

Bug description:
  Current test output is full of

  WARNING:heatclient.common.base:Two objects are equal when all of the
  attributes are equal, if you want to identify whether two objects are
  same one with same id, please use is_same_obj() function.

  This is identical to https://bugs.launchpad.net/horizon/+bug/1536892.
  The solution there no longer works, I believe because of a path change.

  We need to update
  
https://github.com/openstack/horizon/blob/master/openstack_dashboard/test/test_data/heat_data.py#L24
  or get heat to fix their client.

To manage notifications about this bug go to:
https://bugs.launchpad.net/heat/+bug/1653975/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1653430] Re: Launch Instance vm Starting up

2017-01-04 Thread Lee Yarwood
Unfortunately this isn't a valid or supported method for deploying
nova-compute.

That said, are you sure virt_type is qemu and not kvm?

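For reference, the setting in question lives in nova.conf; the TCG
warnings in the attached log are what plain qemu emulation produces,
whereas kvm uses hardware acceleration (illustrative snippet, not an
endorsement of this unsupported deployment):

  [libvirt]
  virt_type = kvm
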
** Changed in: nova
   Status: New => Invalid

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1653430

Title:
   Launch Instance vm Starting up

Status in OpenStack Compute (nova):
  Invalid

Bug description:
  [root@controller ~]# virsh list
   Id    Name                           State
  ----------------------------------------------------
   10    instance-001a                  running



  [root@controller ~]# cat /var/log/libvirt/qemu/instance-001a.log
  2017-01-01 15:27:19.429+0000: starting up libvirt version: 2.0.0, package: 10.el7_3.2 (CentOS BuildSystem , 2016-12-06-19:53:38, c1bm.rdu2.centos.org), qemu version: 2.6.0 (qemu-kvm-ev-2.6.0-27.1.el7), hostname: controller
  LC_ALL=C PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/root/bin HOME=/root USER=root LOGNAME=root QEMU_AUDIO_DRV=none /usr/libexec/qemu-kvm -name guest=instance-001a,debug-threads=on -S -object secret,id=masterKey0,format=raw,file=/var/lib/libvirt/qemu/domain-10-instance-001a/master-key.aes -machine pc-i440fx-rhel7.3.0,accel=tcg,usb=off -cpu Haswell-noTSX,+vme,+ds,+ss,+ht,+osxsave,+f16c,+rdrand,+hypervisor,+arat,+tsc_adjust,+xsaveopt,+pdpe1gb,+abm -m 64 -realtime mlock=off -smp 1,sockets=1,cores=1,threads=1 -uuid 8cbaa40d-c061-47fc-83df-698f2455c7d9 -smbios 'type=1,manufacturer=RDO,product=OpenStack Compute,version=14.0.2-1.el7,serial=4803ff26-9107-4c8c-b9f9-83cda5553350,uuid=8cbaa40d-c061-47fc-83df-698f2455c7d9,family=Virtual Machine' -no-user-config -nodefaults -chardev socket,id=charmonitor,path=/var/lib/libvirt/qemu/domain-10-instance-001a/monitor.sock,server,nowait -mon chardev=charmonitor,id=monitor,mode=control -rtc base=utc -no-shutdown -boot strict=on -device piix3-usb-uhci,id=usb,bus=pci.0,addr=0x1.0x2 -drive file=/var/lib/nova/instances/8cbaa40d-c061-47fc-83df-698f2455c7d9/disk,format=qcow2,if=none,id=drive-virtio-disk0,cache=none -device virtio-blk-pci,scsi=off,bus=pci.0,addr=0x4,drive=drive-virtio-disk0,id=virtio-disk0,bootindex=1 -netdev tap,fd=26,id=hostnet0 -device virtio-net-pci,netdev=hostnet0,id=net0,mac=fa:16:3e:3e:38:f9,bus=pci.0,addr=0x3 -add-fd set=1,fd=29 -chardev file,id=charserial0,path=/dev/fdset/1,append=on -device isa-serial,chardev=charserial0,id=serial0 -chardev pty,id=charserial1 -device isa-serial,chardev=charserial1,id=serial1 -device usb-tablet,id=input0,bus=usb.0,port=1 -vnc 10.40.1.70:0 -k en-us -device cirrus-vga,id=video0,bus=pci.0,addr=0x2 -device virtio-balloon-pci,id=balloon0,bus=pci.0,addr=0x5 -msg timestamp=on
  char device redirected to /dev/pts/0 (label charserial1)
  warning: TCG doesn't support requested feature: CPUID.01H:EDX.vme [bit 1]
  warning: TCG doesn't support requested feature: CPUID.01H:EDX.ds [bit 21]
  warning: TCG doesn't support requested feature: CPUID.01H:EDX.ht [bit 28]
  warning: TCG doesn't support requested feature: CPUID.01H:ECX.fma [bit 12]
  warning: TCG doesn't support requested feature: CPUID.01H:ECX.pcid [bit 17]
  warning: TCG doesn't support requested feature: CPUID.01H:ECX.x2apic [bit 21]
  warning: TCG doesn't support requested feature: CPUID.01H:ECX.tsc-deadline [bit 24]
  warning: TCG doesn't support requested feature: CPUID.01H:ECX.osxsave [bit 27]
  warning: TCG doesn't support requested feature: CPUID.01H:ECX.avx [bit 28]
  warning: TCG doesn't support requested feature: CPUID.01H:ECX.f16c [bit 29]
  warning: TCG doesn't support requested feature: CPUID.01H:ECX.rdrand [bit 30]
  warning: TCG doesn't support requested feature: CPUID.07H:EBX.tsc_adjust [bit 1]
  warning: TCG doesn't support requested feature: CPUID.07H:EBX.avx2 [bit 5]
  warning: TCG doesn't support requested feature: CPUID.07H:EBX.erms [bit 9]
  warning: TCG doesn't support requested feature: CPUID.07H:EBX.invpcid [bit 10]


  
  error:
  Instance status: Starting up ...

  nova uses qemu

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1653430/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1653967] Re: nova (newton) raises ConfigFileValueError for urls with dashess

2017-01-04 Thread George Shuklin
I found the source of the bug: python-rfc3986 is to blame (it is used by
oslo.config). Version 0.2.0-2 contains a bug which violates RFC 3986; it
was fixed in 0.2.2. The version of python-rfc3986 from zesty (0.3.1-2)
fixes this problem.

I believe this bug should be fixed by bumping the version of
python-rfc3986 in the UCA to 0.2.2 or higher.

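A quick way to confirm the library-level behaviour (hedged: oslo.config's
exact validation call may differ slightly, but rfc3986's public helper
shows the defect):

  import rfc3986

  for url in ('http://nodash.example.com:9696',
              'http://with-dash.example.com:9696'):
      print(url, rfc3986.is_valid_uri(url, require_scheme=True))
  # With python-rfc3986 0.2.0 the dashed host is wrongly reported
  # invalid; 0.2.2 and later accept it.
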
** Also affects: python-rfc3986 (Ubuntu)
   Importance: Undecided
   Status: New

** Summary changed:

- nova (newton) raises ConfigFileValueError for urls with dashess
+ nova (newton) raises ConfigFileValueError for urls with dashes

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1653967

Title:
  nova (newton) raises ConfigFileValueError for urls with dashes

Status in OpenStack Compute (nova):
  New
Status in nova package in Ubuntu:
  New
Status in python-rfc3986 package in Ubuntu:
  New

Bug description:
  nova version: newton
  dpkg version: 2:14.0.1-0ubuntu1~cloud0
  distribution: nova @ xenial with ubuntu cloud archive, amd64.

  Nova fails with the exception "ConfigFileValueError: Value for option
  url is not valid: invalid URI" if the url parameter of the [neutron]
  section or the novncproxy_base_url parameter contains dashes in the URL.

  Steps to reproduce:

  Take a working OpenStack with nova+neutron.

  Put (in the [neutron] section) url = http://nodash.example.com:9696 - it
  works.

  Put url = http://with-dash.example.com:9696 - it fails with an exception:

  
  nova[18937]: TRACE Traceback (most recent call last):
  nova[18937]: TRACE   File "/usr/bin/nova-api-os-compute", line 10, in <module>
  nova[18937]: TRACE     sys.exit(main())
  nova[18937]: TRACE   File "/usr/lib/python2.7/dist-packages/nova/cmd/api_os_compute.py", line 51, in main
  nova[18937]: TRACE     service.wait()
  nova[18937]: TRACE   File "/usr/lib/python2.7/dist-packages/nova/service.py", line 415, in wait
  nova[18937]: TRACE     _launcher.wait()
  nova[18937]: TRACE   File "/usr/lib/python2.7/dist-packages/oslo_service/service.py", line 568, in wait
  nova[18937]: TRACE     self.conf.log_opt_values(LOG, logging.DEBUG)
  nova[18937]: TRACE   File "/usr/lib/python2.7/dist-packages/oslo_config/cfg.py", line 2626, in log_opt_values
  nova[18937]: TRACE     _sanitize(opt, getattr(group_attr, opt_name)))
  nova[18937]: TRACE   File "/usr/lib/python2.7/dist-packages/oslo_config/cfg.py", line 3057, in __getattr__
  nova[18937]: TRACE     return self._conf._get(name, self._group)
  nova[18937]: TRACE   File "/usr/lib/python2.7/dist-packages/oslo_config/cfg.py", line 2668, in _get
  nova[18937]: TRACE     value = self._do_get(name, group, namespace)
  nova[18937]: TRACE   File "/usr/lib/python2.7/dist-packages/oslo_config/cfg.py", line 2711, in _do_get
  nova[18937]: TRACE     % (opt.name, str(ve)))
  nova[18937]: TRACE ConfigFileValueError: Value for option url is not valid: invalid URI: 'http://with-dash.example.com:9696'.

  Expected behavior: do not crash.

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1653967/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1649887] Re: failures during stale subports removal not reflected in trunk status

2017-01-04 Thread OpenStack Infra
Reviewed:  https://review.openstack.org/410780
Committed: 
https://git.openstack.org/cgit/openstack/neutron/commit/?id=9bab88ba83a6f395cfec5d1351a1597c05fee607
Submitter: Jenkins
Branch:master

commit 9bab88ba83a6f395cfec5d1351a1597c05fee607
Author: Armando Migliaccio 
Date:   Wed Dec 14 05:12:32 2016 -0800

Account for unwire failures during OVS trunk rewiring operations

If a failure occurs while unwiring stale ports, we currently
ignore the outcome of the operation, whereas we should at least
warn the user that something is not quite as it is supposed
to be.

This patch makes sure that trunk status is accurately reflected
after a sequence of unwire+wire operations for a given trunk.

Closes-bug: #1649887

Change-Id: I3b6ed57e00c0146babe23ea6ed0ca14e83020d26


** Changed in: neutron
   Status: In Progress => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1649887

Title:
  failures during stale subports removal not reflected in trunk status

Status in neutron:
  Fix Released

Bug description:
  Noticed while looking at the latest code [1] for the OVS trunk driver:
  the status of the unwire operation is not accounted for.

  [1]
  
https://github.com/openstack/neutron/blob/master/neutron/services/trunk/drivers/openvswitch/agent/ovsdb_handler.py#L387

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1649887/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1653080] Re: Booting from image in ceph is considered to booting from volume

2017-01-04 Thread Lee Yarwood
*** This bug is a duplicate of bug 1587802 ***
https://bugs.launchpad.net/bugs/1587802

My apologies, I assumed we were talking about resizing up and I had to
re-read your description to see that this was actually about resizing
down. The following review moved _is_booted_from_volume to use
block_device_info and appears to correct this on master; I'll look into
a stable/newton backport shortly:

libvirt: Improve _is_booted_from_volume implementation
https://review.openstack.org/#/c/382024/

Again using my local env this now WORKSFORME when I attempt to resize
down:

$ sudo rbd -p vms ls -l
NAME                                       SIZE PARENT                                            FMT PROT LOCK
67e542f2-6a21-4363-818b-4ed58be529dd_disk 5120M images/ec027d6b-f677-40fd-b3c9-f0d30ef460de@snap    2
$ nova resize test-resize 1
$ grep ResizeError ../logs/n-cpu.log
2017-01-04 08:13:24.779 TRACE oslo_messaging.rpc.server ResizeError: Resize error: Unable to resize disk down.
$ sudo rbd -p vms ls -l
NAME                                       SIZE PARENT                                            FMT PROT LOCK
67e542f2-6a21-4363-818b-4ed58be529dd_disk 5120M images/ec027d6b-f677-40fd-b3c9-f0d30ef460de@snap    2


** Changed in: nova
   Status: Incomplete => Triaged

** Changed in: nova
   Importance: Undecided => Medium

** This bug has been marked a duplicate of bug 1587802
   libvirt resize down prevention is invalid when using rbd as backend

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1653080

Title:
  Booting from image in ceph is considered to booting from volume

Status in OpenStack Compute (nova):
  Triaged

Bug description:
  
  Openstack Mitaka
  glance backend: ceph
  nova backend: kvm+ceph
  cinder backend: ceph

  Steps to reproduce
  ==================
  1.create an instance booting from image with flavor m1.small;
  2.nova resize $instance_id m1.tiny;

  Actual result
  =============
  the instance resizes successfully, but the instance's root disk does not change

  Expected result
  ===============
  nova-api should raise a ResizeError exception.

  booted_from_volume = self._is_booted_from_volume(instance, disk_info_text)
  if (root_down and not booted_from_volume) or ephemeral_down:
      reason = _("Unable to resize disk down.")
      raise exception.InstanceFaultRollback(
          exception.ResizeError(reason=reason))

  I think the function "_is_booted_from_volume" is wrong:
  @staticmethod
  def _is_booted_from_volume(instance, disk_mapping):
      return ((not bool(instance.get('image_ref')))
              or 'disk' not in disk_mapping)
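
  A rough sketch of the direction later taken on master by review
  382024 (illustrative only, using plain dicts; not the exact patch):
  decide from block_device_info instead of from the presence of a
  'disk' entry in the libvirt disk mapping, which breaks for
  image-backed rbd disks:

  def is_booted_from_volume(block_device_info):
      """True only when the root device comes from a mapped volume."""
      bdi = block_device_info or {}
      root_device = bdi.get('root_device_name')
      for bdm in bdi.get('block_device_mapping', []):
          if bdm.get('mount_device') == root_device:
              return True
      return False

  bdi = {'root_device_name': '/dev/vda', 'block_device_mapping': []}
  print(is_booted_from_volume(bdi))  # False: image-backed root disk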

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1653080/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1653967] [NEW] nova (newton) raises ConfigFileValueError for urls with dashes

2017-01-04 Thread George Shuklin
Public bug reported:

nova version: newton
dpkg version: 2:14.0.1-0ubuntu1~cloud0
distribution: nova @ xenial with ubuntu cloud archive, amd64.

Nova fails with the exception "ConfigFileValueError: Value for option url
is not valid: invalid URI" if the url parameter of the [neutron] section
or the novncproxy_base_url parameter contains dashes in the URL.

Steps to reproduce:

Take a working openstack with nova+neutron.

Put url = http://nodash.example.com:9696 in the [neutron] section - it
works.

Put url = http://with-dash.example.com:9696 - it fails with the exception:


nova[18937]: TRACE Traceback (most recent call last):
nova[18937]: TRACE   File "/usr/bin/nova-api-os-compute", line 10, in <module>
nova[18937]: TRACE     sys.exit(main())
nova[18937]: TRACE   File "/usr/lib/python2.7/dist-packages/nova/cmd/api_os_compute.py", line 51, in main
nova[18937]: TRACE     service.wait()
nova[18937]: TRACE   File "/usr/lib/python2.7/dist-packages/nova/service.py", line 415, in wait
nova[18937]: TRACE     _launcher.wait()
nova[18937]: TRACE   File "/usr/lib/python2.7/dist-packages/oslo_service/service.py", line 568, in wait
nova[18937]: TRACE     self.conf.log_opt_values(LOG, logging.DEBUG)
nova[18937]: TRACE   File "/usr/lib/python2.7/dist-packages/oslo_config/cfg.py", line 2626, in log_opt_values
nova[18937]: TRACE     _sanitize(opt, getattr(group_attr, opt_name)))
nova[18937]: TRACE   File "/usr/lib/python2.7/dist-packages/oslo_config/cfg.py", line 3057, in __getattr__
nova[18937]: TRACE     return self._conf._get(name, self._group)
nova[18937]: TRACE   File "/usr/lib/python2.7/dist-packages/oslo_config/cfg.py", line 2668, in _get
nova[18937]: TRACE     value = self._do_get(name, group, namespace)
nova[18937]: TRACE   File "/usr/lib/python2.7/dist-packages/oslo_config/cfg.py", line 2711, in _do_get
nova[18937]: TRACE     % (opt.name, str(ve)))
nova[18937]: TRACE ConfigFileValueError: Value for option url is not valid: invalid URI: 'http://with-dash.example.com:9696'.

Expected behavior: do not crash.
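
If the validator really rejects hyphens in host labels (which the error
message suggests), the failure reduces to hostname validation that
forgets the hyphen. A minimal sketch of that failure mode (an assumption
about the validator's behaviour, not oslo.config's actual code):

import re

# A naive label pattern that omits the hyphen...
NAIVE_LABEL = re.compile(r'^[a-z0-9]+$')
# ...versus an RFC 1123-style label that allows interior hyphens.
RFC1123_LABEL = re.compile(r'^[a-z0-9]([a-z0-9-]*[a-z0-9])?$')

def valid_host(host, label_re):
    return all(label_re.match(label) for label in host.split('.'))

for host in ('nodash.example.com', 'with-dash.example.com'):
    print(host, valid_host(host, NAIVE_LABEL),
          valid_host(host, RFC1123_LABEL))
# with-dash.example.com fails the naive check but is a valid hostname.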

** Affects: nova
 Importance: Undecided
 Status: New

** Affects: nova (Ubuntu)
 Importance: Undecided
 Status: New

** Also affects: nova (Ubuntu)
   Importance: Undecided
   Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1653967

Title:
  nova (newton) raises ConfigFileValueError for urls with dashes

Status in OpenStack Compute (nova):
  New
Status in nova package in Ubuntu:
  New

Bug description:
  nova version: newton
  dpkg version: 2:14.0.1-0ubuntu1~cloud0
  distribution: nova @ xenial with ubuntu cloud archive, amd64.

  Nova fails with the exception "ConfigFileValueError: Value for option
  url is not valid: invalid URI" if the url parameter of the [neutron]
  section or the novncproxy_base_url parameter contains dashes in the URL.

  Steps to reproduce:

  Take a working openstack with nova+neutron.

  Put url = http://nodash.example.com:9696 in the [neutron] section - it
  works.

  Put url = http://with-dash.example.com:9696 - it fails with the exception:

  
  nova[18937]: TRACE Traceback (most recent call last):
  nova[18937]: TRACE   File "/usr/bin/nova-api-os-compute", line 10, in <module>
  nova[18937]: TRACE     sys.exit(main())
  nova[18937]: TRACE   File "/usr/lib/python2.7/dist-packages/nova/cmd/api_os_compute.py", line 51, in main
  nova[18937]: TRACE     service.wait()
  nova[18937]: TRACE   File "/usr/lib/python2.7/dist-packages/nova/service.py", line 415, in wait
  nova[18937]: TRACE     _launcher.wait()
  nova[18937]: TRACE   File "/usr/lib/python2.7/dist-packages/oslo_service/service.py", line 568, in wait
  nova[18937]: TRACE     self.conf.log_opt_values(LOG, logging.DEBUG)
  nova[18937]: TRACE   File "/usr/lib/python2.7/dist-packages/oslo_config/cfg.py", line 2626, in log_opt_values
  nova[18937]: TRACE     _sanitize(opt, getattr(group_attr, opt_name)))
  nova[18937]: TRACE   File "/usr/lib/python2.7/dist-packages/oslo_config/cfg.py", line 3057, in __getattr__
  nova[18937]: TRACE     return self._conf._get(name, self._group)
  nova[18937]: TRACE   File "/usr/lib/python2.7/dist-packages/oslo_config/cfg.py", line 2668, in _get
  nova[18937]: TRACE     value = self._do_get(name, group, namespace)
  nova[18937]: TRACE   File "/usr/lib/python2.7/dist-packages/oslo_config/cfg.py", line 2711, in _do_get
  nova[18937]: TRACE     % (opt.name, str(ve)))
  nova[18937]: TRACE ConfigFileValueError: Value for option url is not valid: invalid URI: 'http://with-dash.example.com:9696'.

  Expected behavior: do not crash.

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1653967/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1653792] Re: admin dashboard appearing when it shouldn't

2017-01-04 Thread OpenStack Infra
Reviewed:  https://review.openstack.org/416356
Committed: 
https://git.openstack.org/cgit/openstack/horizon/commit/?id=5bce9a02509d123dcf1c96e43562dd7ed80a0f05
Submitter: Jenkins
Branch: master

commit 5bce9a02509d123dcf1c96e43562dd7ed80a0f05
Author: David Lyle 
Date:   Tue Jan 3 14:30:43 2017 -0700

Fix single policy rule handling

With commit 43e9df85ab286ddee96e9cff97f551781baf70d1 the handling
of single policy rules was broken and always returned True for a
single rule.

One of the visible results is that the Admin Dashboard showed up
incorrectly for users that lacked permission to see it. Additionally,
panels in the Admin Dashboard were also visible.

This patch fixes single rule handling, and the visible effects.

Closes-Bug: #1653792
Change-Id: I0c8a0d7b230b6c6b7ee048af84646ca95daee340


** Changed in: horizon
   Status: In Progress => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1653792

Title:
  admin dashboard appearing when it shouldn't

Status in OpenStack Dashboard (Horizon):
  Fix Released

Bug description:
  With commit
  
https://github.com/openstack/horizon/commit/90f43f3356a889a54464a6ddad81a1ca2b9f6290
  the handling of single rules was broken and was always returning True.
  This resulted in the admin dashboard showing up incorrectly as well as
  panels showing up inappropriately.
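
  The failure pattern, reduced to a self-contained sketch (hypothetical,
  not Horizon's actual policy code):

  def evaluate(rule, user_roles):
      # Stand-in for the real policy engine.
      return rule in user_roles

  def check_broken(rules, user_roles):
      if len(rules) == 1:
          return True  # bug: a single rule is never evaluated
      return all(evaluate(r, user_roles) for r in rules)

  def check_fixed(rules, user_roles):
      return all(evaluate(r, user_roles) for r in rules)

  roles = {'member'}
  print(check_broken(['admin_required'], roles))  # True (wrong)
  print(check_fixed(['admin_required'], roles))   # False (correct)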

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1653792/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1653960] [NEW] Modal header should default to page_header

2017-01-04 Thread Rob Cresswell
Public bug reported:

Many modals have a modal_header value that is identical to page_header;
we could just default to page_header and remove the duplication.
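
A minimal sketch of the proposed defaulting (hypothetical and simplified
well beyond Horizon's template machinery):

def modal_header(context):
    # Fall back to page_header when no modal_header is supplied.
    return context.get('modal_header') or context.get('page_header')

print(modal_header({'page_header': 'Create Network'}))   # Create Network
print(modal_header({'page_header': 'Create Network',
                    'modal_header': 'Custom'}))          # Custom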

** Affects: horizon
 Importance: Wishlist
 Assignee: Rob Cresswell (robcresswell)
 Status: New

** Changed in: horizon
 Assignee: (unassigned) => Rob Cresswell (robcresswell)

** Changed in: horizon
   Importance: Undecided => Wishlist

** Changed in: horizon
Milestone: None => ocata-3

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1653960

Title:
  Modal header should default to page_header

Status in OpenStack Dashboard (Horizon):
  New

Bug description:
  Many modals have a modal_header value that is identical to
  page_header; we could just default to page_header and remove the
  duplication.

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1653960/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1520159] Re: HTTP response codes should be extracted to constants

2017-01-04 Thread Dinesh Bhor
** Also affects: masakari
   Importance: Undecided
   Status: New

** Changed in: masakari
 Assignee: (unassigned) => Dinesh Bhor (dinesh-bhor)

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to Glance.
https://bugs.launchpad.net/bugs/1520159

Title:
  HTTP response codes should be extracted to constants

Status in Glance:
  Fix Released
Status in masakari:
  New

Bug description:
  There are several places in the source code where HTTP response codes
  are used as numeric values. These values should be extracted to a
  common file and the numeric values should be replaced by constants.

  For example:
  common/auth.py:186
elif resp.status == 404: --> elif resp.status == HTTP_NOT_FOUND:
  api/middleware/cache.py:261
if method == 'GET' and status_code == 204: --> if method == 'GET' and 
status_code == HTTP_NO_CONTENT:
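
  One possible shape for the cleanup, sketched with stdlib constants
  (Python 3's http.client; six.moves.http_client offers the same names
  on Python 2):

  from http.client import NO_CONTENT, NOT_FOUND

  status = 404
  if status == NOT_FOUND:
      print('not found')

  status_code = 204
  if status_code == NO_CONTENT:
      print('no content')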

To manage notifications about this bug go to:
https://bugs.launchpad.net/glance/+bug/1520159/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1653953] [NEW] Unable to remove snapshots after an instance is unshelved when using the rbd imagebackend

2017-01-04 Thread Lee Yarwood
Public bug reported:

Description
===========

I'm not entirely convinced that this is a bug but wanted to document and
discuss this upstream.

When using the rbd imagebackend, snapshots used to shelve an instance
cannot be removed after unshelving as they are cloned and as a result
are now the parents of the recreated instance disks.

This is in line with the behaviour of the imagebackend when initially
spawning an instance from an image but has caused confusion for
operators downstream who assume that the snapshot can be removed once
the instance has been unshelved.

We could flatten the instance disk when spawning during an unshelve but
to do so would mean extending the imagebackend to handle yet another
corner case for rbd.
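
For reference, the manual flatten shown below can also be driven through
the ceph python bindings; a sketch assuming python-rados/python-rbd are
installed, reusing the pool and disk name from the reproduction:

import rados
import rbd

cluster = rados.Rados(conffile='/etc/ceph/ceph.conf')
cluster.connect()
try:
    ioctx = cluster.open_ioctx('vms')
    try:
        disk = '4c843671-879d-4ba6-b4e8-8eefdced5393_disk'
        with rbd.Image(ioctx, disk) as image:
            # Copy all data up from the parent snapshot so the glance
            # image can be unprotected and deleted afterwards.
            image.flatten()
    finally:
        ioctx.close()
finally:
    cluster.shutdown()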

Steps to reproduce
==================

$ nova boot --image cirros-raw --flavor 1 test-shelve
[..]
$ nova shelve test-shelve
[..]
$ nova unshelve test-shelve
[..]
$ sudo rbd -p vms ls -l
NAME                                       SIZE PARENT                                           FMT PROT LOCK
4c843671-879d-4ba6-b4e8-8eefdced5393_disk 1024M images/df96af36-5a97-4f47-a79f-f3f3c85a21d9@snap   2
$ glance image-delete df96af36-5a97-4f47-a79f-f3f3c85a21d9
Unable to delete image 'df96af36-5a97-4f47-a79f-f3f3c85a21d9' because it is in use.

We can easily work around this by manually flattening the instance disk:

$ nova stop test-shelve
$ sudo rbd -p vms flatten 4c843671-879d-4ba6-b4e8-8eefdced5393_disk
Image flatten: 100% complete...done.
$ nova start test-shelve
$ glance image-delete df96af36-5a97-4f47-a79f-f3f3c85a21d9

Expected result
===============
Able to remove the shelved snapshot from Glance after unshelve.

Actual result
=============
Unable to remove the shelved snapshot from Glance after unshelve.

Environment
===========
1. Exact version of OpenStack you are running. See the following
   list for all releases: http://docs.openstack.org/releases/

   $ pwd
   /opt/stack/nova
   $ git rev-parse HEAD
   d768bfa2c2fb774154a5268f58b28537f7b39f69
   
2. Which hypervisor did you use?
   (For example: Libvirt + KVM, Libvirt + XEN, Hyper-V, PowerKVM, ...)
   What's the version of that?

   libvirt + kvm

3. Which storage type did you use?
   (For example: Ceph, LVM, GPFS, ...)
   What's the version of that?

   ceph

4. Which networking type did you use?
   (For example: nova-network, Neutron with OpenVSwitch, ...)

   N/A

Logs & Configs
==============

** Affects: nova
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1653953

Title:
  Unable to remove snapshots after an instance is unshelved when using
  the rbd imagebackend

Status in OpenStack Compute (nova):
  New

Bug description:
  Description
  ===========

  I'm not entirely convinced that this is a bug but wanted to document
  and discuss this upstream.

  When using the rbd imagebackend, snapshots used to shelve an instance
  cannot be removed after unshelving as they are cloned and as a result
  are now the parents of the recreated instance disks.

  This is in line with the behaviour of the imagebackend when initially
  spawning an instance from an image but has caused confusion for
  operators downstream who assume that the snapshot can be removed once
  the instance has been unshelved.

  We could flatten the instance disk when spawning during an unshelve
  but to do so would mean extending the imagebackend to handle yet
  another corner case for rbd.

  Steps to reproduce
  ==================

  $ nova boot --image cirros-raw --flavor 1 test-shelve
  [..]
  $ nova shelve test-shelve
  [..]
  $ nova unshelve test-shelve
  [..]
  $ sudo rbd -p vms ls -l
  NAME                                       SIZE PARENT                                           FMT PROT LOCK
  4c843671-879d-4ba6-b4e8-8eefdced5393_disk 1024M images/df96af36-5a97-4f47-a79f-f3f3c85a21d9@snap   2
  $ glance image-delete df96af36-5a97-4f47-a79f-f3f3c85a21d9
  Unable to delete image 'df96af36-5a97-4f47-a79f-f3f3c85a21d9' because it is in use.

  We can easily work around this by manually flattening the instance disk:

  $ nova stop test-shelve
  $ sudo rbd -p vms flatten 4c843671-879d-4ba6-b4e8-8eefdced5393_disk
  Image flatten: 100% complete...done.
  $ nova start test-shelve
  $ glance image-delete df96af36-5a97-4f47-a79f-f3f3c85a21d9

  Expected result
  ===============
  Able to remove the shelved snapshot from Glance after unshelve.

  Actual result
  =============
  Unable to remove the shelved snapshot from Glance after unshelve.

  Environment
  ===========
  1. Exact version of OpenStack you are running. See the following
 list for all releases: http://docs.openstack.org/releases/

 $ pwd
 /opt/stack/nova
 $ git rev-parse HEAD
 d768bfa2c2fb774154a5268f58b28537f7b39f69
 
  2. Which hypervisor did you use?
 (For example: Libvirt + KVM,

[Yahoo-eng-team] [Bug 1653932] [NEW] network router:external field not exported

2017-01-04 Thread Maurice Schreiber
Public bug reported:

Hi, I want to use the network RBAC Feature to give 'access_as_external'
to a target project so that this project is able to allocate floating
IPs from that external network (without being owner or admin).

Let's say the external network has two subnets.

So far so good, but if I want to select a specific subnet when
allocating a floating IP, that is not possible. I don't see the subnets
(I see their IDs, but that is useless for deciding which subnet to
allocate the floating IP from), and I can't grant access in the policy
based on the 'router:external' field of the network.
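
For reference, the policy entry involved is of this shape (the rule name
is my assumption and should be checked against the deployed policy.json):

    "get_network:router:external": "rule:regular_user",

What the report asks for amounts to being able to reference that field
when granting subnet visibility, e.g. something like the following
hypothetical rule, which neutron does not currently evaluate:

    "get_subnet": "rule:admin_or_owner or rule:external_network_user",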

** Affects: neutron
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1653932

Title:
  network router:external field not exported

Status in neutron:
  New

Bug description:
  Hi, I want to use the network RBAC Feature to give
  'access_as_external' to a target project so that this project is able
  to allocate floating IPs from that external network (without being
  owner or admin).

  Let's say the external network has two subnets.

  So far so good, but if I want to select a specific subnet when
  allocating a floating IP, that is not possible. I don't see the
  subnets (I see their IDs, but that is useless for deciding which
  subnet to allocate the floating IP from), and I can't grant access in
  the policy based on the 'router:external' field of the network.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1653932/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1641220] Re: WARNING:stevedore.named:Could not load neutron.ml2.sriov

2017-01-04 Thread OpenStack Infra
Reviewed:  https://review.openstack.org/413439
Committed: 
https://git.openstack.org/cgit/openstack/neutron/commit/?id=e9efe86856c77721e289654bea8629cd1d160b76
Submitter: Jenkins
Branch: master

commit e9efe86856c77721e289654bea8629cd1d160b76
Author: Moshe Levi 
Date:   Wed Dec 21 08:26:59 2016 +0200

SR-IOV: remove ml2_conf_sriov.ini from oslo-config-generator

Patch I42dadfd0b62730ca2d34d37cb63f19f6fec75567 removed the
supported_pci_vendor_devs option, so no additional options are
required for the sriov ml2 mech driver. This is a cleanup patch to
also remove ml2_conf_sriov.ini from the oslo-config-generator.

Closes-Bug: #1641220

Change-Id: Ida6c0930ce65169a9bc59ef80d6b427b2d5d4e09


** Changed in: neutron
   Status: In Progress => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1641220

Title:
  WARNING:stevedore.named:Could not load neutron.ml2.sriov

Status in neutron:
  Fix Released

Bug description:
  Observed during pep8 and more particularly during:

  generate_config_file_samples.sh

  
  
http://logs.openstack.org/65/357865/21/check/gate-neutron-pep8-ubuntu-xenial/3f839b8/console.html#_2016-11-11_13_48_45_047026

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1641220/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1653904] Re: type of "stats['replication_enabled']" in "rbd.py" is not correct

2017-01-04 Thread Zhao Liqiang
** Project changed: nova => cinder

** Changed in: cinder
 Assignee: (unassigned) => Zhao Liqiang (zhoaliqiang2017)

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1653904

Title:
  type of "stats['replication_enabled']" in "rbd.py" is not correct

Status in Cinder:
  New

Bug description:
  In file cinder/volume/drivers/rbd.py, the type of
  stats['replication_enabled'] is not correct. This variable is used in
  the method "_update_volume_stats" to report cluster status to the
  scheduler. The scheduler needs this variable to be a string when
  filtering hosts, but the current type is boolean. When creating a
  volume, the scheduler may report an error.

To manage notifications about this bug go to:
https://bugs.launchpad.net/cinder/+bug/1653904/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1653904] [NEW] type of "stats['replication_enabled']" in "rbd.py" is not correct

2017-01-04 Thread Zhao Liqiang
Public bug reported:

In file cinder/volume/drivers/rbd.py, the type of
stats['replication_enabled'] is not correct. This variable is used in
the method "_update_volume_stats" to report cluster status to the
scheduler. The scheduler needs this variable to be a string when
filtering hosts, but the current type is boolean. When creating a
volume, the scheduler may report an error.
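
Sketched against the report's claim (illustrative values only, not the
actual driver code):

def update_volume_stats():
    stats = {
        'volume_backend_name': 'ceph',
        'storage_protocol': 'ceph',
        # Reported today as a boolean:
        #   'replication_enabled': True,
        # String form this report argues the scheduler's capability
        # filtering expects:
        'replication_enabled': 'True',
    }
    return stats

print(update_volume_stats()['replication_enabled'])  # 'True' (a string)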

** Affects: cinder
 Importance: Undecided
 Assignee: Zhao Liqiang (zhoaliqiang2017)
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1653904

Title:
  type of "stats['replication_enabled']" in "rbd.py" is not correct

Status in Cinder:
  New

Bug description:
  In file cinder/volume/drivers/rbd.py, the type of
  stats['replication_enabled'] is not correct. This variable is used in
  the method "_update_volume_stats" to report cluster status to the
  scheduler. The scheduler needs this variable to be a string when
  filtering hosts, but the current type is boolean. When creating a
  volume, the scheduler may report an error.

To manage notifications about this bug go to:
https://bugs.launchpad.net/cinder/+bug/1653904/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp