[Yahoo-eng-team] [Bug 1612494] [NEW] neutronv2 unit tests fail with python-neutronclient 5.1.0

2016-08-11 Thread Matt Riedemann
Public bug reported:

Caught here:

https://review.openstack.org/#/c/353443/

http://logs.openstack.org/43/353443/2/check/gate-cross-nova-python27-db-ubuntu-xenial/2b8780d/console.html#_2016-08-11_20_43_00_664121

2016-08-11 20:43:00.664195 | 
nova.tests.unit.api.openstack.compute.test_neutron_security_groups.TestNeutronSecurityGroupsV21.test_disassociate
2016-08-11 20:43:00.664238 | 
-
2016-08-11 20:43:00.664251 | 
2016-08-11 20:43:00.664270 | Captured traceback:
2016-08-11 20:43:00.664288 | ~~~
2016-08-11 20:43:00.664310 | Traceback (most recent call last):
2016-08-11 20:43:00.664351 |   File 
"nova/tests/unit/api/openstack/compute/test_neutron_security_groups.py", line 
333, in test_disassociate
2016-08-11 20:43:00.664380 | self.manager._removeSecurityGroup(req, 
UUID_SERVER, body)
2016-08-11 20:43:00.664409 |   File "nova/api/openstack/extensions.py", 
line 370, in wrapped
2016-08-11 20:43:00.664438 | raise 
webob.exc.HTTPInternalServerError(explanation=msg)
2016-08-11 20:43:00.664490 | webob.exc.HTTPInternalServerError: Unexpected 
API Error. Please report this at http://bugs.launchpad.net/nova/ and attach the 
Nova API log if possible.
2016-08-11 20:43:00.664515 | 
2016-08-11 20:43:00.664528 | 
2016-08-11 20:43:00.664541 | 
2016-08-11 20:43:00.664559 | Captured pythonlogging:
2016-08-11 20:43:00.664578 | ~~~
2016-08-11 20:43:00.664616 | 2016-08-11 20:37:58,280 ERROR 
[nova.api.openstack.extensions] Unexpected exception in API method
2016-08-11 20:43:00.664638 | Traceback (most recent call last):
2016-08-11 20:43:00.664667 |   File "nova/api/openstack/extensions.py", 
line 338, in wrapped
2016-08-11 20:43:00.664688 | return f(*args, **kwargs)
2016-08-11 20:43:00.664725 |   File 
"nova/api/openstack/compute/security_groups.py", line 424, in 
_removeSecurityGroup
2016-08-11 20:43:00.664746 | context, id, group_name)
2016-08-11 20:43:00.664778 |   File 
"nova/api/openstack/compute/security_groups.py", line 391, in _invoke
2016-08-11 20:43:00.664815 | method(context, instance, group_name)
2016-08-11 20:43:00.664852 |   File 
"nova/network/security_group/neutron_driver.py", line 491, in 
remove_from_instance
2016-08-11 20:43:00.664872 | context.project_id)
2016-08-11 20:43:00.664937 |   File 
"/home/jenkins/workspace/gate-cross-nova-python27-db-ubuntu-xenial/.tox/py27/local/lib/python2.7/site-packages/neutronclient/neutron/v2_0/__init__.py",
 line 61, in find_resourceid_by_name_or_id
2016-08-11 20:43:00.664960 | parent_id, fields='id')['id']
2016-08-11 20:43:00.665025 |   File 
"/home/jenkins/workspace/gate-cross-nova-python27-db-ubuntu-xenial/.tox/py27/local/lib/python2.7/site-packages/neutronclient/neutron/v2_0/__init__.py",
 line 52, in find_resource_by_name_or_id
2016-08-11 20:43:00.665055 | return client.find_resource(resource, 
name_or_id, project_id,
2016-08-11 20:43:00.665086 | AttributeError: 'MockClient' object has no 
attribute 'find_resource'

These are the changes in 5.1.0:

https://github.com/openstack/python-neutronclient/compare/5.0.0...5.1.0

Looks like this is the breaking change:

https://review.openstack.org/#/c/348096/

This probably fails in the nova unit tests because we use mox to mock
the neutronclient, so the mocked client lacks the new find_resource
method that neutronclient 5.1.0 now calls.
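
The failure mode can be sketched minimally (this is an illustrative stand-in, not nova's actual test code): a hand-written stub only implements the methods it was written with, so when the library grows a new client-side helper such as find_resource, the stub raises AttributeError where the real client would not.

```python
class MockClient(object):
    """Stand-in stub with a fixed method set; find_resource is missing."""
    def list_security_groups(self, **params):
        return {'security_groups': []}


def find_resourceid_by_name_or_id(client, resource, name_or_id):
    # Simplified shape of the neutronclient 5.1.0 helper seen in the
    # traceback above: it delegates to client.find_resource().
    return client.find_resource(resource, name_or_id)['id']


try:
    find_resourceid_by_name_or_id(MockClient(), 'security_group', 'default')
except AttributeError as exc:
    print(exc)  # 'MockClient' object has no attribute 'find_resource'
```

Mocks built with a spec against the real client class (rather than free-form stubs) would have surfaced this as a test-setup error instead of an unexpected 500.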

** Affects: nova
 Importance: High
 Status: Confirmed

** Changed in: nova
   Status: New => Confirmed

** Changed in: nova
   Importance: Undecided => High

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1612494

Title:
  neutronv2 unit tests fail with python-neutronclient 5.1.0

Status in OpenStack Compute (nova):
  Confirmed

Bug description:
  Caught here:

  https://review.openstack.org/#/c/353443/

  http://logs.openstack.org/43/353443/2/check/gate-cross-nova-python27-db-ubuntu-xenial/2b8780d/console.html#_2016-08-11_20_43_00_664121

  2016-08-11 20:43:00.664195 | 
nova.tests.unit.api.openstack.compute.test_neutron_security_groups.TestNeutronSecurityGroupsV21.test_disassociate
  2016-08-11 20:43:00.664238 | 
-
  2016-08-11 20:43:00.664251 | 
  2016-08-11 20:43:00.664270 | Captured traceback:
  2016-08-11 20:43:00.664288 | ~~~
  2016-08-11 20:43:00.664310 | Traceback (most recent call last):
  2016-08-11 20:43:00.664351 |   File 
"nova/tests/unit/api/openstack/compute/test_neutron_security_groups.py", line 
333, in test_disassociate
  2016-08-11 20:43:00.664380 | self.manager._removeSecurityGroup(req, 
UUID_SERVER, body)
  2016-08-11 20:43:00.664409 |   File "nova/api/openstack/extensions.py", 
line 370, in wrapped
  2016-08-11 

[Yahoo-eng-team] [Bug 1612491] [NEW] Glance metadefs for OS::Nova::Aggregate should be OS::Nova::HostAggregate

2016-08-11 Thread Travis Tripp
Public bug reported:

https://github.com/openstack/glance/search?utf8=%E2%9C%93&q=aggregate

The metadata definitions in etc/metadefs allow each namespace to be
associated with a resource type in OpenStack. We realized that we used
OS::Nova::Aggregate instead of OS::Nova::HostAggregate in Glance. This
doesn’t align with Heat [0] or Searchlight [1]. It should be noted that
Heat added the resource type after Glance had metadefs.

Glance Issue:

There are a couple of metadef files that contain OS::Nova::Aggregate and
need to change to OS::Nova::HostAggregate. I also see that the resource
type appears in one of the db scripts. That script simply adds some
initial "resource types" to the database [3]. It should be noted that
there is no hard dependency on that resource type in the DB script: you
can add new resource types at any time via the API or JSON files and
they are automatically added.
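
The file-side rename could be sketched like this (a rough sketch, assuming the standard metadef JSON layout with a `resource_type_associations` list of named entries; the directory path and the idea of scripting it are illustrative, and the DB upgrade is still needed separately):

```python
import json
import pathlib

OLD, NEW = "OS::Nova::Aggregate", "OS::Nova::HostAggregate"


def rename_resource_type(metadefs_dir):
    """Rewrite resource_type_associations entries in every metadef JSON file."""
    for path in pathlib.Path(metadefs_dir).glob("*.json"):
        data = json.loads(path.read_text())
        changed = False
        for assoc in data.get("resource_type_associations", []):
            if assoc.get("name") == OLD:
                assoc["name"] = NEW
                changed = True
        if changed:
            # Only rewrite files that actually referenced the old name.
            path.write_text(json.dumps(data, indent=2))
```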

The DB will need to be upgraded similar to
https://review.openstack.org/#/c/272271/

Horizon Issue:

The aggregate update metadata action should retrieve
OS::Nova::HostAggregate instead. The Horizon patch shouldn't merge until
the glance patch merges, but there is not an actual hard dependency
between the two.  Horizon may need to request both in order to be
backwards compatible.

[0] 
http://docs.openstack.org/developer/heat/template_guide/openstack.html#OS::Nova::HostAggregate
[1] 
https://github.com/openstack/horizon/blob/master/openstack_dashboard/static/app/core/metadata/metadata.service.js#L86
[3] https://github.com/openstack/glance/search?utf8=%E2%9C%93&q=aggregate

The main concern I have is that with the new resource registry, we will
register Host Aggregates as OS::Nova::HostAggregate.  It just won't
align.

The overall changes here are simple, but may have backwards
compatibility concerns with older Glance installations.

Finally:

It should be noted that updating namespaces in Glance is already
possible with glance-manage. E.g.

/opt/stack/glance$ glance-manage db_load_metadefs etc/metadefs -h
usage: glance-manage db_load_metadefs [-h]
  [path] [merge] [prefer_new] [overwrite]

positional arguments:
  path
  merge
  prefer_new
  overwrite

So, you just have to call:

/opt/stack/glance$ glance-manage db_load_metadefs etc/metadefs true true

See also: https://youtu.be/zJpHXdBOoeM

** Affects: glance
 Importance: Undecided
 Status: New

** Affects: horizon
 Importance: Undecided
 Status: New

** Also affects: horizon
   Importance: Undecided
   Status: New

** Description changed:

  https://github.com/openstack/glance/search?utf8=%E2%9C%93=aggregate
  
  The metadata definitions in etc/metadefs allow each namespace to be
  associated with a resource type in OpenStack. We realized that we used
  OS::Nova::Aggregate instead of OS::Nova::HostAggregate in Glance. This
  doesn’t align with Heat [0] or Searchlight [1]. It should be noted that
  Heat added the resource type after Glance had metadefs.
  
  Glance Issue:
  
  There are a couple of metadef files that have OS::Nova::Aggregate that
  need to change to OS::Nova::HostAggregate. I see also that OS::Nova: is
  in one of the db scripts. That script simply adds some initial "resource
  types" to the database. [3]. It should be noted that there is no hard
  dependency on that resource type in the DB script. You can add new
  resource types at any time via API or JSON files and they are
  automatically added.
  
  The DB will need to be upgraded similar to
  https://review.openstack.org/#/c/272271/
  
  Horizon Issue:
  
  The aggregate update metadata action should retrieve
  OS::Nova::HostAggregate instead. The Horizon patch shouldn't merge until
  the glance patch merges, but there is not an actual hard dependency
  between the two.  Horizon may need to request both in order to be
  backwards compatible.
  
  [0] 
http://docs.openstack.org/developer/heat/template_guide/openstack.html#OS::Nova::HostAggregate
  [1] 
https://github.com/openstack/horizon/blob/master/openstack_dashboard/static/app/core/metadata/metadata.service.js#L86
  [3] https://github.com/openstack/glance/search?utf8=%E2%9C%93=aggregate
  
+ The main concern I have is that with the new resource registry, we will
+ register Host Aggregates as OS::Nova::HostAggregate.  It just won't
+ align.
+ 
+ The overall changes here are simple, but may have backwards
+ compatibility concerns with older Glance installations.
+ 
  Finally:
  
  It should be noted that updating namespaces in Glance is already
  possible with glance-manage. E.g.
  
  /opt/stack/glance$ glance-manage db_load_metadefs etc/metadefs -h
  usage: glance-manage db_load_metadefs [-h]
-   [path] [merge] [prefer_new] [overwrite]
+   [path] [merge] [prefer_new] [overwrite]
  
  positional arguments:
-   path
-   merge
-   prefer_new
-   overwrite
+   path
+   merge
+   prefer_new
+   overwrite
  
  So, you just have to call:
  
 

[Yahoo-eng-team] [Bug 1612485] [NEW] 500 error is returned when a string that partially matches "application/json" is specified as the Content-Type HTTP header.

2016-08-11 Thread Kengo Hobo
Public bug reported:

Currently, the 'get_content_type' method in neutron/wsgi.py validates
the requested format with an 'in' (substring) check.
Thus, any string that is a partial match of 'application/json' is judged
valid.
However, no valid serializer can then be found for that format, so the
request fails with a 500 instead of being rejected.
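
A minimal sketch of the bug (simplified from the description above, not neutron's literal code): `x in "application/json"` is a substring test, so "ppli" passes; an exact-match check rejects it cleanly.

```python
def get_content_type_buggy(content_type):
    # Substring check: "ppli" in "application/json" is True, so a bogus
    # Content-Type slips through and the serializer lookup fails later.
    if content_type in "application/json":
        return "application/json"
    raise ValueError("unsupported content type")


def get_content_type_fixed(content_type):
    # Membership in a tuple requires an exact match.
    if content_type in ("application/json",):
        return content_type
    raise ValueError("unsupported content type")


assert get_content_type_buggy("ppli") == "application/json"  # wrongly accepted
try:
    get_content_type_fixed("ppli")
except ValueError:
    print("rejected")  # caller can now return a 4xx instead of crashing
```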

request
===
ubuntu@neutron-ml2:/opt/stack/neutron$ curl -g -i -X GET 
http://172.16.1.29:9696/v2.0/networks -H "X-Auth-Token: $TOKEN" -H 
"Content-type: ppli"
HTTP/1.1 500 Internal Server Error
Content-Length: 114
Content-Type: text/plain; charset=UTF-8
X-Openstack-Request-Id: req-c82ae85b-dbec-49ae-ad1f-d1104c437acd
Date: Fri, 12 Aug 2016 03:18:14 GMT

500 Internal Server Error

The server has either erred or is incapable of performing the requested 
operation.
=

trace in neutron-server
=
2016-08-12 03:18:14.259 13757 ERROR oslo_middleware.catch_errors Traceback 
(most recent call last):
2016-08-12 03:18:14.259 13757 ERROR oslo_middleware.catch_errors   File 
"/usr/local/lib/python2.7/dist-packages/oslo_middleware/catch_errors.py", line 
38, in __call__
2016-08-12 03:18:14.259 13757 ERROR oslo_middleware.catch_errors response = 
req.get_response(self.application)
2016-08-12 03:18:14.259 13757 ERROR oslo_middleware.catch_errors   File 
"/usr/local/lib/python2.7/dist-packages/webob/request.py", line 1299, in send
2016-08-12 03:18:14.259 13757 ERROR oslo_middleware.catch_errors 
application, catch_exc_info=False)
2016-08-12 03:18:14.259 13757 ERROR oslo_middleware.catch_errors   File 
"/usr/local/lib/python2.7/dist-packages/webob/request.py", line 1263, in 
call_application
2016-08-12 03:18:14.259 13757 ERROR oslo_middleware.catch_errors app_iter = 
application(self.environ, start_response)
2016-08-12 03:18:14.259 13757 ERROR oslo_middleware.catch_errors   File 
"/usr/local/lib/python2.7/dist-packages/webob/dec.py", line 130, in __call__
2016-08-12 03:18:14.259 13757 ERROR oslo_middleware.catch_errors resp = 
self.call_func(req, *args, **self.kwargs)
2016-08-12 03:18:14.259 13757 ERROR oslo_middleware.catch_errors   File 
"/usr/local/lib/python2.7/dist-packages/webob/dec.py", line 195, in call_func
2016-08-12 03:18:14.259 13757 ERROR oslo_middleware.catch_errors return 
self.func(req, *args, **kwargs)
2016-08-12 03:18:14.259 13757 ERROR oslo_middleware.catch_errors   File 
"/usr/local/lib/python2.7/dist-packages/keystonemiddleware/auth_token/__init__.py", line 331, in __call__
2016-08-12 03:18:14.259 13757 ERROR oslo_middleware.catch_errors response = 
req.get_response(self._app)
2016-08-12 03:18:14.259 13757 ERROR oslo_middleware.catch_errors   File 
"/usr/local/lib/python2.7/dist-packages/webob/request.py", line 1299, in send
2016-08-12 03:18:14.259 13757 ERROR oslo_middleware.catch_errors 
application, catch_exc_info=False)
2016-08-12 03:18:14.259 13757 ERROR oslo_middleware.catch_errors   File 
"/usr/local/lib/python2.7/dist-packages/webob/request.py", line 1263, in 
call_application
2016-08-12 03:18:14.259 13757 ERROR oslo_middleware.catch_errors app_iter = 
application(self.environ, start_response)
2016-08-12 03:18:14.259 13757 ERROR oslo_middleware.catch_errors   File 
"/usr/local/lib/python2.7/dist-packages/webob/dec.py", line 144, in __call__
2016-08-12 03:18:14.259 13757 ERROR oslo_middleware.catch_errors return 
resp(environ, start_response)
2016-08-12 03:18:14.259 13757 ERROR oslo_middleware.catch_errors   File 
"/usr/local/lib/python2.7/dist-packages/webob/dec.py", line 144, in __call__
2016-08-12 03:18:14.259 13757 ERROR oslo_middleware.catch_errors return 
resp(environ, start_response)
2016-08-12 03:18:14.259 13757 ERROR oslo_middleware.catch_errors   File 
"/usr/local/lib/python2.7/dist-packages/routes/middleware.py", line 141, in 
__call__
2016-08-12 03:18:14.259 13757 ERROR oslo_middleware.catch_errors response = 
self.app(environ, start_response)
2016-08-12 03:18:14.259 13757 ERROR oslo_middleware.catch_errors   File 
"/usr/local/lib/python2.7/dist-packages/webob/dec.py", line 144, in __call__
2016-08-12 03:18:14.259 13757 ERROR oslo_middleware.catch_errors return 
resp(environ, start_response)
2016-08-12 03:18:14.259 13757 ERROR oslo_middleware.catch_errors   File 
"/usr/local/lib/python2.7/dist-packages/webob/dec.py", line 144, in __call__
2016-08-12 03:18:14.259 13757 ERROR oslo_middleware.catch_errors return 
resp(environ, start_response)
2016-08-12 03:18:14.259 13757 ERROR oslo_middleware.catch_errors   File 
"/usr/local/lib/python2.7/dist-packages/routes/middleware.py", line 141, in 
__call__
2016-08-12 03:18:14.259 13757 ERROR oslo_middleware.catch_errors response = 
self.app(environ, start_response)
2016-08-12 03:18:14.259 13757 ERROR oslo_middleware.catch_errors   File 
"/usr/local/lib/python2.7/dist-packages/webob/dec.py", line 144, in __call__
2016-08-12 03:18:14.259 13757 ERROR oslo_middleware.catch_errors return 
resp(environ, start_response)
2016-08-12 03:18:14.259 13757 ERROR 

[Yahoo-eng-team] [Bug 1612281] Re: Neutron Linuxbridge jobs failing with 'operation failed: failed to read XML'

2016-08-11 Thread OpenStack Infra
Reviewed:  https://review.openstack.org/354143
Committed: 
https://git.openstack.org/cgit/openstack/nova/commit/?id=26399c700577f7a98213ec908dd2f478270f494e
Submitter: Jenkins
Branch: master

commit 26399c700577f7a98213ec908dd2f478270f494e
Author: Daniel P. Berrange 
Date:   Thu Aug 11 16:11:01 2016 +0100

network: fix handling of linux-bridge in os-vif conversion

The nova.network.model.Network class uses names
'should_create_{bridge,vlan}' not 'should_provide_{bridge,vlan}'

The bridge_interface attribute should always be set, even if
to None, since None is a valid value.

The vlan attribute is compulsory if should_create_vlan is
set.

Closes-bug: 1612281
Change-Id: I245f560156d596be14ef9181bfb881be9680c166
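
The first point in the commit message can be illustrated with a plain dict standing in for nova.network.model.Network (attribute names are taken from the commit message; everything else here is illustrative):

```python
# The model exposes 'should_create_bridge'; code reading the wrong key
# 'should_provide_bridge' silently gets the default, so the conversion
# never asked os-vif to create the bridge.
network = {'should_create_bridge': True, 'vlan': 100, 'bridge_interface': None}

assert network.get('should_provide_bridge') is None   # wrong key: looks unset
assert network.get('should_create_bridge') is True    # correct key

# 'bridge_interface' must be passed through even when it is None:
# None is a valid value, distinct from the key being absent entirely.
assert 'bridge_interface' in network and network['bridge_interface'] is None
```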


** Changed in: nova
   Status: Fix Committed => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1612281

Title:
  Neutron Linuxbridge jobs failing with 'operation failed: failed to
  read XML'

Status in neutron:
  Invalid
Status in OpenStack Compute (nova):
  Fix Released
Status in os-vif:
  Invalid

Bug description:
  Example: http://logs.openstack.org/64/353664/2/check/gate-tempest-dsvm-neutron-linuxbridge/591295c/console.html#_2016-08-11_12_57_42_191711

  2016-08-11 12:57:42.191740 | Traceback (most recent call last):
  2016-08-11 12:57:42.191760 |   File "tempest/test.py", line 106, in 
wrapper
  2016-08-11 12:57:42.191779 | return f(self, *func_args, **func_kwargs)
  2016-08-11 12:57:42.191814 |   File 
"tempest/scenario/test_server_advanced_ops.py", line 90, in 
test_server_sequence_suspend_resume
  2016-08-11 12:57:42.191825 | 'ACTIVE')
  2016-08-11 12:57:42.191851 |   File "tempest/common/waiters.py", line 75, 
in wait_for_server_status
  2016-08-11 12:57:42.191865 | server_id=server_id)
  2016-08-11 12:57:42.191905 | tempest.exceptions.BuildErrorException: 
Server 15dcd67e-dd8a-4805-b14b-798c6d2a6e87 failed to build and is in ERROR 
status
  2016-08-11 12:57:42.191942 | Details: {u'message': u'operation failed: 
failed to read XML', u'created': u'2016-08-11T12:52:28Z', u'code': 500}

  In nova-cpu logs, we see the same error:
  http://logs.openstack.org/64/353664/2/check/gate-tempest-dsvm-neutron-linuxbridge/591295c/logs/screen-n-cpu.txt.gz?level=TRACE#_2016-08-11_12_55_21_025

  Note that the log contains other suspicious errors, like:

  - privsep unexpected errors: 
http://logs.openstack.org/64/353664/2/check/gate-tempest-dsvm-neutron-linuxbridge/591295c/logs/screen-n-cpu.txt.gz?level=TRACE#_2016-08-11_12_25_47_284
  - libvirt failing to locate brq: 
http://logs.openstack.org/64/353664/2/check/gate-tempest-dsvm-neutron-linuxbridge/591295c/logs/screen-n-cpu.txt.gz?level=TRACE#_2016-08-11_12_26_13_346

  Agent logs seem more or less clean.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1612281/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1612466] [NEW] aggregate object file does not define LOG error in Liberty

2016-08-11 Thread linbing
Public bug reported:

the error output in nova-api.log is :
2016-08-09 14:50:19.131 4532 ERROR nova.api.openstack.extensions [req-56dda11e-3041-4fac-8342-bb643643a1c7 e88120bc348c4f3ca37207ef4bcd3b90 43b2137632ac4ad8b2df8c0d27f13fb8 - - -] Unexpected exception in API method
2016-08-09 14:50:19.131 4532 ERROR nova.api.openstack.extensions Traceback (most recent call last):
2016-08-09 14:50:19.131 4532 ERROR nova.api.openstack.extensions   File "/usr/lib/python2.7/site-packages/nova/api/openstack/extensions.py", line 478, in wrapped
2016-08-09 14:50:19.131 4532 ERROR nova.api.openstack.extensions   File "/usr/lib/python2.7/site-packages/nova/api/validation/__init__.py", line 73, in wrapper
2016-08-09 14:50:19.131 4532 ERROR nova.api.openstack.extensions   File "/usr/lib/python2.7/site-packages/nova/api/openstack/compute/aggregates.py", line 169, in _remove_host
2016-08-09 14:50:19.131 4532 ERROR nova.api.openstack.extensions   File "/usr/lib/python2.7/site-packages/nova/exception.py", line 89, in wrapped
2016-08-09 14:50:19.131 4532 ERROR nova.api.openstack.extensions   File "/usr/lib/python2.7/site-packages/oslo_utils/excutils.py", line 195, in __exit__
2016-08-09 14:50:19.131 4532 ERROR nova.api.openstack.extensions     six.reraise(self.type_, self.value, self.tb)
2016-08-09 14:50:19.131 4532 ERROR nova.api.openstack.extensions   File "/usr/lib/python2.7/site-packages/nova/exception.py", line 72, in wrapped
2016-08-09 14:50:19.131 4532 ERROR nova.api.openstack.extensions   File "/usr/lib/python2.7/site-packages/nova/compute/api.py", line 3908, in remove_host_from_aggregate
2016-08-09 14:50:19.131 4532 ERROR nova.api.openstack.extensions   File "/usr/lib/python2.7/site-packages/oslo_versionedobjects/base.py", line 213, in wrapper
2016-08-09 14:50:19.131 4532 ERROR nova.api.openstack.extensions     return fn(self, *args, **kwargs)
2016-08-09 14:50:19.131 4532 ERROR nova.api.openstack.extensions   File "/usr/lib/python2.7/site-packages/nova/objects/aggregate.py", line 165, in delete_host
2016-08-09 14:50:19.131 4532 ERROR nova.api.openstack.extensions   File "/usr/lib/python2.7/site-packages/nova/objects/aggregate.py", line 64, in update_aggregate_for_instances
2016-08-09 14:50:19.131 4532 ERROR nova.api.openstack.extensions NameError: global name 'LOG' is not defined
2016-08-09 14:50:19.131 4532 ERROR nova.api.openstack.extensions
2016-08-09 14:50:19.148 4532 INFO nova.api.openstack.wsgi [req-56dda11e-3041-4fac-8342-bb643643a1c7 e88120bc348c4f3ca37207ef4bcd3b90 43b2137632ac4ad8b2df8c0d27f13fb8 - - -] HTTP exception thrown: Unexpected API Error. Please report this at http://bugs.launchpad.net/nova/ and attach the Nova API log if possible.


So I found that in nova/objects/aggregate.py the function
update_aggregate_for_instances uses LOG.exception to write a message to
nova-api.log when instance.save() raises an error, but this module does
not define LOG, so it reports an Unexpected API Error instead.
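
A minimal sketch of the bug and the one-line fix (function and message names here are illustrative, not nova's exact code):

```python
import logging


def update_aggregate_buggy():
    try:
        raise RuntimeError("instance.save failed")
    except RuntimeError:
        # _LOG is intentionally never defined, mirroring the missing
        # module-level logger: this raises NameError, which the API
        # layer then surfaces as "Unexpected API Error".
        _LOG.exception("update_aggregate_for_instances failed")


LOG = logging.getLogger(__name__)  # the usual one-line fix


def update_aggregate_fixed():
    try:
        raise RuntimeError("instance.save failed")
    except RuntimeError:
        # Logs the message plus the traceback instead of crashing.
        LOG.exception("update_aggregate_for_instances failed")
```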

** Affects: nova
 Importance: Undecided
 Assignee: linbing (hawkerous)
 Status: Confirmed

** Changed in: nova
 Assignee: (unassigned) => linbing (hawkerous)

** Changed in: nova
   Status: New => Confirmed

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1612466

Title:
  aggregate object file does not define LOG error in Liberty

Status in OpenStack Compute (nova):
  Confirmed

Bug description:
  the error output in nova-api.log is :
  2016-08-09 14:50:19.131 4532 ERROR nova.api.openstack.extensions [req-56dda11e-3041-4fac-8342-bb643643a1c7 e88120bc348c4f3ca37207ef4bcd3b90 43b2137632ac4ad8b2df8c0d27f13fb8 - - -] Unexpected exception in API method
  2016-08-09 14:50:19.131 4532 ERROR nova.api.openstack.extensions Traceback (most recent call last):
  2016-08-09 14:50:19.131 4532 ERROR nova.api.openstack.extensions   File "/usr/lib/python2.7/site-packages/nova/api/openstack/extensions.py", line 478, in wrapped
  2016-08-09 14:50:19.131 4532 ERROR nova.api.openstack.extensions   File "/usr/lib/python2.7/site-packages/nova/api/validation/__init__.py", line 73, in wrapper
  2016-08-09 14:50:19.131 4532 ERROR nova.api.openstack.extensions   File "/usr/lib/python2.7/site-packages/nova/api/openstack/compute/aggregates.py", line 169, in _remove_host
  2016-08-09 14:50:19.131 4532 ERROR nova.api.openstack.extensions   File "/usr/lib/python2.7/site-packages/nova/exception.py", line 89, in wrapped
  2016-08-09 14:50:19.131 4532 ERROR nova.api.openstack.extensions   File "/usr/lib/python2.7/site-packages/oslo_utils/excutils.py", line 195, in __exit__
  2016-08-09 14:50:19.131 4532 ERROR nova.api.openstack.extensions     six.reraise(self.type_, self.value, self.tb)
  2016-08-09 14:50:19.131 4532 ERROR nova.api.openstack.extensions   File "/usr/lib/python2.7/site-packages/nova/exception.py", line 72, in wrapped
  

[Yahoo-eng-team] [Bug 1611991] Re: [ovs firewall] Port 23 is open on booted vms with only ping/ssh on 22 allowed.

2016-08-11 Thread Jeremy Stanley
What change introduced this bug? Is it present in stable branches too,
or just master?

** Also affects: ossa
   Importance: Undecided
   Status: New

** Information type changed from Public to Public Security

** Changed in: ossa
   Status: New => Incomplete

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1611991

Title:
  [ovs firewall] Port 23 is open on booted vms with only ping/ssh on 22
  allowed.

Status in neutron:
  In Progress
Status in OpenStack Security Advisory:
  Incomplete

Bug description:
  Seen on master devstack, ubuntu xenial.

  Steps to reproduce:

  1. Enable ovs firewall in /etc/neutron/plugins/ml2/ml2.conf

  [securitygroup]
  firewall_driver = openvswitch

  2. Create a security group with icmp, tcp to 22.

  3. Boot a VM, assign a floating ip.

  4. Check that port 23 can be accessed via tcp (telnet, nc, etc).

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1611991/+subscriptions



[Yahoo-eng-team] [Bug 1612456] Re: Neutron returns HTTP500 when deleting subnet on the gate

2016-08-11 Thread Ken'ichi Ohmichi
*** This bug is a duplicate of bug 1594376 ***
https://bugs.launchpad.net/bugs/1594376

Liberty contains this problem on
https://github.com/openstack/neutron/blob/stable/liberty/neutron/plugins/ml2/plugin.py#L977

Patches are https://review.openstack.org/#/c/354445/ and
https://review.openstack.org/#/c/35/

** Project changed: tempest => neutron

** This bug has been marked a duplicate of bug 1594376
   Delete subnet fails with "ObjectDeletedError: Instance '' has been deleted, or its row is otherwise not present."

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1612456

Title:
  Neutron returns HTTP500 when deleting subnet on the gate

Status in neutron:
  New

Bug description:
  test_dhcpv6_64_subnets of gate-tempest-dsvm-neutron-full-ubuntu-
  trusty-liberty failed with an internal server error:

  http://logs.openstack.org/39/351939/3/check/gate-tempest-dsvm-neutron-full-ubuntu-trusty-liberty/a78ddac

  
  Traceback (most recent call last):
File "tempest/api/network/test_dhcp_ipv6.py", line 249, in 
test_dhcpv6_64_subnets
  self._clean_network()
File "tempest/api/network/test_dhcp_ipv6.py", line 81, in _clean_network
  self.subnets_client.delete_subnet(subnet['id'])
File "tempest/lib/services/network/subnets_client.py", line 49, in 
delete_subnet
  return self.delete_resource(uri)
File "tempest/lib/services/network/base.py", line 41, in delete_resource
  resp, body = self.delete(req_uri)
File "tempest/lib/common/rest_client.py", line 304, in delete
  return self.request('DELETE', url, extra_headers, headers, body)
File "tempest/lib/common/rest_client.py", line 667, in request
  resp, resp_body)
File "tempest/lib/common/rest_client.py", line 831, in _error_checker
  message=message)
  tempest.lib.exceptions.ServerFault: Got server fault
  Details: Request Failed: internal server error while processing your request.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1612456/+subscriptions



[Yahoo-eng-team] [Bug 1612456] [NEW] Neutron returns HTTP500 when deleting subnet on the gate

2016-08-11 Thread Launchpad Bug Tracker
You have been subscribed to a public bug:

test_dhcpv6_64_subnets of gate-tempest-dsvm-neutron-full-ubuntu-trusty-
liberty failed with an internal server error:

http://logs.openstack.org/39/351939/3/check/gate-tempest-dsvm-neutron-full-ubuntu-trusty-liberty/a78ddac


Traceback (most recent call last):
  File "tempest/api/network/test_dhcp_ipv6.py", line 249, in 
test_dhcpv6_64_subnets
self._clean_network()
  File "tempest/api/network/test_dhcp_ipv6.py", line 81, in _clean_network
self.subnets_client.delete_subnet(subnet['id'])
  File "tempest/lib/services/network/subnets_client.py", line 49, in 
delete_subnet
return self.delete_resource(uri)
  File "tempest/lib/services/network/base.py", line 41, in delete_resource
resp, body = self.delete(req_uri)
  File "tempest/lib/common/rest_client.py", line 304, in delete
return self.request('DELETE', url, extra_headers, headers, body)
  File "tempest/lib/common/rest_client.py", line 667, in request
resp, resp_body)
  File "tempest/lib/common/rest_client.py", line 831, in _error_checker
message=message)
tempest.lib.exceptions.ServerFault: Got server fault
Details: Request Failed: internal server error while processing your request.

** Affects: neutron
 Importance: Undecided
 Status: New

-- 
Neutron returns HTTP500 when deleting subnet on the gate
https://bugs.launchpad.net/bugs/1612456
You received this bug notification because you are a member of Yahoo! 
Engineering Team, which is subscribed to neutron.



[Yahoo-eng-team] [Bug 1612281] Re: Neutron Linuxbridge jobs failing with 'operation failed: failed to read XML'

2016-08-11 Thread Daniel Berrange
Fixed in Nova by https://review.openstack.org/#/c/354143/

** Changed in: nova
   Importance: Undecided => Critical

** Changed in: nova
   Status: New => Fix Committed

** Changed in: neutron
   Status: New => Invalid

** Changed in: os-vif
   Status: New => Invalid

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1612281

Title:
  Neutron Linuxbridge jobs failing with 'operation failed: failed to
  read XML'

Status in neutron:
  Invalid
Status in OpenStack Compute (nova):
  Fix Committed
Status in os-vif:
  Invalid

Bug description:
  Example: http://logs.openstack.org/64/353664/2/check/gate-tempest-dsvm-neutron-linuxbridge/591295c/console.html#_2016-08-11_12_57_42_191711

  2016-08-11 12:57:42.191740 | Traceback (most recent call last):
  2016-08-11 12:57:42.191760 |   File "tempest/test.py", line 106, in 
wrapper
  2016-08-11 12:57:42.191779 | return f(self, *func_args, **func_kwargs)
  2016-08-11 12:57:42.191814 |   File 
"tempest/scenario/test_server_advanced_ops.py", line 90, in 
test_server_sequence_suspend_resume
  2016-08-11 12:57:42.191825 | 'ACTIVE')
  2016-08-11 12:57:42.191851 |   File "tempest/common/waiters.py", line 75, 
in wait_for_server_status
  2016-08-11 12:57:42.191865 | server_id=server_id)
  2016-08-11 12:57:42.191905 | tempest.exceptions.BuildErrorException: 
Server 15dcd67e-dd8a-4805-b14b-798c6d2a6e87 failed to build and is in ERROR 
status
  2016-08-11 12:57:42.191942 | Details: {u'message': u'operation failed: 
failed to read XML', u'created': u'2016-08-11T12:52:28Z', u'code': 500}

  In nova-cpu logs, we see the same error:
  http://logs.openstack.org/64/353664/2/check/gate-tempest-dsvm-neutron-linuxbridge/591295c/logs/screen-n-cpu.txt.gz?level=TRACE#_2016-08-11_12_55_21_025

  Note that the log contains other suspicious errors, like:

  - privsep unexpected errors: 
http://logs.openstack.org/64/353664/2/check/gate-tempest-dsvm-neutron-linuxbridge/591295c/logs/screen-n-cpu.txt.gz?level=TRACE#_2016-08-11_12_25_47_284
  - libvirt failing to locate brq: 
http://logs.openstack.org/64/353664/2/check/gate-tempest-dsvm-neutron-linuxbridge/591295c/logs/screen-n-cpu.txt.gz?level=TRACE#_2016-08-11_12_26_13_346

  Agent logs seem more or less clean.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1612281/+subscriptions



[Yahoo-eng-team] [Bug 1612433] Re: neutron-db-manage autogenerate is generating empty upgrades

2016-08-11 Thread Shashank Hegde
** Also affects: networking-arista
   Importance: Undecided
   Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1612433

Title:
  neutron-db-manage autogenerate is generating empty upgrades

Status in networking-arista:
  New
Status in neutron:
  Confirmed

Bug description:
  The alembic autogenerate wrapper,

neutron-db-manage revision -m "description" --[contract|expand]

  is no longer collecting model/migration diffs and is generating empty
  upgrade scripts.

  Not sure when this broke.

To manage notifications about this bug go to:
https://bugs.launchpad.net/networking-arista/+bug/1612433/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1495440] Re: bulk delete improvements

2016-08-11 Thread OpenStack Infra
Reviewed:  https://review.openstack.org/263609
Committed: 
https://git.openstack.org/cgit/openstack/python-neutronclient/commit/?id=b16c9a8d3d70fe2ec69ab44b740011dc2b2bc097
Submitter: Jenkins
Branch:master

commit b16c9a8d3d70fe2ec69ab44b740011dc2b2bc097
Author: reedip 
Date:   Tue Jan 5 16:32:36 2016 +0900

Add support for Bulk Delete in NeutronClient

The following patch adds support for BulkDelete in NeutronClient.
Currently, the core existing Neutron CLIs are going to support
Bulk Deletion in NeutronClient. However, if any extension does not
require Bulk Delete, it can be disabled by updating the
class attribute 'bulk_delete'.

DocImpact
Depends-On: Ib23d1e53947b5dffcff8db0dde77cae0a0b31243
Change-Id: I3b8a05698625baad3906784e3ecffb0f0242d660
Closes-Bug: #1495440
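
As a purely illustrative sketch of the opt-out the commit message describes: commands support bulk deletion by default, and an extension can disable it via the 'bulk_delete' class attribute. The attribute name comes from the commit message; the class names and method below are hypothetical, not neutronclient's actual code.

```python
class DeleteCommand(object):
    # Core CLIs accept several resource IDs at once by default.
    bulk_delete = True

    def run(self, ids):
        # An extension that opted out rejects more than one ID.
        if not self.bulk_delete and len(ids) > 1:
            raise ValueError("this command does not support bulk delete")
        return ["deleted %s" % i for i in ids]

class DeleteLegacyResource(DeleteCommand):
    # Hypothetical extension that does not support bulk deletion.
    bulk_delete = False
```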


** Changed in: python-neutronclient
   Status: In Progress => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1495440

Title:
  bulk delete improvements

Status in neutron:
  Won't Fix
Status in python-neutronclient:
  Fix Released

Bug description:
  While trying to delete multiple firewall rules using the CLI by passing
  the firewall rule ID multiple times, it deletes only the first firewall
  rule ID:

  stack@hdp-001:~$ neutron
  (neutron) firewall-rule-list
  +--------------------------------------+-----------------+--------------------+-----------------------------+---------+
  | id                                   | name            | firewall_policy_id | summary                     | enabled |
  +--------------------------------------+-----------------+--------------------+-----------------------------+---------+
  | 8c4ea5c6-a6e4-43ab-a503-0a2265119238 | test1491637     |                    | TCP,                        | True    |
  |                                      |                 |                    |  source: none(none),        |         |
  |                                      |                 |                    |  dest: none(none),          |         |
  |                                      |                 |                    |  allow                      |         |
  | b8c1c061-8f92-482d-94d3-678f42c7ccd7 | rayrafw2        |                    | ICMP,                       | True    |
  |                                      |                 |                    |  source: none(none),        |         |
  |                                      |                 |                    |  dest: none(none),          |         |
  |                                      |                 |                    |  allow                      |         |
  | ba35dde7-8b07-4ba1-8338-496962c83dbc | testrule1491637 |                    | UDP,                        | True    |
  |                                      |                 |                    |  source: 10.25.10.2/32(80), |         |
  |                                      |                 |                    |  dest: none(none),          |         |
  |                                      |                 |                    |  deny                       |         |
  +--------------------------------------+-----------------+--------------------+-----------------------------+---------+
  (neutron) firewall-rule-delete 8c4ea5c6-a6e4-43ab-a503-0a2265119238 b8c1c061-8f92-482d-94d3-678f42c7ccd7
  Deleted firewall_rule: 8c4ea5c6-a6e4-43ab-a503-0a2265119238
  (neutron) firewall-rule-list
  +--------------------------------------+-----------------+--------------------+-----------------------------+---------+
  | id                                   | name            | firewall_policy_id | summary                     | enabled |
  +--------------------------------------+-----------------+--------------------+-----------------------------+---------+
  | b8c1c061-8f92-482d-94d3-678f42c7ccd7 | rayrafw2        |                    | ICMP,                       | True    |
  |                                      |                 |                    |  source: none(none),        |         |
  |                                      |                 |                    |  dest: none(none),          |         |
  |                                      |                 |                    |  allow                      |         |
  | ba35dde7-8b07-4ba1-8338-496962c83dbc | testrule1491637 |                    | UDP,                        | True    |
  |                                      |                 |                    |  source: 10.25.10.2/32(80), |         |
  |                                      |                 |                    |  dest: none(none),          |         |
  |                                      |                 |                    |  deny                       |         |
  +--------------------------------------+-----------------+--------------------+-----------------------------+---------+

[Yahoo-eng-team] [Bug 1612433] [NEW] neutron-db-manage autogenerate is generating empty upgrades

2016-08-11 Thread Henry Gessau
Public bug reported:

The alembic autogenerate wrapper,

  neutron-db-manage revision -m "description" --[contract|expand]

is no longer collecting model/migration diffs and is generating empty
upgrade scripts.

Not sure when this broke.
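
One way to surface this regression early would be a hook that refuses to write an empty script. 'process_revision_directives' is Alembic's documented autogenerate hook point; its wiring into neutron-db-manage is an assumption, not actual neutron code.

```python
def process_revision_directives(context, revision, directives):
    # Abort when autogenerate collected no model/migration diffs,
    # instead of silently writing an empty upgrade script.
    script = directives[0]
    if script.upgrade_ops.is_empty():
        raise RuntimeError("autogenerate produced an empty upgrade; "
                           "model/migration diffs were not collected")
```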

** Affects: neutron
 Importance: Medium
 Status: Confirmed


** Tags: db

** Tags added: db

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1612433

Title:
  neutron-db-manage autogenerate is generating empty upgrades

Status in neutron:
  Confirmed

Bug description:
  The alembic autogenerate wrapper,

neutron-db-manage revision -m "description" --[contract|expand]

  is no longer collecting model/migration diffs and is generating empty
  upgrade scripts.

  Not sure when this broke.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1612433/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1369465] Re: [SRU] nova resize doesn't resize(extend) rbd disk files when using rbd disk backend

2016-08-11 Thread James Page
This bug was fixed in the package nova - 2:12.0.4-0ubuntu1~cloud1
---

 nova (2:12.0.4-0ubuntu1~cloud1) trusty-liberty; urgency=medium
 .
   * Backport fix for image resize bug (LP: #1369465)
 - d/p/libvirt-Split-out-resize_image-logic-from-create_ima.patch


** Changed in: cloud-archive/liberty
   Status: Fix Committed => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1369465

Title:
  [SRU] nova resize doesn't resize(extend) rbd disk files when using rbd
  disk backend

Status in Ubuntu Cloud Archive:
  Fix Released
Status in Ubuntu Cloud Archive liberty series:
  Fix Released
Status in OpenStack Compute (nova):
  Fix Released
Status in OpenStack Compute (nova) liberty series:
  Fix Committed
Status in nova package in Ubuntu:
  Fix Released
Status in nova source package in Wily:
  Won't Fix

Bug description:
  [Impact]

Instance resize does not work if the target host has a cached copy of
the root disk. The resize will silently fail but be displayed as
successful in Nova.

  [Test Case]

1 deploy nova-compute with RBDImageBackend enabled
2 boot an instance from a QCOW2 image (to guarantee it gets downloaded for 
reformat prior to re-upload to ceph)
3 nova resize using flavor with larger root disk
4 wait for instance resize migration to complete
5 verify root disk actually resized by checking /proc/partitions in vm
6 do nova resize-confirm
7 repeat steps 3-6

  [Regression Potential]

   * None

  == original description below ==

  tested with nova trunk commit eb860c2f219b79e4f4c5984415ee433145197570

  Configured Nova to use rbd disk backend

  nova.conf

  [libvirt]
  images_type=rbd

  Instances boot successfully and instance disks are in rbd pools. When
  performing a nova resize on an existing instance, memory and CPU change
  to the new flavor's values but the instance disk size doesn't change.
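
The missing step can be sketched as follows. The helper name and backend interface are hypothetical (the actual fix split resize_image logic out of create_image in the libvirt driver); the point is only that the backend image must be extended to the new flavor's root disk size after a resize.

```python
GB = 1024 ** 3

def resize_image(backend, current_size, new_root_gb):
    # Grow the backend image (e.g. via 'rbd resize' for the RBD backend)
    # when the new flavor's root disk is larger; never shrink.
    target = new_root_gb * GB
    if target > current_size:
        backend.resize(target)
        return target
    return current_size
```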

To manage notifications about this bug go to:
https://bugs.launchpad.net/cloud-archive/+bug/1369465/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1612403] [NEW] Cannot filter OVSDB columns, only tables

2016-08-11 Thread Omer Anson
Public bug reported:

The current ovsdb connection class
(neutron.agent.ovsdb.native.connection.Connection) allows filtering
OVSDB tables, but not columns. Filtering columns may allow a performance
gain when only specific columns in a table are accessed.

Specifically, this is a feature we are trying to use in Dragonflow[1],
in class DFConnection and table columns
ovsdb_monitor_table_filter_default.

[1]
https://github.com/openstack/dragonflow/blob/master/dragonflow/db/drivers/ovsdb_vswitch_impl.py
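
A purely illustrative model of the requested behaviour: today the Connection class can filter whole tables, and the report asks for column-level filtering as well. The dict format and function below are hypothetical, not the neutron API.

```python
MONITORED = {
    'Bridge': None,                   # None -> monitor every column
    'Interface': ['name', 'ofport'],  # list -> monitor only these columns
}

def is_monitored(table, column):
    # Tables absent from the filter are not monitored at all.
    if table not in MONITORED:
        return False
    columns = MONITORED[table]
    return columns is None or column in columns
```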

** Affects: neutron
 Importance: Undecided
 Assignee: Omer Anson (omer-anson)
 Status: New


** Tags: rfe

** Changed in: neutron
 Assignee: (unassigned) => Omer Anson (omer-anson)

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1612403

Title:
  Cannot filter OVSDB columns, only tables

Status in neutron:
  New

Bug description:
  The current ovsdb connection class
  (neutron.agent.ovsdb.native.connection.Connection) allows filtering
  OVSDB tables, but not columns. Filtering columns may allow a
  performance gain when only specific columns in a table are accessed.

  Specifically, this is a feature we are trying to use in Dragonflow[1],
  in class DFConnection and table columns
  ovsdb_monitor_table_filter_default.

  [1]
  
https://github.com/openstack/dragonflow/blob/master/dragonflow/db/drivers/ovsdb_vswitch_impl.py

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1612403/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1612353] [NEW] [master] instance launch failing on ESXi and KVM

2016-08-11 Thread Prashant Shetty
Public bug reported:

setup:

1 controller
2 network nodes(q-dhcp)
1 ESXi nova compute
3 KVM ubuntu compute

Launching a cirros instance fails on both ESXi and KVM.

vmware@cntr1:~$ nova service-list
+----+------------------+-----------+----------+---------+-------+------------------------+-----------------+
| Id | Binary           | Host      | Zone     | Status  | State | Updated_at             | Disabled Reason |
+----+------------------+-----------+----------+---------+-------+------------------------+-----------------+
| 5  | nova-conductor   | cntr1     | internal | enabled | up    | 2016-08-11T16:36:49.00 | -               |
| 7  | nova-compute     | esx-comp1 | nova     | enabled | up    | 2016-08-11T16:36:41.00 | -               |
| 8  | nova-compute     | kvm-comp3 | nova     | enabled | up    | 2016-08-11T16:36:41.00 | -               |
| 9  | nova-compute     | kvm-comp2 | nova     | enabled | up    | 2016-08-11T16:36:41.00 | -               |
| 10 | nova-compute     | kvm-comp1 | nova     | enabled | up    | 2016-08-11T16:36:41.00 | -               |
| 11 | nova-scheduler   | cntr1     | internal | enabled | up    | 2016-08-11T16:36:46.00 | -               |
| 12 | nova-consoleauth | cntr1     | internal | enabled | up    | 2016-08-11T16:36:46.00 | -               |
+----+------------------+-----------+----------+---------+-------+------------------------+-----------------+
vmware@cntr1:~$
 
n-cpu.log:2016-08-11 21:56:06.902 31615 DEBUG oslo_concurrency.lockutils 
[req-ba3c3657-1bde-4afa-821b-4abc0b5d514b admin demo] Lock 
"d7316f88-1cce-471c-90ac-3e8d9f405cbd" acquired by 
"nova.compute.manager._locked_do_build_and_run_instance" :: waited 0.000s inner 
/usr/local/lib/python2.7/dist-packages/oslo_concurrency/lockutils.py:270
n-cpu.log:2016-08-11 21:56:06.930 31615 DEBUG nova.compute.manager 
[req-ba3c3657-1bde-4afa-821b-4abc0b5d514b admin demo] [instance: 
d7316f88-1cce-471c-90ac-3e8d9f405cbd] Starting instance... 
_do_build_and_run_instance /opt/stack/nova/nova/compute/manager.py:1749
n-cpu.log:2016-08-11 21:56:07.035 31615 INFO nova.compute.claims 
[req-ba3c3657-1bde-4afa-821b-4abc0b5d514b admin demo] [instance: 
d7316f88-1cce-471c-90ac-3e8d9f405cbd] Attempting claim: memory 512 MB, disk 1 
GB, vcpus 1 CPU
n-cpu.log:2016-08-11 21:56:07.035 31615 INFO nova.compute.claims 
[req-ba3c3657-1bde-4afa-821b-4abc0b5d514b admin demo] [instance: 
d7316f88-1cce-471c-90ac-3e8d9f405cbd] Total memory: 128918 MB, used: 512.00 MB
n-cpu.log:2016-08-11 21:56:07.036 31615 INFO nova.compute.claims 
[req-ba3c3657-1bde-4afa-821b-4abc0b5d514b admin demo] [instance: 
d7316f88-1cce-471c-90ac-3e8d9f405cbd] memory limit: 193377.00 MB, free: 
192865.00 MB
n-cpu.log:2016-08-11 21:56:07.036 31615 INFO nova.compute.claims 
[req-ba3c3657-1bde-4afa-821b-4abc0b5d514b admin demo] [instance: 
d7316f88-1cce-471c-90ac-3e8d9f405cbd] Total disk: 226 GB, used: 0.00 GB
n-cpu.log:2016-08-11 21:56:07.036 31615 INFO nova.compute.claims 
[req-ba3c3657-1bde-4afa-821b-4abc0b5d514b admin demo] [instance: 
d7316f88-1cce-471c-90ac-3e8d9f405cbd] disk limit: 226.00 GB, free: 226.00 GB
n-cpu.log:2016-08-11 21:56:07.036 31615 INFO nova.compute.claims 
[req-ba3c3657-1bde-4afa-821b-4abc0b5d514b admin demo] [instance: 
d7316f88-1cce-471c-90ac-3e8d9f405cbd] Total vcpu: 16 VCPU, used: 0.00 VCPU
n-cpu.log:2016-08-11 21:56:07.037 31615 INFO nova.compute.claims 
[req-ba3c3657-1bde-4afa-821b-4abc0b5d514b admin demo] [instance: 
d7316f88-1cce-471c-90ac-3e8d9f405cbd] vcpu limit not specified, defaulting to 
unlimited
n-cpu.log:2016-08-11 21:56:07.037 31615 INFO nova.compute.claims 
[req-ba3c3657-1bde-4afa-821b-4abc0b5d514b admin demo] [instance: 
d7316f88-1cce-471c-90ac-3e8d9f405cbd] Claim successful
n-cpu.log:2016-08-11 21:56:07.278 31615 DEBUG nova.compute.manager 
[req-ba3c3657-1bde-4afa-821b-4abc0b5d514b admin demo] [instance: 
d7316f88-1cce-471c-90ac-3e8d9f405cbd] Start building networks asynchronously 
for instance. _build_resources /opt/stack/nova/nova/compute/manager.py:2016
n-cpu.log:2016-08-11 21:56:07.400 31615 DEBUG nova.compute.manager 
[req-ba3c3657-1bde-4afa-821b-4abc0b5d514b admin demo] [instance: 
d7316f88-1cce-471c-90ac-3e8d9f405cbd] Allocating IP information in the 
background. _allocate_network_async /opt/stack/nova/nova/compute/manager.py:1383
n-cpu.log:2016-08-11 21:56:07.401 31615 DEBUG oslo_concurrency.lockutils 
[req-ba3c3657-1bde-4afa-821b-4abc0b5d514b admin demo] Acquired semaphore 
"refresh_cache-d7316f88-1cce-471c-90ac-3e8d9f405cbd" lock 
/usr/local/lib/python2.7/dist-packages/oslo_concurrency/lockutils.py:212
n-cpu.log:2016-08-11 21:56:07.417 31615 DEBUG nova.compute.manager 
[req-ba3c3657-1bde-4afa-821b-4abc0b5d514b admin demo] [instance: 
d7316f88-1cce-471c-90ac-3e8d9f405cbd] Start building block device mappings for 
instance. _build_resources /opt/stack/nova/nova/compute/manager.py:2042
n-cpu.log:2016-08-11 21:56:07.584 31615 DEBUG nova.compute.manager 

[Yahoo-eng-team] [Bug 1612341] [NEW] cpu thread pinning flavor metadef

2016-08-11 Thread Stephen Finucane
Public bug reported:

Similar to #1476696. Flavor namespace in glance metadefs should be
extended with "hw:cpu_thread_policy" or "hw_cpu_thread_policy" property
to be able to configure Nova::Flavor, Glance::Image,
Cinder::Volume(image) using that property
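
A hedged sketch of the requested metadef property: the property name comes from the report, and the enum values are nova's known CPU thread policies, but the exact metadef schema shown here is an assumption.

```python
# Illustrative glance metadef property fragment as a Python dict;
# the real metadef would live in a JSON namespace file.
CPU_THREAD_POLICY_METADEF = {
    "hw:cpu_thread_policy": {
        "title": "CPU Thread Pinning Policy",
        "type": "string",
        "enum": ["prefer", "isolate", "require"],
    },
}
```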

** Affects: glance
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to Glance.
https://bugs.launchpad.net/bugs/1612341

Title:
  cpu thread pinning flavor metadef

Status in Glance:
  New

Bug description:
  Similar to #1476696. Flavor namespace in glance metadefs should be
  extended with "hw:cpu_thread_policy" or "hw_cpu_thread_policy"
  property to be able to configure Nova::Flavor, Glance::Image,
  Cinder::Volume(image) using that property

To manage notifications about this bug go to:
https://bugs.launchpad.net/glance/+bug/1612341/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1611237] Re: Restart neutron-openvswitch-agent get ERROR "Switch connection timeout"

2016-08-11 Thread Brian Haley
@Yujie - since it is the Ryu code complaining, you will need to work with
them; that code is not in the neutron tree.

** Changed in: neutron
   Status: New => Invalid

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1611237

Title:
  Restart neutron-openvswitch-agent get ERROR "Switch connection
  timeout"

Status in neutron:
  Invalid

Bug description:
  Environment: devstack  master, ubuntu 14.04

  After ./stack.sh finished, kill the neutron-openvswitch-agent process
  and then start it by:
  /usr/bin/python /usr/local/bin/neutron-openvswitch-agent --config-file /etc/neutron/neutron.conf --config-file /etc/neutron/plugins/ml2/ml2_conf.ini

  The log shows :
  2016-08-08 11:02:06.346 ERROR ryu.lib.hub [-] hub: uncaught exception: 
Traceback (most recent call last):
File "/usr/local/lib/python2.7/dist-packages/ryu/lib/hub.py", line 54, in 
_launch
  return func(*args, **kwargs)
File "/usr/local/lib/python2.7/dist-packages/ryu/controller/controller.py", 
line 97, in __call__
  self.ofp_ssl_listen_port)
File "/usr/local/lib/python2.7/dist-packages/ryu/controller/controller.py", 
line 120, in server_loop
  datapath_connection_factory)
File "/usr/local/lib/python2.7/dist-packages/ryu/lib/hub.py", line 117, in 
__init__
  self.server = eventlet.listen(listen_info)
File "/usr/local/lib/python2.7/dist-packages/eventlet/convenience.py", line 
43, in listen
  sock.bind(addr)
File "/usr/lib/python2.7/socket.py", line 224, in meth
  return getattr(self._sock,name)(*args)
  error: [Errno 98] Address already in use

  and
  ERROR neutron.plugins.ml2.drivers.openvswitch.agent.openflow.native.ofswitch 
[-] Switch connection timeout

  In Kilo I could start the ovs-agent this way correctly; I do not know
  whether this is the right way to start the ovs-agent in master.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1611237/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1487477] Re: Mess in live-migration compute-manager and drivers code

2016-08-11 Thread Markus Zoeller (markus_z)
As discussed with Timofey in IRC #nova, this needs to be driven most
likely by a blueprint.

** Changed in: nova
   Status: In Progress => Opinion

** Changed in: nova
   Importance: Low => Wishlist

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1487477

Title:
  Mess in live-migration compute-manager and drivers code

Status in OpenStack Compute (nova):
  Opinion

Bug description:
  There is a _live_migration_cleanup_flags method in the compute manager class 
which decides whether cleanup is needed after a live migration completes. It 
accepts 2 params, from doc: 
   :param block_migration: if true, it was a block migration
   :param migrate_data: implementation specific data
  The problem is that the current compute manager code is libvirt-specific.
  It operates on values in the migrate_data dictionary that are valid only for 
the libvirt driver implementation. 
  This doesn't cause any bug yet because the other drivers don't implement the 
cleanup method at all. 
  If anyone decides to implement it, live migration will start to fail, and 
there is no CI job to catch that. 

  live_migration_cleanup_flags should become hypervisor-specific, and we
  should move it from the compute manager to the drivers.
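
The proposed refactor can be sketched as below: the cleanup decision moves into the virt driver, so the compute manager stops peeking at libvirt-specific keys in migrate_data. Class and method bodies here are illustrative, not nova's actual code.

```python
class ComputeDriver(object):
    def live_migration_cleanup_flags(self, block_migration, migrate_data):
        """Return (do_cleanup, destroy_disks) for this hypervisor."""
        # Safe default for drivers that implement no cleanup at all.
        return False, False

class FakeLibvirtDriver(ComputeDriver):
    def live_migration_cleanup_flags(self, block_migration, migrate_data):
        # Only this driver interprets its own migrate_data keys.
        shared = migrate_data.get('is_shared_instance_path', False)
        return (block_migration or not shared), block_migration
```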

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1487477/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1494207] Re: console proxies options in [DEFAULT] group are confusing

2016-08-11 Thread Markus Zoeller (markus_z)
This bug report doesn't describe a failure in the behavior of Nova.
It's a personal todo item which doesn't need the overhead of a bug
report. Because of this, I'm closing this report as invalid. This
shouldn't stop you though to do your item and push it as a review.

** Changed in: nova
   Status: In Progress => Opinion

** Changed in: nova
   Importance: Low => Wishlist

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1494207

Title:
  console proxies options in [DEFAULT] group are confusing

Status in OpenStack Compute (nova):
  Opinion

Bug description:
  Now config options of different consoles using baseproxy reside in
  [DEFAULT] group, which is very confusing given the fact how they are
  named, e.g.:

  cfg.StrOpt('cert',
     default='self.pem',
     help='SSL certificate file'),
  cfg.StrOpt('key',
     help='SSL key file (if separate from cert)'),

  one would probably expect these options to set SSL key/cert for other
  places in Nova as well (e.g. API), but those are used solely in
  console proxies.

  We could probably give these options their own group in the config and
  use deprecated_name/deprecated_group for backwards compatibility with
  existing config files.
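
A sketch of the suggested regrouping, assuming oslo.config's deprecated_name/deprecated_group aliases; the group name and renamed options are illustrative, not nova's actual code. This is a config-registration fragment only.

```python
from oslo_config import cfg

# Hypothetical dedicated group; old [DEFAULT] cert/key settings keep
# working via the deprecated_* aliases.
console_group = cfg.OptGroup('console_proxy', title='Console proxy options')
opts = [
    cfg.StrOpt('ssl_cert', default='self.pem',
               deprecated_name='cert', deprecated_group='DEFAULT',
               help='SSL certificate file'),
    cfg.StrOpt('ssl_key',
               deprecated_name='key', deprecated_group='DEFAULT',
               help='SSL key file (if separate from cert)'),
]

CONF = cfg.CONF
CONF.register_group(console_group)
CONF.register_opts(opts, group=console_group)
```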

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1494207/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1612313] [NEW] maas datasource needs support for vendor-data

2016-08-11 Thread Scott Moser
Public bug reported:

maas datasource does not support vendor-data.
We would like to take advantage of vendordata in maas, and thus cloud-init 
needs it.

** Affects: cloud-init
 Importance: Medium
 Status: Confirmed

** Changed in: cloud-init
   Status: New => Confirmed

** Changed in: cloud-init
   Importance: Undecided => Medium

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to cloud-init.
https://bugs.launchpad.net/bugs/1612313

Title:
  maas datasource needs support for vendor-data

Status in cloud-init:
  Confirmed

Bug description:
  maas datasource does not support vendor-data.
  We would like to take advantage of vendordata in maas, and thus cloud-init 
needs it.

To manage notifications about this bug go to:
https://bugs.launchpad.net/cloud-init/+bug/1612313/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1612281] Re: Neutron Linuxbridge jobs failing with 'operation failed: failed to read XML'

2016-08-11 Thread Henry Gessau
** Also affects: os-vif
   Importance: Undecided
   Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1612281

Title:
  Neutron Linuxbridge jobs failing with 'operation failed: failed to
  read XML'

Status in neutron:
  New
Status in OpenStack Compute (nova):
  New
Status in os-vif:
  New

Bug description:
  Example: http://logs.openstack.org/64/353664/2/check/gate-tempest-dsvm-neutron-linuxbridge/591295c/console.html#_2016-08-11_12_57_42_191711

  2016-08-11 12:57:42.191740 | Traceback (most recent call last):
  2016-08-11 12:57:42.191760 |   File "tempest/test.py", line 106, in 
wrapper
  2016-08-11 12:57:42.191779 | return f(self, *func_args, **func_kwargs)
  2016-08-11 12:57:42.191814 |   File 
"tempest/scenario/test_server_advanced_ops.py", line 90, in 
test_server_sequence_suspend_resume
  2016-08-11 12:57:42.191825 | 'ACTIVE')
  2016-08-11 12:57:42.191851 |   File "tempest/common/waiters.py", line 75, 
in wait_for_server_status
  2016-08-11 12:57:42.191865 | server_id=server_id)
  2016-08-11 12:57:42.191905 | tempest.exceptions.BuildErrorException: 
Server 15dcd67e-dd8a-4805-b14b-798c6d2a6e87 failed to build and is in ERROR 
status
  2016-08-11 12:57:42.191942 | Details: {u'message': u'operation failed: 
failed to read XML', u'created': u'2016-08-11T12:52:28Z', u'code': 500}

  In nova-cpu logs, we see the same error:
  http://logs.openstack.org/64/353664/2/check/gate-tempest-dsvm-neutron-linuxbridge/591295c/logs/screen-n-cpu.txt.gz?level=TRACE#_2016-08-11_12_55_21_025

  Note that the log contains other suspicious errors, like:

  - privsep unexpected errors: 
http://logs.openstack.org/64/353664/2/check/gate-tempest-dsvm-neutron-linuxbridge/591295c/logs/screen-n-cpu.txt.gz?level=TRACE#_2016-08-11_12_25_47_284
  - libvirt failing to locate brq: 
http://logs.openstack.org/64/353664/2/check/gate-tempest-dsvm-neutron-linuxbridge/591295c/logs/screen-n-cpu.txt.gz?level=TRACE#_2016-08-11_12_26_13_346

  Agent logs seem more or less clean.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1612281/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1604428] Re: NoSuchOptError: no such option in group neutron: auth_plugin

2016-08-11 Thread Markus Zoeller (markus_z)
That was a Mitaka-only bug, as the bug in Newton was fixed by bug
1574988.

** Changed in: nova
   Status: In Progress => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1604428

Title:
  NoSuchOptError: no such option in group neutron: auth_plugin

Status in OpenStack Compute (nova):
  Fix Released

Bug description:
  I'm running OpenStack Mitaka on a 3-node installation (controller,
  compute and network).  I installed it first under Ubuntu 14.04 and
  later under Ubuntu 16.04, but both show the same error when I try to
  launch an instance.

  The error I get is the same using horizon or the command "nova boot
  --image cirros --flavor 1 --nic net-name=test erste".

  The error message from the command line is: 
  ERROR (ClientException): Unexpected API Error. Please report this at 
http://bugs.launchpad.net/nova/ and attach the Nova API log if possible.
   (HTTP 500) (Request-ID: 
req-2c2bb960-feb8-45ed-a3d1-65b8833d9228)

  I'm using the versions below:
   dpkg -l | grep nova
  ii  nova-api 2:13.0.0-0ubuntu5
 all  OpenStack Compute - API frontend
  ii  nova-common  2:13.0.0-0ubuntu5
 all  OpenStack Compute - common files
  ii  nova-conductor   2:13.0.0-0ubuntu5
 all  OpenStack Compute - conductor service
  ii  nova-consoleauth 2:13.0.0-0ubuntu5
 all  OpenStack Compute - Console Authenticator
  ii  nova-novncproxy  2:13.0.0-0ubuntu5
 all  OpenStack Compute - NoVNC proxy
  ii  nova-scheduler   2:13.0.0-0ubuntu5
 all  OpenStack Compute - virtual machine scheduler
  ii  python-nova  2:13.0.0-0ubuntu5
 all  OpenStack Compute Python libraries
  ii  python-novaclient2:3.3.1-2
 all  client library for OpenStack Compute API - Python 2.7

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1604428/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1612281] [NEW] Neutron Linuxbridge jobs failing with 'operation failed: failed to read XML'

2016-08-11 Thread Ihar Hrachyshka
Public bug reported:

Example: http://logs.openstack.org/64/353664/2/check/gate-tempest-dsvm-neutron-linuxbridge/591295c/console.html#_2016-08-11_12_57_42_191711

2016-08-11 12:57:42.191740 | Traceback (most recent call last):
2016-08-11 12:57:42.191760 |   File "tempest/test.py", line 106, in wrapper
2016-08-11 12:57:42.191779 | return f(self, *func_args, **func_kwargs)
2016-08-11 12:57:42.191814 |   File 
"tempest/scenario/test_server_advanced_ops.py", line 90, in 
test_server_sequence_suspend_resume
2016-08-11 12:57:42.191825 | 'ACTIVE')
2016-08-11 12:57:42.191851 |   File "tempest/common/waiters.py", line 75, 
in wait_for_server_status
2016-08-11 12:57:42.191865 | server_id=server_id)
2016-08-11 12:57:42.191905 | tempest.exceptions.BuildErrorException: Server 
15dcd67e-dd8a-4805-b14b-798c6d2a6e87 failed to build and is in ERROR status
2016-08-11 12:57:42.191942 | Details: {u'message': u'operation failed: 
failed to read XML', u'created': u'2016-08-11T12:52:28Z', u'code': 500}

In nova-cpu logs, we see the same error:
http://logs.openstack.org/64/353664/2/check/gate-tempest-dsvm-neutron-linuxbridge/591295c/logs/screen-n-cpu.txt.gz?level=TRACE#_2016-08-11_12_55_21_025

Note that the log contains other suspicious errors, like:

- privsep unexpected errors: 
http://logs.openstack.org/64/353664/2/check/gate-tempest-dsvm-neutron-linuxbridge/591295c/logs/screen-n-cpu.txt.gz?level=TRACE#_2016-08-11_12_25_47_284
- libvirt failing to locate brq: 
http://logs.openstack.org/64/353664/2/check/gate-tempest-dsvm-neutron-linuxbridge/591295c/logs/screen-n-cpu.txt.gz?level=TRACE#_2016-08-11_12_26_13_346

Agent logs seem more or less clean.

** Affects: neutron
 Importance: Critical
 Status: New

** Affects: nova
 Importance: Undecided
 Status: New


** Tags: gate-failure linuxbridge

** Also affects: nova
   Importance: Undecided
   Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1612281

Title:
  Neutron Linuxbridge jobs failing with 'operation failed: failed to
  read XML'

Status in neutron:
  New
Status in OpenStack Compute (nova):
  New

Bug description:
  Example: http://logs.openstack.org/64/353664/2/check/gate-tempest-dsvm-neutron-linuxbridge/591295c/console.html#_2016-08-11_12_57_42_191711

  2016-08-11 12:57:42.191740 | Traceback (most recent call last):
  2016-08-11 12:57:42.191760 |   File "tempest/test.py", line 106, in 
wrapper
  2016-08-11 12:57:42.191779 | return f(self, *func_args, **func_kwargs)
  2016-08-11 12:57:42.191814 |   File 
"tempest/scenario/test_server_advanced_ops.py", line 90, in 
test_server_sequence_suspend_resume
  2016-08-11 12:57:42.191825 | 'ACTIVE')
  2016-08-11 12:57:42.191851 |   File "tempest/common/waiters.py", line 75, 
in wait_for_server_status
  2016-08-11 12:57:42.191865 | server_id=server_id)
  2016-08-11 12:57:42.191905 | tempest.exceptions.BuildErrorException: 
Server 15dcd67e-dd8a-4805-b14b-798c6d2a6e87 failed to build and is in ERROR 
status
  2016-08-11 12:57:42.191942 | Details: {u'message': u'operation failed: 
failed to read XML', u'created': u'2016-08-11T12:52:28Z', u'code': 500}

  In nova-cpu logs, we see the same error:
  http://logs.openstack.org/64/353664/2/check/gate-tempest-dsvm-neutron-linuxbridge/591295c/logs/screen-n-cpu.txt.gz?level=TRACE#_2016-08-11_12_55_21_025

  Note that the log contains other suspicious errors, like:

  - privsep unexpected errors: 
http://logs.openstack.org/64/353664/2/check/gate-tempest-dsvm-neutron-linuxbridge/591295c/logs/screen-n-cpu.txt.gz?level=TRACE#_2016-08-11_12_25_47_284
  - libvirt failing to locate brq: 
http://logs.openstack.org/64/353664/2/check/gate-tempest-dsvm-neutron-linuxbridge/591295c/logs/screen-n-cpu.txt.gz?level=TRACE#_2016-08-11_12_26_13_346

  Agent logs seem more or less clean.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1612281/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1612279] [NEW] Unexpected API exception when launching instance

2016-08-11 Thread Sam
Public bug reported:

Description:

Attempting to launch an instance is giving an unexpected API error. 
Strangely, this is happening only with our qcow2 images.  We have one vmdk 
image 
and that worked fine.  However, trying to convert the qcow2 image to vmdk and
importing it into glance gave the same error.

Steps to Reproduce:

1. Attempt to launch instance from Horizon or command line

Expected result:

Successful instance launch.

Actual result:

Horizon displays an "Unable to create the server" message.
Nova shows an unexpected api error.


Environment:

OpenStack Mitaka release running on centos 7 using KVM with LVM storage
backend and Neutron for networking.

Logs and configuration:

I've attached nova-api.log to show the error message.

Additional information:

Yesterday it was working just fine.  We attempted to install Designate
and uninstalled it; at some point we created a keystone endpoint and
then removed it as well.  Aside from that, nothing in our configuration
should have changed.

** Affects: nova
 Importance: Undecided
 Status: New

** Attachment added: "nova-api.log"
   
https://bugs.launchpad.net/bugs/1612279/+attachment/4719117/+files/nova-api.log

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1612279

Title:
  Unexpected API exception when launching instance

Status in OpenStack Compute (nova):
  New

Bug description:
  Description:

  Attempting to launch an instance is giving an unexpected API error.
  Strangely, this is happening only with our qcow2 images.  We have one vmdk
  image and that worked fine.  However, trying to convert the qcow2 image to
  vmdk and importing it into glance gave the same error.

  Steps to Reproduce:

  1. Attempt to launch instance from Horizon or command line

  Expected result:

  Successful instance launch.

  Actual result:

  Horizon displays an "Unable to create the server" message.
  Nova shows an unexpected api error.

  
  Environment:

  OpenStack Mitaka release running on centos 7 using KVM with LVM
  storage backend and Neutron for networking.

  Logs and configuration:

  I've attached nova-api.log to show the error message.

  Additional information:

  Yesterday it was working just fine.  We attempted to install Designate
  and then uninstalled it; at some point we also created a keystone
  endpoint and then removed it.  Aside from that, nothing in our
  configuration should have changed.

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1612279/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1612119] Re: Unrecognized attribute(s) 'qos_policy_id'

2016-08-11 Thread Miguel Angel Ajo
That looks more like a configuration error; maybe you didn't configure
the QoS service plugin or the QoS ML2 extension. Otherwise you'd see
qos_policy_id in the net-create output.
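For reference, a minimal sketch of the Mitaka-era settings that enable the QoS service plugin and the ML2/agent QoS extensions (the file paths and the pre-existing values shown here are assumptions about the deployment; merge them with whatever is already configured):

```ini
# /etc/neutron/neutron.conf -- enable the QoS service plugin
[DEFAULT]
service_plugins = router,qos

# /etc/neutron/plugins/ml2/ml2_conf.ini -- enable the QoS ML2 extension driver
[ml2]
extension_drivers = port_security,qos

# L2 agent config (e.g. openvswitch_agent.ini) -- enable the agent extension
[agent]
extensions = qos
```

After changing these, neutron-server and the L2 agents need a restart before qos_policy_id shows up on networks and ports.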

** Changed in: neutron
   Status: New => Invalid

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1612119

Title:
  Unrecognized attribute(s) 'qos_policy_id'

Status in neutron:
  Invalid

Bug description:
  centos7
  openstack mitaka

  [root@TEST ~(keystone_admin)]# neutron net-create --prefix 192.168.0.0/24 network
  Created a new network:
  +---+--+
  | Field | Value|
  +---+--+
  | admin_state_up| True |
  | availability_zone_hints   |  |
  | availability_zones|  |
  | created_at| 2016-08-11T08:13:22  |
  | description   |  |
  | id| d20f2f83-059e-426d-8a93-fbbe95a3d53a |
  | ipv4_address_scope|  |
  | ipv6_address_scope|  |
  | mtu   | 1450 |
  | name  | network  |
  | provider:network_type | vxlan|
  | provider:physical_network |  |
  | provider:segmentation_id  | 13   |
  | router:external   | False|
  | shared| False|
  | status| ACTIVE   |
  | subnets   |  |
  | tags  |  |
  | tenant_id | 985f84783f7c401288e11e0eb84a520e |
  | updated_at| 2016-08-11T08:13:22  |
  +---+--+
  [root@TEST ~(keystone_admin)]#
  [root@TEST ~(keystone_admin)]#
  [root@TEST ~(keystone_admin)]#
  [root@TEST ~(keystone_admin)]# neutron net-update --no-qos-policy network
  Unrecognized attribute(s) 'qos_policy_id'
  Neutron server returns request_ids: ['req-fb5211d6-a459-48d7-af0c-48deb3f21487']
  [root@TEST ~(keystone_admin)]#

  /var/log/neutron/server.log

  2016-08-11 16:14:48.558 7238 INFO neutron.api.v2.resource [req-
  fb5211d6-a459-48d7-af0c-48deb3f21487 17770d50d8e643c6a570e11383770d5a
  985f84783f7c401288e11e0eb84a520e - - -] update failed (client error):
  Unrecognized attribute(s) 'qos_policy_id'

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1612119/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1612231] [NEW] providing network-interfaces via meta-data broken in NoCloud

2016-08-11 Thread Scott Moser
Public bug reported:

Originally reported after bug 1577982 was fixed, there is an issue
providing network configuration to the NoCloud datasource via the
meta-data key 'network-interfaces'.

For example:
$ cd /var/lib/cloud/seed/nocloud-net
$ cat meta-data
instance-id: 1470899540
local-hostname: soscared
network-interfaces: |
  auto eth0
  iface eth0 inet static
  hwaddr 00:16:3e:70:e1:04
  address 103.225.10.12
  netmask 255.255.255.0
  gateway 103.225.10.1
  dns-servers 8.8.8.8


It should be noted that without providing 'hwaddr' above, you'll be relying
on kernel or systemd network device naming to have the device named 'eth0'.
If it is not named 'eth0', then the configuration won't work.

So the two options to do that right are:
 a.) provide 'hwaddr' as shown above
 b.) provide 'net.ifnames=0' on the kernel command line
(https://www.freedesktop.org/wiki/Software/systemd/PredictableNetworkInterfaceNames/).

(Note, when run in lxc, you will get eth0 names consistently and you do
not need the hwaddr or kernel command line).

** Affects: cloud-init
 Importance: Medium
 Status: Confirmed

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to cloud-init.
https://bugs.launchpad.net/bugs/1612231

Title:
  providing network-interfaces via meta-data broken in NoCloud

Status in cloud-init:
  Confirmed

Bug description:
  Originally reported after bug 1577982 was fixed, there is an issue
  providing network configuration to the NoCloud datasource via the
  meta-data key 'network-interfaces'.

  For example:
  $ cd /var/lib/cloud/seed/nocloud-net
  $ cat meta-data
  instance-id: 1470899540
  local-hostname: soscared
  network-interfaces: |
    auto eth0
    iface eth0 inet static
    hwaddr 00:16:3e:70:e1:04
    address 103.225.10.12
    netmask 255.255.255.0
    gateway 103.225.10.1
    dns-servers 8.8.8.8

  
  It should be noted that without providing 'hwaddr' above, you'll be
  relying on kernel or systemd network device naming to have the device
  named 'eth0'.  If it is not named 'eth0', then the configuration won't
  work.

  So the two options to do that right are:
   a.) provide 'hwaddr' as shown above
   b.) provide 'net.ifnames=0' on the kernel command line
  (https://www.freedesktop.org/wiki/Software/systemd/PredictableNetworkInterfaceNames/).

  (Note, when run in lxc, you will get eth0 names consistently and you
  do not need the hwaddr or kernel command line).

To manage notifications about this bug go to:
https://bugs.launchpad.net/cloud-init/+bug/1612231/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1612222] [NEW] quota unit test failure

2016-08-11 Thread Kevin Benton
Public bug reported:

Spotted a failure in a unit test unrelated to the patch. It appears to be
the result of resources leaked between tests.

===
FAIL: neutron.tests.unit.quota.test_resource_registry.TestAuxiliaryFunctions.test_set_resources_dirty
tags: worker-0
------
Empty attachments:
  pythonlogging:''
  stderr
  stdout

Traceback (most recent call last):
  File "neutron/tests/unit/quota/test_resource_registry.py", line 90, in setUp
    self.registry.unregister_resources()
  File "neutron/quota/resource_registry.py", line 239, in unregister_resources
    res.unregister_events()
AttributeError: 'CountableResource' object has no attribute 'unregister_events'
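The failure mode can be reproduced with a self-contained sketch. The class and method names below mirror the traceback, but this is an illustration, not the actual neutron code: a registry that assumes every entry is a TrackedResource breaks when another test leaves a CountableResource behind, and one possible defensive fix is a hasattr guard in unregister_resources:

```python
# Illustration only: names mirror the traceback, not the actual neutron code.
class CountableResource:
    """Resource counted on demand; registers no event handlers."""

class TrackedResource:
    """Resource that tracks usage via registered events."""
    def __init__(self):
        self.events_registered = True

    def unregister_events(self):
        self.events_registered = False

class ResourceRegistry:
    def __init__(self):
        self._resources = {}

    def register(self, name, resource):
        self._resources[name] = resource

    def unregister_resources(self):
        for resource in self._resources.values():
            # Guard: a CountableResource leaked from another test has no
            # unregister_events(), which is exactly the AttributeError above.
            if hasattr(resource, 'unregister_events'):
                resource.unregister_events()
        self._resources.clear()

registry = ResourceRegistry()
registry.register('port', CountableResource())   # the "leaked" resource
registry.register('router', TrackedResource())
registry.unregister_resources()                  # no AttributeError raised
```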

** Affects: neutron
 Importance: Undecided
 Assignee: Kevin Benton (kevinbenton)
 Status: In Progress

** Changed in: neutron
 Assignee: (unassigned) => Kevin Benton (kevinbenton)

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1612222

Title:
  quota unit test failure

Status in neutron:
  In Progress

Bug description:
  Spotted a failure in a unit test unrelated to the patch. It appears to
  be the result of resources leaked between tests.

  ===
  FAIL: neutron.tests.unit.quota.test_resource_registry.TestAuxiliaryFunctions.test_set_resources_dirty
  tags: worker-0
  ------
  Empty attachments:
    pythonlogging:''
    stderr
    stdout

  Traceback (most recent call last):
    File "neutron/tests/unit/quota/test_resource_registry.py", line 90, in setUp
      self.registry.unregister_resources()
    File "neutron/quota/resource_registry.py", line 239, in unregister_resources
      res.unregister_events()
  AttributeError: 'CountableResource' object has no attribute 'unregister_events'

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1612222/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1589457] Re: live-migration fails for volume-backed instances with config-drive type vfat

2016-08-11 Thread Pawel Koniszewski
Talked with danpb on IRC, and it looks like we can use block live migration
in such a case, so #1 and #2 are invalid too.

** Changed in: nova
   Status: Triaged => Invalid

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1589457

Title:
  live-migration fails for volume-backed  instances with config-drive
  type vfat

Status in OpenStack Compute (nova):
  Invalid

Bug description:
  Description
  ===

  Volume-backed instances fail to migrate when config-drive is enabled (even
  with vfat).  Migration fails with exception.InvalidSharedStorage during
  check_can_live_migrate_source method execution:
  https://github.com/openstack/nova/blob/545d8d8666389f33601b0b003dec844004694919/nova/virt/libvirt/driver.py#L5388

  The root cause:
  https://github.com/openstack/nova/blob/545d8d8666389f33601b0b003dec844004694919/nova/virt/libvirt/driver.py#L5344
  where the flags value is calculated incorrectly.

  
  Steps to reproduce
  ==
  1. use vfat as config drive format, no shared storage like nfs;
  2. boot instance from volume;
  3. try to live-migrate instance;

  Expected result
  ===
  instance migrated successfully

  Actual result
  =
  live-migration is not even started:
  root@node-1:~# nova live-migration server00 node-4.test.domain.local
  ERROR (BadRequest): Migration pre-check error: Cannot block migrate instance f477e6da-4a04-492b-b7a6-e57b7823d301 with mapped volumes. Selective block device migration feature requires libvirt version 1.2.17 (HTTP 400) (Request-ID: req-4e0fce45-8b7c-43c0-90e7-cc929d2d60a1)

  Environment
  ===

  multinode env, without file based shared storages like NFS.
  driver libvirt/kvm
  openstack branch stable/mitaka,
  should also be valid for master.

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1589457/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1612192] [NEW] L3 DVR: Unable to complete operation on subnet

2016-08-11 Thread John Schwarz
Public bug reported:

There is a new gate failure that can be found using the following
logstash query:

message:"One or more ports have an IP allocation from this subnet" &&
filename:"console.html" && build_queue:"gate"

This seems to be specific to DVR jobs and is separate from [1] (see
comment #7 on that bug report).

[1]: https://bugs.launchpad.net/neutron/+bug/1562878

** Affects: neutron
 Importance: Critical
 Status: New


** Tags: gate-failure l3-dvr-backlog

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1612192

Title:
  L3 DVR: Unable to complete operation on subnet

Status in neutron:
  New

Bug description:
  There is a new gate failure that can be found using the following
  logstash query:

  message:"One or more ports have an IP allocation from this subnet" &&
  filename:"console.html" && build_queue:"gate"

  This seems to be specific to DVR jobs and is separate from [1] (see
  comment #7 on that bug report).

  [1]: https://bugs.launchpad.net/neutron/+bug/1562878

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1612192/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1612186] [NEW] failed to create flavor router

2016-08-11 Thread yong sheng gong
Public bug reported:

[gongysh@fedora23 devstack]$ neutron router-create --flavor-id=5c4016b6-c5ef-4b70-891d-741d376fa96f testrouter2
Request Failed: internal server error while processing your request.
Neutron server returns request_ids: ['req-a1da952c-e4f6-4b09-883d-12a894f6a8d1']


The exception in the log is:

neutron.services.l3_router.service_providers.driver_controller.DriverController._set_router_provider router, precommit_create
2016-08-11 18:48:34.282 2901 ERROR neutron.callbacks.manager Traceback (most recent call last):
2016-08-11 18:48:34.282 2901 ERROR neutron.callbacks.manager   File "/mnt/data3/opt/stack/neutron/neutron/callbacks/manager.py", line 148, in _notify_loop
2016-08-11 18:48:34.282 2901 ERROR neutron.callbacks.manager     callback(resource, event, trigger, **kwargs)
2016-08-11 18:48:34.282 2901 ERROR neutron.callbacks.manager   File "/mnt/data3/opt/stack/neutron/neutron/services/l3_router/service_providers/driver_controller.py", line 81, in _set_router_provider
2016-08-11 18:48:34.282 2901 ERROR neutron.callbacks.manager     drv = self._get_provider_for_create(context, router)
2016-08-11 18:48:34.282 2901 ERROR neutron.callbacks.manager   File "/mnt/data3/opt/stack/neutron/neutron/services/l3_router/service_providers/driver_controller.py", line 160, in _get_provider_for_create
2016-08-11 18:48:34.282 2901 ERROR neutron.callbacks.manager     return self._get_l3_driver_by_flavor(context, router['flavor_id'])
2016-08-11 18:48:34.282 2901 ERROR neutron.callbacks.manager   File "/mnt/data3/opt/stack/neutron/neutron/services/l3_router/service_providers/driver_controller.py", line 164, in _get_l3_driver_by_flavor
2016-08-11 18:48:34.282 2901 ERROR neutron.callbacks.manager     flavor = self._flavor_plugin.get_flavor(context, flavor_id)
2016-08-11 18:48:34.282 2901 ERROR neutron.callbacks.manager   File "/mnt/data3/opt/stack/neutron/neutron/services/l3_router/service_providers/driver_controller.py", line 68, in _flavor_plugin
2016-08-11 18:48:34.282 2901 ERROR neutron.callbacks.manager     constants.FLAVORS]
2016-08-11 18:48:34.282 2901 ERROR neutron.callbacks.manager AttributeError: can't set attribute
2016-08-11 18:48:34.282 2901 ERROR neutron.callbacks.manager
2016-08-11 18:48:34.300 2901 ERROR neutron.api.v2.resource [req-a1da952c-e4f6-4b09-883d-12a894f6a8d1 e5fd88d4cebf44baa9547e45d17248cd 3b9307233b4844c0850bd6625ab8f0e3 - - -] create failed: No details.
2016-08-11 18:48:34.300 2901 ERROR neutron.api.v2.resource Traceback (most recent call last):
2016-08-11 18:48:34.300 2901 ERROR neutron.api.v2.resource   File "/mnt/data3/opt/stack/neutron/neutron/api/v2/resource.py", line 79, in resource
2016-08-11 18:48:34.300 2901 ERROR neutron.api.v2.resource     result = method(request=request, **args)
2016-08-11 18:48:34.300 2901 ERROR neutron.api.v2.resource   File "/mnt/data3/opt/stack/neutron/neutron/api/v2/base.py", line 397, in create
2016-08-11 18:48:34.300 2901 ERROR neutron.api.v2.resource     return self._create(request, body, **kwargs)
2016-08-11 18:48:34.300 2901 ERROR neutron.api.v2.resource   File "/usr/lib/python2.7/site-packages/oslo_db/api.py", line 151, in wrapper
2016-08-11 18:48:34.300 2901 ERROR neutron.api.v2.resource     ectxt.value = e.inner_exc
2016-08-11 18:48:34.300 2901 ERROR neutron.api.v2.resource   File "/usr/lib/python2.7/site-packages/oslo_utils/excutils.py", line 220, in __exit__
2016-08-11 18:48:34.300 2901 ERROR neutron.api.v2.resource     self.force_reraise()
2016-08-11 18:48:34.300 2901 ERROR neutron.api.v2.resource   File "/usr/lib/python2.7/site-packages/oslo_utils/excutils.py", line 196, in force_reraise
2016-08-11 18:48:34.300 2901 ERROR neutron.api.v2.resource     six.reraise(self.type_, self.value, self.tb)
2016-08-11 18:48:34.300 2901 ERROR neutron.api.v2.resource   File "/usr/lib/python2.7/site-packages/oslo_db/api.py", line 139, in wrapper
2016-08-11 18:48:34.300 2901 ERROR neutron.api.v2.resource     return f(*args, **kwargs)
2016-08-11 18:48:34.300 2901 ERROR neutron.api.v2.resource   File "/mnt/data3/opt/stack/neutron/neutron/db/api.py", line 74, in wrapped
2016-08-11 18:48:34.300 2901 ERROR neutron.api.v2.resource     traceback.format_exc())
2016-08-11 18:48:34.300 2901 ERROR neutron.api.v2.resource   File "/usr/lib/python2.7/site-packages/oslo_utils/excutils.py", line 220, in __exit__
2016-08-11 18:48:34.300 2901 ERROR neutron.api.v2.resource     self.force_reraise()
2016-08-11 18:48:34.300 2901 ERROR neutron.api.v2.resource   File "/usr/lib/python2.7/site-packages/oslo_utils/excutils.py", line 196, in force_reraise
2016-08-11 18:48:34.300 2901 ERROR neutron.api.v2.resource     six.reraise(self.type_, self.value, self.tb)
2016-08-11 18:48:34.300 2901 ERROR neutron.api.v2.resource   File "/mnt/data3/opt/stack/neutron/neutron/db/api.py", line 69, in wrapped
2016-08-11 18:48:34.300 2901 ERROR neutron.api.v2.resource     return f(*args, **kwargs)
2016-08-11 18:48:34.300 2901 ERROR neutron.api.v2.resource
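The final error, "AttributeError: can't set attribute", is what CPython raises when code assigns through a property that defines no setter. A minimal illustration (not the actual neutron code; the '_flavor_plugin' property here is a stand-in):

```python
class DriverController:
    @property
    def _flavor_plugin(self):
        # Read-only: a getter is defined, but no setter.
        return 'flavors-service-plugin'

controller = DriverController()
try:
    # Any assignment through the property fails with AttributeError,
    # matching the traceback above (the exact message varies by version).
    controller._flavor_plugin = 'replacement'
    raised = False
except AttributeError:
    raised = True
print(raised)  # True
```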

[Yahoo-eng-team] [Bug 1609178] Re: [api] Document GET /auth/catalog, GET /auth/projects, GET /auth/domains

2016-08-11 Thread OpenStack Infra
Reviewed:  https://review.openstack.org/352689
Committed: 
https://git.openstack.org/cgit/openstack/keystone/commit/?id=b9c671718d515cd8147b994e01f194103f1f4f54
Submitter: Jenkins
Branch:master

commit b9c671718d515cd8147b994e01f194103f1f4f54
Author: liyingjun 
Date:   Tue Aug 9 11:28:38 2016 +0800

Document get auth/catalog,projects,domains

This patch adds GET /auth/catalog, GET /auth/projects and GET /auth/domains
to the API site.

Change-Id: Ifda4676680bb9759348bbf7f3353741c45308b8c
Closes-bug: #1609178


** Changed in: keystone
   Status: In Progress => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Identity (keystone).
https://bugs.launchpad.net/bugs/1609178

Title:
  [api] Document GET /auth/catalog, GET /auth/projects, GET
  /auth/domains

Status in OpenStack Identity (keystone):
  Fix Released

Bug description:
  The following routes are missing from the API site, but are available
  in the specs repo:

  /auth/projects ->
  
http://specs.openstack.org/openstack/keystone-specs/api/v3/identity-api-v3.html#get-available-project-scopes

  /auth/domains ->
  
http://specs.openstack.org/openstack/keystone-specs/api/v3/identity-api-v3.html#get-available-domain-scopes

  /auth/catalog ->
  
http://specs.openstack.org/openstack/keystone-specs/api/v3/identity-api-v3.html#get-service-catalog
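As a hedged sketch of calling one of the newly documented routes, GET /v3/auth/projects (which lists the projects the token's user can scope to), using only the standard library; the endpoint URL and token below are placeholder values, not real credentials:

```python
import urllib.request

def build_auth_projects_request(keystone_url, token):
    """Build a GET /v3/auth/projects request authenticated with a token."""
    return urllib.request.Request(
        keystone_url.rstrip('/') + '/v3/auth/projects',
        headers={'X-Auth-Token': token, 'Accept': 'application/json'})

# Placeholder endpoint and token; pass the Request to urllib.request.urlopen()
# against a real keystone to get back a JSON body with a "projects" list.
req = build_auth_projects_request('http://keystone.example.com:5000',
                                  'gAAAA-example-token')
print(req.full_url)
```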

To manage notifications about this bug go to:
https://bugs.launchpad.net/keystone/+bug/1609178/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1612183] [NEW] l3 router's driver controller is using wrong invalid exception

2016-08-11 Thread yong sheng gong
Public bug reported:

Many places in driver_controller are using the wrong Invalid exception, which
has been moved to neutron_lib. For example:
https://github.com/openstack/neutron/blob/master/neutron/services/l3_router/service_providers/driver_controller.py#L113

** Affects: neutron
 Importance: Undecided
 Assignee: yong sheng gong (gongysh)
 Status: In Progress

** Changed in: neutron
 Assignee: (unassigned) => yong sheng gong (gongysh)

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1612183

Title:
  l3 router's driver controller is using wrong invalid exception

Status in neutron:
  In Progress

Bug description:
  Many places in driver_controller are using the wrong Invalid exception,
  which has been moved to neutron_lib. For example:
  https://github.com/openstack/neutron/blob/master/neutron/services/l3_router/service_providers/driver_controller.py#L113

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1612183/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1612119] [NEW] Unrecognized attribute(s) 'qos_policy_id'

2016-08-11 Thread liuwei
Public bug reported:

centos7
openstack mitaka

[root@TEST ~(keystone_admin)]# neutron net-create --prefix 192.168.0.0/24 network
Created a new network:
+---+--+
| Field | Value|
+---+--+
| admin_state_up| True |
| availability_zone_hints   |  |
| availability_zones|  |
| created_at| 2016-08-11T08:13:22  |
| description   |  |
| id| d20f2f83-059e-426d-8a93-fbbe95a3d53a |
| ipv4_address_scope|  |
| ipv6_address_scope|  |
| mtu   | 1450 |
| name  | network  |
| provider:network_type | vxlan|
| provider:physical_network |  |
| provider:segmentation_id  | 13   |
| router:external   | False|
| shared| False|
| status| ACTIVE   |
| subnets   |  |
| tags  |  |
| tenant_id | 985f84783f7c401288e11e0eb84a520e |
| updated_at| 2016-08-11T08:13:22  |
+---+--+
[root@TEST ~(keystone_admin)]# 
[root@TEST ~(keystone_admin)]# 
[root@TEST ~(keystone_admin)]# 
[root@TECS-27 ~(keystone_admin)]# neutron net-update --no-qos-policy network
Unrecognized attribute(s) 'qos_policy_id'
Neutron server returns request_ids: ['req-fb5211d6-a459-48d7-af0c-48deb3f21487']
[root@TEST ~(keystone_admin)]# 


/var/log/neutron/server.log

2016-08-11 16:14:48.558 7238 INFO neutron.api.v2.resource [req-
fb5211d6-a459-48d7-af0c-48deb3f21487 17770d50d8e643c6a570e11383770d5a
985f84783f7c401288e11e0eb84a520e - - -] update failed (client error):
Unrecognized attribute(s) 'qos_policy_id'

** Affects: neutron
 Importance: Undecided
 Assignee: liuwei (liu-wei81)
 Status: New


** Tags: neutron

** Project changed: packstack => neutron

** Changed in: neutron
 Assignee: (unassigned) => liuwei (liu-wei81)

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1612119

Title:
  Unrecognized attribute(s) 'qos_policy_id'

Status in neutron:
  New

Bug description:
  centos7
  openstack mitaka

  [root@TEST ~(keystone_admin)]# neutron net-create --prefix 192.168.0.0/24 network
  Created a new network:
  +---+--+
  | Field | Value|
  +---+--+
  | admin_state_up| True |
  | availability_zone_hints   |  |
  | availability_zones|  |
  | created_at| 2016-08-11T08:13:22  |
  | description   |  |
  | id| d20f2f83-059e-426d-8a93-fbbe95a3d53a |
  | ipv4_address_scope|  |
  | ipv6_address_scope|  |
  | mtu   | 1450 |
  | name  | network  |
  | provider:network_type | vxlan|
  | provider:physical_network |  |
  | provider:segmentation_id  | 13   |
  | router:external   | False|
  | shared| False|
  | status| ACTIVE   |
  | subnets   |  |
  | tags  |  |
  | tenant_id | 985f84783f7c401288e11e0eb84a520e |
  | updated_at| 2016-08-11T08:13:22  |
  +---+--+
  [root@TEST ~(keystone_admin)]# 
  [root@TEST ~(keystone_admin)]# 
  [root@TEST ~(keystone_admin)]# 
  [root@TECS-27 ~(keystone_admin)]# neutron net-update --no-qos-policy network
  Unrecognized attribute(s) 'qos_policy_id'
  Neutron 

[Yahoo-eng-team] [Bug 1612119] [NEW] Unrecognized attribute(s) 'qos_policy_id'

2016-08-11 Thread Launchpad Bug Tracker
You have been subscribed to a public bug:

centos7
openstack mitaka

[root@TEST ~(keystone_admin)]# neutron net-create --prefix 192.168.0.0/24 network
Created a new network:
+---+--+
| Field | Value|
+---+--+
| admin_state_up| True |
| availability_zone_hints   |  |
| availability_zones|  |
| created_at| 2016-08-11T08:13:22  |
| description   |  |
| id| d20f2f83-059e-426d-8a93-fbbe95a3d53a |
| ipv4_address_scope|  |
| ipv6_address_scope|  |
| mtu   | 1450 |
| name  | network  |
| provider:network_type | vxlan|
| provider:physical_network |  |
| provider:segmentation_id  | 13   |
| router:external   | False|
| shared| False|
| status| ACTIVE   |
| subnets   |  |
| tags  |  |
| tenant_id | 985f84783f7c401288e11e0eb84a520e |
| updated_at| 2016-08-11T08:13:22  |
+---+--+
[root@TEST ~(keystone_admin)]# 
[root@TEST ~(keystone_admin)]# 
[root@TEST ~(keystone_admin)]# 
[root@TECS-27 ~(keystone_admin)]# neutron net-update --no-qos-policy network
Unrecognized attribute(s) 'qos_policy_id'
Neutron server returns request_ids: ['req-fb5211d6-a459-48d7-af0c-48deb3f21487']
[root@TEST ~(keystone_admin)]# 


/var/log/neutron/server.log

2016-08-11 16:14:48.558 7238 INFO neutron.api.v2.resource [req-
fb5211d6-a459-48d7-af0c-48deb3f21487 17770d50d8e643c6a570e11383770d5a
985f84783f7c401288e11e0eb84a520e - - -] update failed (client error):
Unrecognized attribute(s) 'qos_policy_id'

** Affects: neutron
 Importance: Undecided
 Status: New


** Tags: neutron
-- 
Unrecognized attribute(s) 'qos_policy_id'
https://bugs.launchpad.net/bugs/1612119
You received this bug notification because you are a member of Yahoo! 
Engineering Team, which is subscribed to neutron.

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1612094] [NEW] flavor metadata does not include extra specs

2016-08-11 Thread Mounika
Public bug reported:

I'm using stable Mitaka OpenStack.
Flavor metadata does not include the extra specs of the flavor.

Steps to reproduce
1. Upload a hot template with flavor which has extra specs
2. launch stack 
3. Click on resources tab 
4. click on flavor

The extra specs which were provided while launching the stack are not
visible in the resource (in this case, flavor) metadata.

Expected result:
Extra specs which were provided during stack launch should be visible in
the resource metadata.

Please find the screenshot attached below for reference.

** Affects: horizon
 Importance: Undecided
 Status: New

** Attachment added: "Flavor_Details.PNG"
   
https://bugs.launchpad.net/bugs/1612094/+attachment/4718883/+files/Flavor_Details.PNG

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1612094

Title:
  flavor metadata does not include extra specs

Status in OpenStack Dashboard (Horizon):
  New

Bug description:
  I'm using stable Mitaka OpenStack.
  Flavor metadata does not include the extra specs of the flavor.

  Steps to reproduce
  1. Upload a hot template with flavor which has extra specs
  2. launch stack 
  3. Click on resources tab 
  4. click on flavor

  The extra specs which were provided while launching the stack are not
  visible in the resource (in this case, flavor) metadata.

  Expected result:
  Extra specs which were provided during stack launch should be visible in
  the resource metadata.

  Please find the screenshot attached below for reference.

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1612094/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1611940] Re: vncserver_proxyclient_address changed from stropt to ipopt, breaking backwards compat without deprecation

2016-08-11 Thread OpenStack Infra
Reviewed:  https://review.openstack.org/353710
Committed: 
https://git.openstack.org/cgit/openstack/nova/commit/?id=9289e6212cf54a4ce74c7615cf74892c6a70c50d
Submitter: Jenkins
Branch:master

commit 9289e6212cf54a4ce74c7615cf74892c6a70c50d
Author: Sean Dague 
Date:   Wed Aug 10 16:00:53 2016 -0400

vnc host options need to support hostnames

When updating the config options the VNC options were switched from
StrOpt to IPOpt. However these are hostnames, they even say so in the
option name, so IPOpt is too restrictive, and could break folks in
upgrade if they set these to hostnames.

Change-Id: Ib2062407dcf9cba8676b0f38aa0c63df25cc7b38
Closes-Bug: #1611940


** Changed in: nova
   Status: New => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1611940

Title:
  vncserver_proxyclient_address changed from stropt to ipopt, breaking
  backwards compat without deprecation

Status in OpenStack Compute (nova):
  Fix Released

Bug description:
  The change https://review.openstack.org/#/c/348442/ introduced a
  backwards incompatible change, specifically at:
  https://review.openstack.org/#/c/348442/3/nova/conf/vnc.py@68 where
  vncserver_proxyclient_address was changed from a StrOpt to an IpOpt.

  This broke backwards compatibility without a proper deprecation notice
  being introduced and there are especially no release notes that
  mention this.

  When running with this new commit, users that configured that parameter as a 
hostname are now greeted with a stack trace from nova-compute:
  2016-08-10 19:26:35.458 10624 CRITICAL nova [req-c235cb33-49c4-4f97-a4a1-0523f134afdc - - - - -] ConfigFileValueError: Value for option vncserver_proxyclient_address is not valid: n59.ci.centos.org is not IPv4 or IPv6 address
  2016-08-10 19:26:35.458 10624 ERROR nova Traceback (most recent call last):
  2016-08-10 19:26:35.458 10624 ERROR nova   File "/usr/bin/nova-compute", line 10, in <module>
  2016-08-10 19:26:35.458 10624 ERROR nova     sys.exit(main())
  2016-08-10 19:26:35.458 10624 ERROR nova   File "/usr/lib/python2.7/site-packages/nova/cmd/compute.py", line 78, in main
  2016-08-10 19:26:35.458 10624 ERROR nova     service.wait()
  2016-08-10 19:26:35.458 10624 ERROR nova   File "/usr/lib/python2.7/site-packages/nova/service.py", line 415, in wait
  2016-08-10 19:26:35.458 10624 ERROR nova     _launcher.wait()
  2016-08-10 19:26:35.458 10624 ERROR nova   File "/usr/lib/python2.7/site-packages/oslo_service/service.py", line 328, in wait
  2016-08-10 19:26:35.458 10624 ERROR nova     status, signo = self._wait_for_exit_or_signal()
  2016-08-10 19:26:35.458 10624 ERROR nova   File "/usr/lib/python2.7/site-packages/oslo_service/service.py", line 303, in _wait_for_exit_or_signal
  2016-08-10 19:26:35.458 10624 ERROR nova     self.conf.log_opt_values(LOG, logging.DEBUG)
  2016-08-10 19:26:35.458 10624 ERROR nova   File "/usr/lib/python2.7/site-packages/oslo_config/cfg.py", line 2591, in log_opt_values
  2016-08-10 19:26:35.458 10624 ERROR nova     _sanitize(opt, getattr(group_attr, opt_name)))
  2016-08-10 19:26:35.458 10624 ERROR nova   File "/usr/lib/python2.7/site-packages/oslo_config/cfg.py", line 3022, in __getattr__
  2016-08-10 19:26:35.458 10624 ERROR nova     return self._conf._get(name, self._group)
  2016-08-10 19:26:35.458 10624 ERROR nova   File "/usr/lib/python2.7/site-packages/oslo_config/cfg.py", line 2633, in _get
  2016-08-10 19:26:35.458 10624 ERROR nova     value = self._do_get(name, group, namespace)
  2016-08-10 19:26:35.458 10624 ERROR nova   File "/usr/lib/python2.7/site-packages/oslo_config/cfg.py", line 2676, in _do_get
  2016-08-10 19:26:35.458 10624 ERROR nova     % (opt.name, str(ve)))
  2016-08-10 19:26:35.458 10624 ERROR nova ConfigFileValueError: Value for option vncserver_proxyclient_address is not valid: n59.ci.centos.org is not IPv4 or IPv6 address
  2016-08-10 19:26:35.458 10624 ERROR nova
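The failure is easy to reproduce outside of nova: the stricter option type accepts only literal IP addresses, so any hostname is rejected. A minimal sketch of that validation behaviour, using Python's stdlib ipaddress module rather than oslo.config itself:

```python
import ipaddress

def is_ip_literal(value):
    """Return True only for a literal IPv4/IPv6 address; hostnames fail.

    Loosely mirrors the stricter validation that now rejects a hostname
    in vncserver_proxyclient_address (a sketch, not oslo.config code).
    """
    try:
        ipaddress.ip_address(value)
        return True
    except ValueError:
        return False

print(is_ip_literal("192.0.2.10"))         # True
print(is_ip_literal("2001:db8::1"))        # True
print(is_ip_literal("n59.ci.centos.org"))  # False: hostname, not an IP
```

Any deployment that previously relied on DNS resolution of that option value hits this at startup.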

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1611940/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1501238] Re: Unable to create a project at first time

2016-08-11 Thread Andy Yan
As far as I know, project data is stored in Keystone; Horizon has no
database of its own. Perhaps this bug should be filed against Keystone?

** Changed in: horizon
   Status: New => Opinion

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1501238

Title:
  Unable to create a project at first time

Status in OpenStack Dashboard (Horizon):
  Opinion

Bug description:
  After installing all the OpenStack components, I logged into the portal
  with admin credentials. I first needed to create a project for a tenant,
  but when I tried to create one it threw the error "Danger: Unable to
  create a project".

  Then I created a new user, and after that, creating a project succeeded.
  It appears that at least one normal user must exist first. To experiment
  further, I deleted all the normal users and projects and then created a
  project again; this time it was created and works fine.

  Bug:

  Initially, project creation expects at least one user to exist before the
  project is created. But the workflow says that you should add at least
  one project before adding users.

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1501238/+subscriptions



[Yahoo-eng-team] [Bug 1612069] [NEW] HA router state change takes too much time to notify neutron server

2016-08-11 Thread LIU Yulong
Public bug reported:

The ha state change BatchNotifier uses the following calculated
interval.

def _calculate_batch_duration(self):
    # Slave becomes the master after not hearing from it 3 times
    detection_time = self.conf.ha_vrrp_advert_int * 3

    # Keepalived takes a couple of seconds to configure the VIPs
    configuration_time = 2

    # Give it enough slack to batch all events due to the same failure
    return (detection_time + configuration_time) * 2

It takes almost 16s (with the default ha_vrrp_advert_int of 2s) for a single HA 
router state change to be reported to the neutron server.
By the time this notification fires, the ip MonitorDaemon has already set the 
router to its new state, so there is no need to wait this long.

** Affects: neutron
 Importance: Undecided
 Status: New


** Tags: l3-ha

** Summary changed:

- HA router state change take too much time to notify neutron server
+ HA router state change takes too much time to notify neutron server

** Description changed:

  The ha state change BatchNotifier uses the following calculated
  interval.
  
  def _calculate_batch_duration(self):
  # Slave becomes the master after not hearing from it 3 times
  detection_time = self.conf.ha_vrrp_advert_int * 3
  
  # Keepalived takes a couple of seconds to configure the VIPs
  configuration_time = 2
  
  # Give it enough slack to batch all events due to the same failure
  return (detection_time + configuration_time) * 2
  
- It takes almost 16s for a single HA router state change to notify neutron 
server.
+ It takes almost 16s, by default ha_vrrp_advert_int is 2s, for a single HA 
router state change to notify neutron server.
  Actually before this notify, the ip MonitorDaemon has already set the router 
to its relevant state.
  So no need to wait this long time.

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1612069

Title:
  HA router state change takes too much time to notify neutron server

Status in neutron:
  New

Bug description:
  The ha state change BatchNotifier uses the following calculated
  interval.

  def _calculate_batch_duration(self):
      # Slave becomes the master after not hearing from it 3 times
      detection_time = self.conf.ha_vrrp_advert_int * 3

      # Keepalived takes a couple of seconds to configure the VIPs
      configuration_time = 2

      # Give it enough slack to batch all events due to the same failure
      return (detection_time + configuration_time) * 2

  It takes almost 16s (with the default ha_vrrp_advert_int of 2s) for a single
  HA router state change to be reported to the neutron server. By the time this
  notification fires, the ip MonitorDaemon has already set the router to its
  new state, so there is no need to wait this long.
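The 16s figure follows directly from the arithmetic in the method above. A standalone sketch (a reimplementation for illustration, not neutron's actual code):

```python
def calculate_batch_duration(ha_vrrp_advert_int):
    """Standalone rework of neutron's batch-interval arithmetic above."""
    # Slave becomes the master after not hearing from it 3 times
    detection_time = ha_vrrp_advert_int * 3
    # Keepalived takes a couple of seconds to configure the VIPs
    configuration_time = 2
    # Give it enough slack to batch all events due to the same failure
    return (detection_time + configuration_time) * 2

# With the default ha_vrrp_advert_int of 2s: (2 * 3 + 2) * 2
print(calculate_batch_duration(2))  # 16
```

Note that the interval scales linearly with ha_vrrp_advert_int, so tuning that option also changes the notification delay.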

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1612069/+subscriptions



[Yahoo-eng-team] [Bug 1609625] Re: The 'create:forced_host' policy is set to 'rule:admin_or_owner' by default

2016-08-11 Thread OpenStack Infra
Reviewed:  https://review.openstack.org/351077
Committed: https://git.openstack.org/cgit/openstack/nova/commit/?id=16a38564cb61031466bf60ac393363bfeaedbd93
Submitter: Jenkins
Branch: master

commit 16a38564cb61031466bf60ac393363bfeaedbd93
Author: Takashi NATSUME 
Date:   Thu Aug 4 17:56:58 2016 +0900

Fix server operations' policies to admin only

Before the following policies were set to admin only operations
by default.

* detail:get_all_tenants
* index:get_all_tenants
* create:forced_host

But currently they are not limited to admin users by default.
They were changed unintentionally in
I71b3d1233255125cb280a000b990329f5b03fdfd.
So set them admin only again.
And a unit test for policy is fixed.

Change-Id: I1c0a4f1ff19d68152953dd6b265a7fb2e0f6271a
Closes-Bug: #1609625
Closes-Bug: #1609691
Closes-Bug: #1611628


** Changed in: nova
   Status: In Progress => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1609625

Title:
  The 'create:forced_host' policy is set to 'rule:admin_or_owner' by
  default

Status in OpenStack Compute (nova):
  Fix Released

Bug description:
  The 'create:forced_host' policy is set to 'rule:admin_or_owner' by
  default currently (master: 5d040245e750aab06c620344828c2182703515b7).

  
https://github.com/openstack/nova/blob/5d040245e750aab06c620344828c2182703515b7/nova/policies/servers.py#L32

  But it was 'rule:admin_api' before.
  It was changed in the following patch.

  https://review.openstack.org/#/c/329122/

  It should be restored to its previous value.

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1609625/+subscriptions



[Yahoo-eng-team] [Bug 1609691] Re: Non-admin users can list VM instances of other projects (tenants) by default

2016-08-11 Thread OpenStack Infra
Reviewed:  https://review.openstack.org/351077
Committed: https://git.openstack.org/cgit/openstack/nova/commit/?id=16a38564cb61031466bf60ac393363bfeaedbd93
Submitter: Jenkins
Branch: master

commit 16a38564cb61031466bf60ac393363bfeaedbd93
Author: Takashi NATSUME 
Date:   Thu Aug 4 17:56:58 2016 +0900

Fix server operations' policies to admin only

Before the following policies were set to admin only operations
by default.

* detail:get_all_tenants
* index:get_all_tenants
* create:forced_host

But currently they are not limited to admin users by default.
They were changed unintentionally in
I71b3d1233255125cb280a000b990329f5b03fdfd.
So set them admin only again.
And a unit test for policy is fixed.

Change-Id: I1c0a4f1ff19d68152953dd6b265a7fb2e0f6271a
Closes-Bug: #1609625
Closes-Bug: #1609691
Closes-Bug: #1611628


** Changed in: nova
   Status: In Progress => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1609691

Title:
  Non-admin users can list VM instances of other projects (tenants) by
  default

Status in OpenStack Compute (nova):
  Fix Released
Status in OpenStack Security Advisory:
  Won't Fix

Bug description:
  Non-admin users can list VM instances of other projects (tenants) by default.
  They should not be able to see VM instances of other projects by default.

  
  stack@devstack-master:/opt/devstack$ openstack project list
  +----------------------------------+--------------------+
  | ID                               | Name               |
  +----------------------------------+--------------------+
  | 33621006e3744ecea0b7090658601929 | alt_demo           |
  | 6773c471c311455d862ed22f685574b0 | admin              |
  | 850f809b7ee5469f8aa639b4717f58a5 | demo               |
  | 95a64b7c097e4b69bd8af9224f332cd6 | invisible_to_admin |
  | c65ecc9a29e64e83bedf0609bb27266f | service            |
  +----------------------------------+--------------------+
  stack@devstack-master:/opt/devstack$ openstack user list
  +----------------------------------+----------+
  | ID                               | Name     |
  +----------------------------------+----------+
  | 60066d4ac41a44d1ab6abea61809e78a | admin    |
  | 896d17cb7d0f49f585ce460f61f35a5a | demo     |
  | 6fcc02a6cfa64de097d15d2535d0108e | alt_demo |
  | b703f8d08aae46e0bad0fe3022d13250 | nova     |
  | 205a38f88db84c13bb84274456da8b69 | glance   |
  | c2a64c7cffae430493dac9d8b4ef6470 | cinder   |
  | 5ad6f4ce7c64489e965d56eba081e2a9 | neutron  |
  | 2d16f7d5f324446dbfa30db2a04f9658 | heat     |
  +----------------------------------+----------+
  stack@devstack-master:/opt/devstack$ openstack user role list --project admin admin
  +----------------------------------+-------+---------+-------+
  | ID                               | Name  | Project | User  |
  +----------------------------------+-------+---------+-------+
  | 915b08cc7e6b40ceb55a803e8a843d0d | admin | admin   | admin |
  +----------------------------------+-------+---------+-------+
  stack@devstack-master:/opt/devstack$ openstack user role list --project demo demo
  +----------------------------------+-------------+---------+------+
  | ID                               | Name        | Project | User |
  +----------------------------------+-------------+---------+------+
  | cf49079e087a4c61935bac9a5c6c224d | Member      | demo    | demo |
  | 664e30492b954257ae579e8498c4fc78 | anotherrole | demo    | demo |
  +----------------------------------+-------------+---------+------+

  Operated by admin:
  stack@devstack-master:/opt/devstack$ nova show server1
  +--------------------------------------+--------------------------------------+
  | Property                             | Value                                |
  +--------------------------------------+--------------------------------------+
  (snipped...)
  | OS-EXT-STS:vm_state                  | active                               |
  (snipped...)
  | id                                   | 853d681b-de17-4fd3-bcd6-0f91d153ccd6 |
  (snipped...)
  | name                                 | server1                              |
  (snipped...)
  | tenant_id                            | 6773c471c311455d862ed22f685574b0     | * admin
  | updated                              | 2016-08-04T08:09:49Z                 |
  | user_id                              | 60066d4ac41a44d1ab6abea61809e78a     | * admin
  +--------------------------------------+--------------------------------------+

  Operated by demo:
 

[Yahoo-eng-team] [Bug 1611628] Re: test_admin_only_rules doesn't check an 'admin_or_owner' case correctly

2016-08-11 Thread OpenStack Infra
Reviewed:  https://review.openstack.org/351077
Committed: https://git.openstack.org/cgit/openstack/nova/commit/?id=16a38564cb61031466bf60ac393363bfeaedbd93
Submitter: Jenkins
Branch: master

commit 16a38564cb61031466bf60ac393363bfeaedbd93
Author: Takashi NATSUME 
Date:   Thu Aug 4 17:56:58 2016 +0900

Fix server operations' policies to admin only

Before the following policies were set to admin only operations
by default.

* detail:get_all_tenants
* index:get_all_tenants
* create:forced_host

But currently they are not limited to admin users by default.
They were changed unintentionally in
I71b3d1233255125cb280a000b990329f5b03fdfd.
So set them admin only again.
And a unit test for policy is fixed.

Change-Id: I1c0a4f1ff19d68152953dd6b265a7fb2e0f6271a
Closes-Bug: #1609625
Closes-Bug: #1609691
Closes-Bug: #1611628


** Changed in: nova
   Status: In Progress => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1611628

Title:
  test_admin_only_rules doesn't check an 'admin_or_owner' case correctly

Status in OpenStack Compute (nova):
  Fix Released

Bug description:
  The test_admin_only_rules method of RealRolePolicyTestCase class in
  nova/tests/unit/test_policy.py doesn't check an 'admin_or_owner' case
  correctly.

  
  def test_admin_only_rules(self):
  for rule in self.admin_only_rules:
  self.assertRaises(exception.PolicyNotAuthorized, policy.authorize,
self.non_admin_context, rule, self.target)
  policy.authorize(self.admin_context, rule, self.target)
  
  
https://github.com/openstack/nova/blob/3d6e72689ee18a779d70405d11e09a69183cc853/nova/tests/unit/test_policy.py#L495

  If an admin-only rule in the source code is mistakenly changed to an
  'admin_or_owner' rule, the assertRaises statement still sees a
  PolicyNotAuthorized exception, but only because the target's owner is
  different, not because the context is a non-admin user. So the target
  should be set to the same project as the non-admin context; only then
  does the test fail when a rule is wrongly relaxed.
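The flaw can be illustrated with a toy authorize function (a hypothetical stand-in for policy.authorize, not nova's actual implementation):

```python
class PolicyNotAuthorized(Exception):
    """Stand-in for nova.exception.PolicyNotAuthorized."""

def authorize(context, rule, target):
    """Toy policy check (hypothetical, for illustration only).

    'admin_only' passes only for an admin context; 'admin_or_owner' also
    passes for a non-admin context whose project matches the target's.
    """
    if context["is_admin"]:
        return True
    if rule == "admin_or_owner" and context["project_id"] == target["project_id"]:
        return True
    raise PolicyNotAuthorized(rule)

def raises_not_authorized(context, rule, target):
    """Helper: True if authorize() raises PolicyNotAuthorized."""
    try:
        authorize(context, rule, target)
        return False
    except PolicyNotAuthorized:
        return True

non_admin = {"is_admin": False, "project_id": "demo"}

# Target in a DIFFERENT project: even a wrongly relaxed 'admin_or_owner'
# rule still raises, so assertRaises passes and the regression is missed.
mismatched_target = {"project_id": "other-project"}

# Target in the SAME project as the context: the relaxed rule now passes,
# so assertRaises would fail and the regression is caught.
owned_target = {"project_id": "demo"}
```

With a mismatched target, assertRaises succeeds for both rule kinds, so the test cannot distinguish a genuinely admin-only rule from an accidentally relaxed one.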

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1611628/+subscriptions
