[Yahoo-eng-team] [Bug 1591048] [NEW] VM in self Service Networks aren't getting IP

2016-06-09 Thread Rajiv Sharma
Public bug reported:

Hello Team,

I have set up the OpenStack Mitaka distribution on a RHEL 7 box, with one
controller node and one compute node using networking option 2 (self-service
networks). I can spin up VMs on both subnets, but a VM on the private
self-service network is not getting an IP assigned, whereas VMs on the
provider network are getting IPs. Is this a bug in the Mitaka version?

I had also set up OpenStack Liberty, where VMs on self-service networks do
get IPs.

I found I am not the only one coming across this issue:
http://stackoverflow.com/questions/37426821/why-the-vm-in-selfservice-network-can-not-get-ip

Thanks,
Rajiv Sharma

** Affects: neutron
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1591048

Title:
  VM in self Service Networks aren't getting IP

Status in neutron:
  New

Bug description:
  Hello Team,

  I have set up the OpenStack Mitaka distribution on a RHEL 7 box, with one
  controller node and one compute node using networking option 2 (self-service
  networks). I can spin up VMs on both subnets, but a VM on the private
  self-service network is not getting an IP assigned, whereas VMs on the
  provider network are getting IPs. Is this a bug in the Mitaka version?

  I had also set up OpenStack Liberty, where VMs on self-service networks do
  get IPs.

  I found I am not the only one coming across this issue:
  http://stackoverflow.com/questions/37426821/why-the-vm-in-selfservice-network-can-not-get-ip

  Thanks,
  Rajiv Sharma

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1591048/+subscriptions



[Yahoo-eng-team] [Bug 1527671] Re: [RFE] traffic classification in Neutron QoS

2016-06-09 Thread Launchpad Bug Tracker
[Expired for neutron because there has been no activity for 60 days.]

** Changed in: neutron
   Status: Incomplete => Expired

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1527671

Title:
  [RFE] traffic classification in Neutron QoS

Status in neutron:
  Expired

Bug description:
  Introduce traffic classification into the Neutron QoS Rules.

  This will allow rules to target specific traffic and
  enhance the existing QoS API.

  Changes:
  * DB Model for extensions to existing rule types
  * API changes to allow for additional arguments to be specified to the 
extended QoS rules
  * Client changes to allow for additional values being set for extended 
qos_rules

  dependencies:
  * QoS API implementation #[qos_api_spec]_
  * QoS RPC and plugin integration
  * L2 agent extension for QoS (done in front of SRIOV / OvS / LB support)

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1527671/+subscriptions



[Yahoo-eng-team] [Bug 1475736] Re: [RFE] Add instrumentation to Neutron

2016-06-09 Thread Launchpad Bug Tracker
[Expired for neutron because there has been no activity for 60 days.]

** Changed in: neutron
   Status: Incomplete => Expired

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1475736

Title:
  [RFE] Add instrumentation to Neutron

Status in neutron:
  Expired

Bug description:
  In support of operators with more traditional network monitoring
  infrastructures, add the ability to collect statistics and status from
  Neutron.

  In the first phase, the goal is to be able to fill in the data
  structures specified in the following RFCs:

  2863 - Interfaces Group MIB [1]
  4293 - Management Information Base for the Internet Protocol [2]

  This stage focuses on collecting the information from components of the 
reference implementation, aggregating it so that it aligns with the neutron 
data model and presenting an interface that north bound systems can use to
  consume the aggregated data.
   
  Subsequent phases will be driven by operator feedback and requirements.

  See the etherpad [3] for more details.

  [1] https://tools.ietf.org/html/rfc2863
  [2] https://tools.ietf.org/html/rfc4293
  [3] https://etherpad.openstack.org/p/neutron-instrumentation

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1475736/+subscriptions



[Yahoo-eng-team] [Bug 1590117] Re: Service plugin class' get_plugin_type should be a classmethod

2016-06-09 Thread YAMAMOTO Takashi
https://review.openstack.org/#/c/328051/

** Also affects: tap-as-a-service
   Importance: Undecided
   Status: New

** Changed in: tap-as-a-service
   Status: New => In Progress

** Changed in: tap-as-a-service
 Assignee: (unassigned) => YAMAMOTO Takashi (yamamoto)

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1590117

Title:
  Service plugin class' get_plugin_type should be a classmethod

Status in networking-midonet:
  In Progress
Status in neutron:
  Fix Released
Status in tap-as-a-service:
  In Progress

Bug description:
  There isn't any reason to have it as an instance method, as it's only
  returning a constant (a minimal sketch of the change follows the listing
  below).

  $ git grep 'def get_plugin_type('
  neutron/extensions/metering.py:def get_plugin_type(self):
  neutron/extensions/qos.py:def get_plugin_type(self):
  neutron/extensions/segment.py:def get_plugin_type(self):
  neutron/extensions/tag.py:def get_plugin_type(self):
  neutron/services/auto_allocate/plugin.py:def get_plugin_type(self):
  neutron/services/flavors/flavors_plugin.py:def get_plugin_type(self):
  neutron/services/l3_router/l3_router_plugin.py:def get_plugin_type(self):
  neutron/services/network_ip_availability/plugin.py:def 
get_plugin_type(self):
  neutron/services/service_base.py:def get_plugin_type(self):
  neutron/services/timestamp/timestamp_plugin.py:def get_plugin_type(self):
  neutron/tests/functional/pecan_wsgi/utils.py:def get_plugin_type(self):
  neutron/tests/unit/api/test_extensions.py:def 
get_plugin_type(self):
  neutron/tests/unit/api/test_extensions.py:def 
get_plugin_type(self):
  neutron/tests/unit/dummy_plugin.py:def get_plugin_type(self):
  neutron/tests/unit/extensions/test_flavors.py:def get_plugin_type(self):
  neutron/tests/unit/extensions/test_l3.py:def get_plugin_type(self):
  neutron/tests/unit/extensions/test_router_availability_zone.py:def 
get_plugin_type(self):
  neutron/tests/unit/extensions/test_segment.py:def get_plugin_type(self):
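
  A minimal sketch of the suggested change, using an illustrative plugin class
  rather than code copied from neutron:

    class MyServicePlugin(object):
        # before: an instance method that only returns a constant
        # def get_plugin_type(self):
        #     return "MY_SERVICE"

        # after: a classmethod, callable without instantiating the plugin
        @classmethod
        def get_plugin_type(cls):
            return "MY_SERVICE"

    # callers no longer need an instance:
    assert MyServicePlugin.get_plugin_type() == "MY_SERVICE"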

To manage notifications about this bug go to:
https://bugs.launchpad.net/networking-midonet/+bug/1590117/+subscriptions



[Yahoo-eng-team] [Bug 1591022] [NEW] Transient test failure in test_v3_auth.TestAuthTOTP

2016-06-09 Thread Dolph Mathews
Public bug reported:

In 0.06% of my test runs, test_v3_auth.TestAuthTOTP fails with either:

Traceback (most recent call last):
  File "/root/keystone/keystone/tests/unit/test_v3_auth.py", line 4904, in test_with_multiple_credentials
    self.v3_create_token(auth_data, expected_status=http_client.CREATED)
  File "/root/keystone/keystone/tests/unit/test_v3.py", line 504, in v3_create_token
    expected_status=expected_status)
  File "/root/keystone/keystone/tests/unit/rest.py", line 212, in admin_request
    return self._request(app=self.admin_app, **kwargs)
  File "/root/keystone/keystone/tests/unit/rest.py", line 201, in _request
    response = self.restful_request(**kwargs)
  File "/root/keystone/keystone/tests/unit/rest.py", line 186, in restful_request
    **kwargs)
  File "/root/keystone/keystone/tests/unit/rest.py", line 90, in request
    **kwargs)
  File "/root/keystone/.tox/py27/local/lib/python2.7/site-packages/webtest/app.py", line 571, in request
    expect_errors=expect_errors,
  File "/root/keystone/.tox/py27/local/lib/python2.7/site-packages/webtest/app.py", line 636, in do_request
    self._check_status(status, res)
  File "/root/keystone/.tox/py27/local/lib/python2.7/site-packages/webtest/app.py", line 671, in _check_status
    "Bad response: %s (not %s)", res_status, status)
webtest.app.AppError: Bad response: 401 Unauthorized (not 201)

OR

Traceback (most recent call last):
  File "/root/keystone/keystone/tests/unit/test_v3_auth.py", line 4961, in test_with_username_and_domain_id
    self.v3_create_token(auth_data, expected_status=http_client.CREATED)
  File "/root/keystone/keystone/tests/unit/test_v3.py", line 504, in v3_create_token
    expected_status=expected_status)
  File "/root/keystone/keystone/tests/unit/rest.py", line 212, in admin_request
    return self._request(app=self.admin_app, **kwargs)
  File "/root/keystone/keystone/tests/unit/rest.py", line 201, in _request
    response = self.restful_request(**kwargs)
  File "/root/keystone/keystone/tests/unit/rest.py", line 186, in restful_request
    **kwargs)
  File "/root/keystone/keystone/tests/unit/rest.py", line 90, in request
    **kwargs)
  File "/root/keystone/.tox/py27/local/lib/python2.7/site-packages/webtest/app.py", line 571, in request
    expect_errors=expect_errors,
  File "/root/keystone/.tox/py27/local/lib/python2.7/site-packages/webtest/app.py", line 636, in do_request
    self._check_status(status, res)
  File "/root/keystone/.tox/py27/local/lib/python2.7/site-packages/webtest/app.py", line 671, in _check_status
    "Bad response: %s (not %s)", res_status, status)
webtest.app.AppError: Bad response: 401 Unauthorized (not 201)

** Affects: keystone
 Importance: Critical
 Assignee: werner mendizabal (nonameentername)
 Status: In Progress

** Changed in: keystone
 Assignee: (unassigned) => werner mendizabal (nonameentername)

** Changed in: keystone
   Importance: Undecided => Critical

** Changed in: keystone
   Status: New => In Progress

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Identity (keystone).
https://bugs.launchpad.net/bugs/1591022

Title:
  Transient test failure in test_v3_auth.TestAuthTOTP

Status in OpenStack Identity (keystone):
  In Progress

Bug description:
  In 0.06% of my test runs, test_v3_auth.TestAuthTOTP fails with either:

  Traceback (most recent call last):
    File "/root/keystone/keystone/tests/unit/test_v3_auth.py", line 4904, in test_with_multiple_credentials
      self.v3_create_token(auth_data, expected_status=http_client.CREATED)
    File "/root/keystone/keystone/tests/unit/test_v3.py", line 504, in v3_create_token
      expected_status=expected_status)
    File "/root/keystone/keystone/tests/unit/rest.py", line 212, in admin_request
      return self._request(app=self.admin_app, **kwargs)
    File "/root/keystone/keystone/tests/unit/rest.py", line 201, in _request
      response = self.restful_request(**kwargs)
    File "/root/keystone/keystone/tests/unit/rest.py", line 186, in restful_request
      **kwargs)
    File "/root/keystone/keystone/tests/unit/rest.py", line 90, in request
      **kwargs)
    File "/root/keystone/.tox/py27/local/lib/python2.7/site-packages/webtest/app.py", line 571, in request
      expect_errors=expect_errors,
    File "/root/keystone/.tox/py27/local/lib/python2.7/site-packages/webtest/app.py", line 636, in do_request
      self._check_status(status, res)
    File "/root/keystone/.tox/py27/local/lib/python2.7/site-packages/webtest/app.py", line 671, in _check_status
      "Bad response: %s (not %s)", res_status, status)
  webtest.app.AppError: Bad response: 401 Unauthorized (not 201)

  OR

  Traceback (most recent call last):
    File "/root/keystone/keystone/tests/unit/test_v3_auth.py", line 4961, in test_with_username_and_domain_id
      self.v3_create_token(auth_data, 

[Yahoo-eng-team] [Bug 1591004] [NEW] Unable to download image with no checksum when cache is enabled

2016-06-09 Thread Sabari Murugesan
Public bug reported:

When cache middleware is enabled in the pipeline (default with devstack)
and you create an image using HTTP URL, the image cannot be downloaded
the second time.

Steps
1. glance image-create --name cirros_test --disk-format raw --container-format 
bare
2. glance location-add --url "http://download.cirros-cloud.net/0.3.4/cirros-0.3.4-i386-disk.img"
3. glance image-download f6c43e4f-0927-4484-baf4-681c96b950fc > /tmp/1
  [Success]
4. glance image-download f6c43e4f-0927-4484-baf4-681c96b950fc > /tmp/2
   Unable to download image 'f6c43e4f-0927-4484-baf4-681c96b950fc'. 
(HTTPInternalServerError (HTTP 500))

** Affects: glance
 Importance: Undecided
 Assignee: Sabari Murugesan (smurugesan)
 Status: New

** Changed in: glance
 Assignee: (unassigned) => Sabari Murugesan (smurugesan)

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to Glance.
https://bugs.launchpad.net/bugs/1591004

Title:
  Unable to download image with no checksum when cache is enabled

Status in Glance:
  New

Bug description:
  When cache middleware is enabled in the pipeline (default with
  devstack) and you create an image using HTTP URL, the image cannot be
  downloaded the second time.

  Steps
  1. glance image-create --name cirros_test --disk-format raw 
--container-format bare
  2. glance location-add --url "http://download.cirros-cloud.net/0.3.4/cirros-0.3.4-i386-disk.img"
  3. glance image-download f6c43e4f-0927-4484-baf4-681c96b950fc > /tmp/1
[Success]
  4. glance image-download f6c43e4f-0927-4484-baf4-681c96b950fc > /tmp/2
 Unable to download image 'f6c43e4f-0927-4484-baf4-681c96b950fc'. 
(HTTPInternalServerError (HTTP 500))

To manage notifications about this bug go to:
https://bugs.launchpad.net/glance/+bug/1591004/+subscriptions



[Yahoo-eng-team] [Bug 1591001] [NEW] gate-tempest-dsvm-multinode-live-migration fails setting up ceph as ephemeral storage backend on ubuntu 16.04

2016-06-09 Thread Matt Riedemann
Public bug reported:

Seen here:

http://logs.openstack.org/86/327886/2/experimental/gate-tempest-dsvm-multinode-live-migration/a7c62d7/console.html#_2016-06-09_23_03_52_376

This is the failure:

2016-06-09 23:03:52.376 | 2016-06-09 23:03:52.369 | + sudo initctl emit 
ceph-mon id=ubuntu-xenial-2-node-rax-ord-1559232
2016-06-09 23:03:52.377 | 2016-06-09 23:03:52.370 | sudo: initctl: command not 
found
2016-06-09 23:03:52.378 | + return 1

** Affects: nova
 Importance: Medium
 Status: Confirmed


** Tags: ceph live-migration testing

** Tags removed: livve
** Tags added: ceph live-migration testing

** Changed in: nova
   Status: New => Confirmed

** Changed in: nova
   Importance: Undecided => Medium

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1591001

Title:
  gate-tempest-dsvm-multinode-live-migration fails setting up ceph as
  ephemeral storage backend on ubuntu 16.04

Status in OpenStack Compute (nova):
  Confirmed

Bug description:
  Seen here:

  http://logs.openstack.org/86/327886/2/experimental/gate-tempest-dsvm-multinode-live-migration/a7c62d7/console.html#_2016-06-09_23_03_52_376

  This is the failure:

  2016-06-09 23:03:52.376 | 2016-06-09 23:03:52.369 | + sudo initctl emit 
ceph-mon id=ubuntu-xenial-2-node-rax-ord-1559232
  2016-06-09 23:03:52.377 | 2016-06-09 23:03:52.370 | sudo: initctl: command 
not found
  2016-06-09 23:03:52.378 | + return 1

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1591001/+subscriptions



[Yahoo-eng-team] [Bug 1590588] Re: pecan: list or get resource with single fields fails

2016-06-09 Thread OpenStack Infra
Reviewed:  https://review.openstack.org/327394
Committed: 
https://git.openstack.org/cgit/openstack/neutron/commit/?id=8a6d22ccb53f14a26876a06152b7a3b47ae1a8e1
Submitter: Jenkins
Branch:master

commit 8a6d22ccb53f14a26876a06152b7a3b47ae1a8e1
Author: Brandon Logan 
Date:   Wed Jun 8 17:08:40 2016 -0500

Pecan: handle single fields query parameter

This also correctly handles the case where no fields are requested which 
seems
to have started to break.

Closes-Bug: #1590588
Change-Id: Ida1e3ff575c7fe6c3199c5f4393679bbf89c0fe1
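
A hedged sketch of the behaviour described in the commit message (the helper
name is hypothetical, not the merged neutron code): pecan hands the hook a
plain string when exactly one 'fields' query parameter is supplied and a list
when several are, so both cases are normalised into a list.

    def normalize_fields(fields):
        # fields: None, a single string, or a list/tuple of strings
        if not fields:
            return []
        if isinstance(fields, (list, tuple)):
            return list(fields)
        return [fields]

    assert normalize_fields('id') == ['id']
    assert normalize_fields(['id', 'name']) == ['id', 'name']
    assert normalize_fields(None) == []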


** Changed in: neutron
   Status: In Progress => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1590588

Title:
  pecan: list or get resource with single fields fails

Status in neutron:
  Fix Released

Bug description:
  stacking fails with:

  $ neutron --os-cloud devstack-admin --os-region RegionOne subnet-create --tenant-id a213c00559414379b3f2848b01bc6544 --ip_version 4 --gateway 10.1.0.1 --name private-subnet 2293ccce-9150-49f0-83b8-f85d2cccdf7c 10.1.0.0/20
  'id'

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1590588/+subscriptions



[Yahoo-eng-team] [Bug 1543094] Re: [Pluggable IPAM] DB exceeded retry limit (RetryRequest) on create_router call

2016-06-09 Thread OpenStack Infra
Reviewed:  https://review.openstack.org/292207
Committed: 
https://git.openstack.org/cgit/openstack/neutron/commit/?id=dcb2a931b5b84fb7aa41f08b37a5148bf6e987bc
Submitter: Jenkins
Branch:master

commit dcb2a931b5b84fb7aa41f08b37a5148bf6e987bc
Author: Ryan Tidwell 
Date:   Fri Apr 8 14:21:03 2016 -0700

Compute IPAvailabilityRanges in memory during IP allocation

This patch computes IP availability in memory without locking on
IPAvailabilityRanges. IP availability is generated in memory, and
to avoid contention an IP address is selected by randomly
selecting from within the first 10 available IP addresses on a
subnet. Raises IPAddressGenerationFailure if unable to allocate an
IP address from within the window.

Change-Id: I52e4485e832cbe6798de6b4afb6a7cfd88db11e2
Depends-On: I84195b0eb63b7ca6a4e00becbe09e579ff8b718e
Closes-Bug: #1543094
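
A rough sketch of the allocation strategy described above (names are
illustrative, not neutron's internals): availability is computed in memory and
an address is picked at random from the first ten free ones, so concurrent
requests rarely pick the same address.

    import itertools
    import random

    WINDOW_SIZE = 10  # "first 10 available IP addresses" per the commit message

    class IPAddressGenerationFailure(Exception):
        """Stand-in for neutron's exception of the same name."""

    def select_ip(available_ips):
        # available_ips: any iterable of free addresses for the subnet
        window = list(itertools.islice(iter(available_ips), WINDOW_SIZE))
        if not window:
            raise IPAddressGenerationFailure("no free addresses left in subnet")
        return random.choice(window)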


** Changed in: neutron
   Status: In Progress => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1543094

Title:
  [Pluggable IPAM] DB exceeded retry limit (RetryRequest) on
  create_router call

Status in neutron:
  Fix Released

Bug description:
  Observed errors "DB exceeded retry limit" [1] in cases where pluggable IPAM
  is enabled, observed on the master branch.
  Each time the retest is done, different tests fail, so it looks like a
  concurrency issue.
  4 errors 'DB exceeded retry limit' are observed in [1].

  2016-02-04 11:55:59.944 15476 ERROR oslo_db.api 
[req-7ad8b69e-a851-4b6c-8c2c-33258c53bb54 admin -] DB exceeded retry limit.
  2016-02-04 11:55:59.944 15476 ERROR oslo_db.api Traceback (most recent call 
last):
  2016-02-04 11:55:59.944 15476 ERROR oslo_db.api   File 
"/usr/local/lib/python2.7/dist-packages/oslo_db/api.py", line 137, in wrapper
  2016-02-04 11:55:59.944 15476 ERROR oslo_db.api return f(*args, **kwargs)
  2016-02-04 11:55:59.944 15476 ERROR oslo_db.api   File 
"/opt/stack/new/neutron/neutron/api/v2/base.py", line 519, in _create
  2016-02-04 11:55:59.944 15476 ERROR oslo_db.api obj = do_create(body)
  2016-02-04 11:55:59.944 15476 ERROR oslo_db.api   File 
"/opt/stack/new/neutron/neutron/api/v2/base.py", line 501, in do_create
  2016-02-04 11:55:59.944 15476 ERROR oslo_db.api request.context, 
reservation.reservation_id)
  2016-02-04 11:55:59.944 15476 ERROR oslo_db.api   File 
"/usr/local/lib/python2.7/dist-packages/oslo_utils/excutils.py", line 204, in 
__exit__
  2016-02-04 11:55:59.944 15476 ERROR oslo_db.api six.reraise(self.type_, 
self.value, self.tb)
  2016-02-04 11:55:59.944 15476 ERROR oslo_db.api   File 
"/opt/stack/new/neutron/neutron/api/v2/base.py", line 494, in do_create
  2016-02-04 11:55:59.944 15476 ERROR oslo_db.api return 
obj_creator(request.context, **kwargs)
  2016-02-04 11:55:59.944 15476 ERROR oslo_db.api   File 
"/opt/stack/new/neutron/neutron/db/l3_hamode_db.py", line 411, in create_router
  2016-02-04 11:55:59.944 15476 ERROR oslo_db.api 
self).create_router(context, router)
  2016-02-04 11:55:59.944 15476 ERROR oslo_db.api   File 
"/opt/stack/new/neutron/neutron/db/l3_db.py", line 200, in create_router
  2016-02-04 11:55:59.944 15476 ERROR oslo_db.api 
self.delete_router(context, router_db.id)
  2016-02-04 11:55:59.944 15476 ERROR oslo_db.api   File 
"/usr/local/lib/python2.7/dist-packages/oslo_utils/excutils.py", line 204, in 
__exit__
  2016-02-04 11:55:59.944 15476 ERROR oslo_db.api six.reraise(self.type_, 
self.value, self.tb)
  2016-02-04 11:55:59.944 15476 ERROR oslo_db.api   File 
"/opt/stack/new/neutron/neutron/db/l3_db.py", line 196, in create_router
  2016-02-04 11:55:59.944 15476 ERROR oslo_db.api gw_info, router=router_db)
  2016-02-04 11:55:59.944 15476 ERROR oslo_db.api   File 
"/opt/stack/new/neutron/neutron/db/l3_gwmode_db.py", line 69, in 
_update_router_gw_info
  2016-02-04 11:55:59.944 15476 ERROR oslo_db.api context, router_id, info, 
router=router)
  2016-02-04 11:55:59.944 15476 ERROR oslo_db.api   File 
"/opt/stack/new/neutron/neutron/db/l3_db.py", line 429, in 
_update_router_gw_info
  2016-02-04 11:55:59.944 15476 ERROR oslo_db.api ext_ips)
  2016-02-04 11:55:59.944 15476 ERROR oslo_db.api   File 
"/opt/stack/new/neutron/neutron/db/l3_dvr_db.py", line 185, in _create_gw_port
  2016-02-04 11:55:59.944 15476 ERROR oslo_db.api ext_ips)
  2016-02-04 11:55:59.944 15476 ERROR oslo_db.api   File 
"/opt/stack/new/neutron/neutron/db/l3_db.py", line 399, in _create_gw_port
  2016-02-04 11:55:59.944 15476 ERROR oslo_db.api new_network_id, ext_ips)
  2016-02-04 11:55:59.944 15476 ERROR oslo_db.api   File 
"/opt/stack/new/neutron/neutron/db/l3_db.py", line 310, in 
_create_router_gw_port
  2016-02-04 11:55:59.944 15476 ERROR oslo_db.api context.elevated(), 
{'port': port_data})
  2016-02-04 11:55:59.944 15476 ERROR oslo_db.api   File 

[Yahoo-eng-team] [Bug 1586584] Re: Get the virtual network topology

2016-06-09 Thread Assaf Muller
This can be done entirely in the client side. It would essentially be a
greppable, text based result similar to the current Horizon network
diagram. New features belong to the openstack client, not the neutron
client, so I fixed up the bug's target component.

** Project changed: neutron => python-openstackclient

** Changed in: python-openstackclient
   Status: Triaged => New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1586584

Title:
  Get the virtual network topology

Status in python-openstackclient:
  New

Bug description:
  When we create a virtual network and use it in OpenStack, we can only get
  some simple information about it with the neutron command "neutron net-show".
  However, getting more details is useful and necessary for us: for example,
  how many virtual devices such as VMs, routers and DHCP agents are connected
  to the network. Further, we also want to know the tenants those devices
  belong to and the nodes on which those devices are located. With all this
  information we can generate a network topology for the network.
  Using the network topology, understanding, planning and managing the network
  become much easier, and fault diagnosis is more efficient: for example, we
  can easily find the compute node on which a problematic port is located
  without doing a lot of queries in Nova.
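
  A rough client-side sketch of the idea (assumes python-neutronclient and an
  already-authenticated client; binding:host_id, which gives the hosting node,
  is typically admin-only):

    from neutronclient.v2_0 import client

    def print_topology(neutron):
        # neutron: an authenticated client.Client instance, e.g.
        # client.Client(session=keystone_session)
        for net in neutron.list_networks()['networks']:
            print('%s (%s)' % (net['name'], net['id']))
            for port in neutron.list_ports(network_id=net['id'])['ports']:
                print('  device=%s owner=%s host=%s tenant=%s' % (
                    port['device_id'], port['device_owner'],
                    port.get('binding:host_id'), port['tenant_id']))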

To manage notifications about this bug go to:
https://bugs.launchpad.net/python-openstackclient/+bug/1586584/+subscriptions



[Yahoo-eng-team] [Bug 1566191] Re: [RFE] Allow multiple networks with FIP range to be associated with Tenant router

2016-06-09 Thread Carl Baldwin
** Changed in: neutron
   Status: Triaged => Won't Fix

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1566191

Title:
  [RFE] Allow multiple networks with FIP range to be associated with
  Tenant router

Status in neutron:
  Won't Fix

Bug description:
  This requirement came out during the Manila-Neutron integration discussion, to provide a solution for multi-tenant environments to work with a file share store.
  The way to solve it is as follows:
  A dedicated NAT based network connection should be established between a 
tenant's private network (where his VMs reside) and a data center local storage 
network. Sticking to IP based authorization, as used by Manila, the NAT 
assigned floating IPs in the storage network are used to check authorization in 
the storage backend, as well as to deal with possible overlapping IP ranges in 
the private networks of different tenants. A dedicated NAT and not the public 
FIP is suggested since public FIPs are usually limited resources.
  In order to be able to orchestrate the above use case, it should be possible 
to associate more than one subnet with 'FIP' range with the router (via router 
interface)  and enable NAT based on the destination subnet. 
  This behaviour was possible in Mitaka and worked for the MidoNet plugin, but due to https://bugs.launchpad.net/neutron/+bug/1556884 it won't be possible any more.

  Related bug for security use case that can benefit from the proposed
  behavior is described here
  https://bugs.launchpad.net/neutron/+bug/1250105

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1566191/+subscriptions



[Yahoo-eng-team] [Bug 1526974] Re: KeyError prevents openvswitch agent from starting

2016-06-09 Thread OpenStack Infra
Reviewed:  https://review.openstack.org/325370
Committed: 
https://git.openstack.org/cgit/openstack/neutron/commit/?id=d2508163cfcede20df641239be081fe63c79150b
Submitter: Jenkins
Branch:master

commit d2508163cfcede20df641239be081fe63c79150b
Author: Brian Haley 
Date:   Fri Jun 3 11:34:35 2016 -0400

OVS: don't throw KeyError when duplicate VLAN tags exist

In _restore_local_vlan_map() we can have two ports with the
same VLAN tag, but trying to remove the second will throw
a KeyError, causing the agent to not start.  Use discard()
instead so we only remove an entry if it's there.

Closes-bug: #1526974
Change-Id: I479c693f490c704c5b6c1462e9ab236684e9c259
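
The fix swaps set.remove() for set.discard(); a two-line illustration of the
difference:

    available_local_vlans = {7}
    available_local_vlans.discard(8)   # no-op when the tag is already gone
    # available_local_vlans.remove(8)  # would raise KeyError: 8, as in the trace below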


** Changed in: neutron
   Status: In Progress => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1526974

Title:
  KeyError prevents openvswitch agent from starting

Status in neutron:
  Fix Released

Bug description:
  On Liberty I ran into a situation where the openvswitch agent won't
  start and fails with the following stack trace:

  
  2015-12-16 16:01:42.852 10772 CRITICAL neutron 
[req-afb4e123-1940-48df-befc-9319516152b5 - - - - -] KeyError: 8
  2015-12-16 16:01:42.852 10772 ERROR neutron Traceback (most recent call last):
  2015-12-16 16:01:42.852 10772 ERROR neutron   File 
"/opt/neutron/bin/neutron-openvswitch-agent", line 11, in 
  2015-12-16 16:01:42.852 10772 ERROR neutron sys.exit(main())
  2015-12-16 16:01:42.852 10772 ERROR neutron   File 
"/opt/neutron/lib/python2.7/site-packages/neutron/cmd/eventlet/plugins/ovs_neutron_agent.py",
 line 20, in main
  2015-12-16 16:01:42.852 10772 ERROR neutron agent_main.main()
  2015-12-16 16:01:42.852 10772 ERROR neutron   File 
"/opt/neutron/lib/python2.7/site-packages/neutron/plugins/ml2/drivers/openvswitch/agent/main.py",
 line 49, in main
  2015-12-16 16:01:42.852 10772 ERROR neutron mod.main()
  2015-12-16 16:01:42.852 10772 ERROR neutron   File 
"/opt/neutron/lib/python2.7/site-packages/neutron/plugins/ml2/drivers/openvswitch/agent/openflow/ovs_ofctl/main.py",
 line 36, in main
  2015-12-16 16:01:42.852 10772 ERROR neutron 
ovs_neutron_agent.main(bridge_classes)
  2015-12-16 16:01:42.852 10772 ERROR neutron   File 
"/opt/neutron/lib/python2.7/site-packages/neutron/plugins/ml2/drivers/openvswitch/agent/ovs_neutron_agent.py",
 line 1913, in main
  2015-12-16 16:01:42.852 10772 ERROR neutron agent = 
OVSNeutronAgent(bridge_classes, **agent_config)
  2015-12-16 16:01:42.852 10772 ERROR neutron   File 
"/opt/neutron/lib/python2.7/site-packages/neutron/plugins/ml2/drivers/openvswitch/agent/ovs_neutron_agent.py",
 line 302, in __init__
  2015-12-16 16:01:42.852 10772 ERROR neutron self._restore_local_vlan_map()
  2015-12-16 16:01:42.852 10772 ERROR neutron   File 
"/opt/neutron/lib/python2.7/site-packages/neutron/plugins/ml2/drivers/openvswitch/agent/ovs_neutron_agent.py",
 line 358, in _restore_local_vlan_map
  2015-12-16 16:01:42.852 10772 ERROR neutron 
self.available_local_vlans.remove(local_vlan)
  2015-12-16 16:01:42.852 10772 ERROR neutron KeyError: 8
  2015-12-16 16:01:42.852 10772 ERROR neutron

  
  Somehow the ovs table ended up with 2 ports with the same local vlan tag.

  # ovs-vsctl -- --columns=name,tag,other_config list Port | grep -E
  'qvob7ba561c-e5|qvod3e1f984-0e' -A 2

  name: "qvob7ba561c-e5"
  tag : 8
  other_config: {net_uuid="fb33e234-714d-44f8-8728-1a466ef5aca0", 
network_type=vxlan, physical_network=None, segmentation_id="5969"}
  --
  name: "qvod3e1f984-0e"
  tag : 8
  other_config: {net_uuid="47e0f11c-7aa4-4eb4-97dc-0ef4e064680c", 
network_type=vxlan, physical_network=None, segmentation_id="5836"}

  
  Additionally, I noticed the ofport for one of them was -1.

  # ovs-vsctl -- --columns=name,ofport,external_ids list Interface |
  grep -E 'qvob7ba561c-e5|qvod3e1f984-0e' -A 2

  name: "qvod3e1f984-0e"
  ofport  : 20
  external_ids: {attached-mac="fa:16:3e:d7:eb:05", 
iface-id="d3e1f984-0e4f-4d39-a074-1c0809ad864c", iface-status=active, 
vm-uuid="a00032c8-f516-42e3-865e-1988768bab84"}
  --
  name: "qvob7ba561c-e5"
  ofport  : -1
  external_ids: {attached-mac="fa:16:3e:a9:c3:69", 
iface-id="b7ba561c-e5a2-4128-b36c-9484a763f4de", iface-status=active, 
vm-uuid="71873533-a4ab-4af6-8ace-e75c60b828f9"}

  
  I'm not sure if this is relevant, but the VM that has -1 ofport is in the 
following state

  
+--+--+--+---++-+---+
  | ID   | Name 
| 

[Yahoo-eng-team] [Bug 1417027] Re: No disable reason defined for new services when "enable_new_services=False"

2016-06-09 Thread OpenStack Infra
Reviewed:  https://review.openstack.org/319461
Committed: 
https://git.openstack.org/cgit/openstack/nova/commit/?id=4eb95a1b897505fb4e4ec17541eeebb79e7217a8
Submitter: Jenkins
Branch:master

commit 4eb95a1b897505fb4e4ec17541eeebb79e7217a8
Author: Belmiro Moreira 
Date:   Fri May 20 23:14:03 2016 +0200

No disable reason defined for new services

Services can be disabled by several reasons and admins can use the API to
specify a reason. However, currently if new services are added and
"enable_new_services=False" there is no disable reason specified.
Having services disabled with no reason specified creates additional checks 
on
the operators side that increases with the deployment size.

This patch specifies the disable reason:
"New service disabled due to config option"
when a new service is added and "enable_new_services=False".

Closes-Bug: #1417027

Change-Id: I52dd763cf1b58ba3ff56fe97f37eca18c915681d
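
A hedged sketch of the behaviour the patch adds (simplified; field and option
names follow the commit message, not nova's actual code):

    DISABLED_REASON = "New service disabled due to config option"

    def register_service(service, enable_new_services):
        # mark newly created services disabled, and now also say why
        if not enable_new_services:
            service.disabled = True
            service.disabled_reason = DISABLED_REASON
        return service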


** Changed in: nova
   Status: In Progress => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1417027

Title:
  No disable reason defined for new services when
  "enable_new_services=False"

Status in OpenStack Compute (nova):
  Fix Released

Bug description:
  When a service is added and "enable_new_services=False", there is no disable
  reason specified.
  Services can be disabled for several reasons, and admins can use the API to
  specify a reason. However, having services disabled with no reason specified
  creates additional checks on the operators' side that increase with the
  deployment size.

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1417027/+subscriptions



[Yahoo-eng-team] [Bug 1523646] Re: Nova/Cinder Key Manager for Barbican Uses Stale Cache

2016-06-09 Thread Nathan Kinder
This issue has been published as OSSN-0063 on the mailing lists and
wiki:

  https://wiki.openstack.org/wiki/OSSN/OSSN-0063

** Changed in: ossn
   Status: Confirmed => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1523646

Title:
  Nova/Cinder Key Manager for Barbican Uses Stale Cache

Status in castellan:
  Fix Released
Status in Cinder:
  Fix Released
Status in Cinder liberty series:
  Fix Released
Status in OpenStack Compute (nova):
  Fix Released
Status in OpenStack Compute (nova) liberty series:
  Fix Committed
Status in OpenStack Security Notes:
  Fix Released

Bug description:
  The Key Manager for Barbican, implemented in Nova and Cinder, caches a value
  of barbican_client to save extra calls to Keystone for authentication.
  However, the cached value of barbican_client is only valid for the current
  context. A check needs to be made to ensure the context has not changed
  before using the saved value.
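
  A hedged sketch of the kind of check described above (illustrative only, not
  the castellan/nova/cinder code): cache the client together with the token it
  was built for, and rebuild it whenever the caller's context differs.

    class BarbicanClientCache(object):
        def __init__(self, client_factory):
            # client_factory: hypothetical callable(context) -> barbican client
            self._factory = client_factory
            self._client = None
            self._token = None

        def get_client(self, context):
            if self._client is None or context.auth_token != self._token:
                self._client = self._factory(context)
                self._token = context.auth_token
            return self._client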

  The symptoms for using a stale cache value include getting the following 
error message when creating
  an encrypted volume.

  From CLI:
  ---
  openstack volume create --size 1 --type LUKS encrypted_volume
  The server has either erred or is incapable of performing the requested 
operation. (HTTP 500) (Request-ID: req-aea6be92-020e-41ed-ba88-44a1f5235ab0)

  
  In cinder.log
  ---
  2015-12-03 09:09:03.648 TRACE cinder.volume.api Traceback (most recent call 
last):
  2015-12-03 09:09:03.648 TRACE cinder.volume.api   File 
"/usr/lib/python2.7/site-packages/taskflow/engines/action_engine/executor.py", 
line 82, in _exe
  cute_task
  2015-12-03 09:09:03.648 TRACE cinder.volume.api result = 
task.execute(**arguments)
  2015-12-03 09:09:03.648 TRACE cinder.volume.api   File 
"/opt/stack/cinder/cinder/volume/flows/api/create_volume.py", line 409, in 
execute
  2015-12-03 09:09:03.648 TRACE cinder.volume.api source_volume)
  2015-12-03 09:09:03.648 TRACE cinder.volume.api   File 
"/opt/stack/cinder/cinder/volume/flows/api/create_volume.py", line 338, in 
_get_encryption_key_
  id
  2015-12-03 09:09:03.648 TRACE cinder.volume.api encryption_key_id = 
key_manager.create_key(context)
  2015-12-03 09:09:03.648 TRACE cinder.volume.api   File 
"/opt/stack/cinder/cinder/keymgr/barbican.py", line 147, in create_key
  2015-12-03 09:09:03.648 TRACE cinder.volume.api LOG.exception(_LE("Error 
creating key."))
  ….
  2015-12-03 09:09:03.648 TRACE cinder.volume.api   File 
"/usr/lib/python2.7/site-packages/keystoneclient/session.py", line 502, in post
  2015-12-03 09:09:03.648 TRACE cinder.volume.api return self.request(url, 
'POST', **kwargs)
  2015-12-03 09:09:03.648 TRACE cinder.volume.api   File 
"/usr/lib/python2.7/site-packages/keystoneclient/utils.py", line 337, in inner
  2015-12-03 09:09:03.648 TRACE cinder.volume.api return func(*args, 
**kwargs)
  2015-12-03 09:09:03.648 TRACE cinder.volume.api   File 
"/usr/lib/python2.7/site-packages/keystoneclient/session.py", line 402, in 
request
  2015-12-03 09:09:03.648 TRACE cinder.volume.api raise 
exceptions.from_response(resp, method, url)
  2015-12-03 09:09:03.648 TRACE cinder.volume.api Unauthorized: The request you 
have made requires authentication. (Disable debug mode to suppress these 
details.) (HTTP 401) (Request-ID: req-d2c52e0b-c16d-43ec-a7a0-763f1270)

To manage notifications about this bug go to:
https://bugs.launchpad.net/castellan/+bug/1523646/+subscriptions



[Yahoo-eng-team] [Bug 1590939] [NEW] Job dsvm-integration fails entirely due to the Firefox 47 had been released

2016-06-09 Thread Timur Sufiev
Public bug reported:

Obviously, the existing WebDriver doesn't work with it; see for example
http://logs.openstack.org/25/322325/6/gate/gate-horizon-dsvm-integration/d3d4dbc/console.html

Possible countermeasures are either using an older FF (quick hotfix) or
switching to the new Marionette WebDriver (the preferable solution, see
https://developer.mozilla.org/en-US/docs/Mozilla/QA/Marionette/WebDriver).
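
A hedged sketch of the Marionette option (assumes geckodriver is installed and
on PATH; uses the Selenium 2.x-era capabilities API):

    from selenium import webdriver
    from selenium.webdriver.common.desired_capabilities import DesiredCapabilities

    caps = DesiredCapabilities.FIREFOX.copy()
    caps['marionette'] = True            # route commands through geckodriver
    driver = webdriver.Firefox(capabilities=caps)
    driver.get('http://localhost/dashboard/')  # example URL only
    driver.quit()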

** Affects: horizon
 Importance: Critical
 Assignee: Timur Sufiev (tsufiev-x)
 Status: New

** Changed in: horizon
 Assignee: (unassigned) => Timur Sufiev (tsufiev-x)

** Changed in: horizon
Milestone: None => newton-2

** Changed in: horizon
   Importance: Undecided => Critical

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1590939

Title:
  Job dsvm-integration fails entirely due to the Firefox 47 had been
  released

Status in OpenStack Dashboard (Horizon):
  New

Bug description:
  Obviously, the existing WebDriver doesn't work with it; see for example
  http://logs.openstack.org/25/322325/6/gate/gate-horizon-dsvm-integration/d3d4dbc/console.html

  Possible countermeasures are either using an older FF (quick hotfix) or
  switching to the new Marionette WebDriver (the preferable solution, see
  https://developer.mozilla.org/en-US/docs/Mozilla/QA/Marionette/WebDriver).

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1590939/+subscriptions



[Yahoo-eng-team] [Bug 1590929] [NEW] gate-tempest-dsvm-multinode-live-migration fails NFS setup on ubuntu 16.04 nodes

2016-06-09 Thread Matt Riedemann
Public bug reported:

As seen here:

http://logs.openstack.org/07/310707/12/experimental/gate-tempest-dsvm-multinode-live-migration/3c6252e/console.html

Fails with this:

2016-06-09 18:44:10.845 | 2016-06-09 18:44:10.823 | localhost | FAILED! => {
2016-06-09 18:44:10.847 | 2016-06-09 18:44:10.825 | "changed": false, 
2016-06-09 18:44:10.848 | 2016-06-09 18:44:10.826 | "failed": true, 
2016-06-09 18:44:10.850 | 2016-06-09 18:44:10.828 | "msg": "Failed to start 
idmapd.service: Unit idmapd.service is masked.\n"
2016-06-09 18:44:10.852 | 2016-06-09 18:44:10.830 | }

** Affects: nova
 Importance: Medium
 Status: Confirmed


** Tags: live-migration nfs testing

** Changed in: nova
   Status: New => Confirmed

** Changed in: nova
   Importance: Undecided => Medium

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1590929

Title:
  gate-tempest-dsvm-multinode-live-migration fails NFS setup on ubuntu
  16.04 nodes

Status in OpenStack Compute (nova):
  Confirmed

Bug description:
  As seen here:

  http://logs.openstack.org/07/310707/12/experimental/gate-tempest-dsvm-multinode-live-migration/3c6252e/console.html

  Fails with this:

  2016-06-09 18:44:10.845 | 2016-06-09 18:44:10.823 | localhost | FAILED! => {
  2016-06-09 18:44:10.847 | 2016-06-09 18:44:10.825 | "changed": false, 
  2016-06-09 18:44:10.848 | 2016-06-09 18:44:10.826 | "failed": true, 
  2016-06-09 18:44:10.850 | 2016-06-09 18:44:10.828 | "msg": "Failed to 
start idmapd.service: Unit idmapd.service is masked.\n"
  2016-06-09 18:44:10.852 | 2016-06-09 18:44:10.830 | }

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1590929/+subscriptions



[Yahoo-eng-team] [Bug 1570748] Re: Bug: resize instance after edit flavor with horizon

2016-06-09 Thread Corey Bryant
** Changed in: nova (Ubuntu Wily)
   Importance: Undecided => Medium

** Changed in: nova (Ubuntu Wily)
   Status: New => Triaged

** Changed in: nova (Ubuntu Xenial)
   Importance: Undecided => Medium

** Changed in: nova (Ubuntu Xenial)
   Status: New => Triaged

** Changed in: cloud-archive/liberty
   Importance: Undecided => Medium

** Changed in: cloud-archive/liberty
   Status: New => Triaged

** Changed in: cloud-archive/mitaka
   Importance: Undecided => Medium

** Changed in: cloud-archive/mitaka
   Status: New => Triaged

** Changed in: cloud-archive
   Importance: Undecided => Medium

** Changed in: cloud-archive
   Status: New => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1570748

Title:
  Bug: resize instance after edit flavor with horizon

Status in Ubuntu Cloud Archive:
  Fix Released
Status in Ubuntu Cloud Archive liberty series:
  Triaged
Status in Ubuntu Cloud Archive mitaka series:
  Triaged
Status in OpenStack Compute (nova):
  Fix Released
Status in OpenStack Compute (nova) kilo series:
  Fix Released
Status in OpenStack Compute (nova) liberty series:
  Fix Committed
Status in OpenStack Compute (nova) mitaka series:
  Fix Committed
Status in nova-powervm:
  Fix Released
Status in tempest:
  Fix Released
Status in nova package in Ubuntu:
  Fix Released
Status in nova source package in Wily:
  Triaged
Status in nova source package in Xenial:
  Triaged
Status in nova source package in Yakkety:
  Fix Released

Bug description:
  An error occurs when resizing an instance after editing its flavor with
  Horizon (and also after deleting the flavor used by the instance).

  Steps to reproduce:

  1. create flavor A
  2. boot an instance using flavor A
  3. edit the flavor with Horizon (or delete flavor A)
     -> editing and deleting give the same result, because editing a flavor
     means deleting and recreating it
  4. resize or migrate the instance
  5. an error occurs

  Log:
  nova-compute.log
 File "/opt/openstack/src/nova/nova/conductor/manager.py", line 422, in 
_object_dispatch
   return getattr(target, method)(*args, **kwargs)

 File "/opt/openstack/src/nova/nova/objects/base.py", line 163, in wrapper
   result = fn(cls, context, *args, **kwargs)

 File "/opt/openstack/src/nova/nova/objects/flavor.py", line 132, in 
get_by_id
   db_flavor = db.flavor_get(context, id)

 File "/opt/openstack/src/nova/nova/db/api.py", line 1479, in flavor_get
   return IMPL.flavor_get(context, id)

 File "/opt/openstack/src/nova/nova/db/sqlalchemy/api.py", line 233, in 
wrapper
   return f(*args, **kwargs)

 File "/opt/openstack/src/nova/nova/db/sqlalchemy/api.py", line 4732, in 
flavor_get
   raise exception.FlavorNotFound(flavor_id=id)

   FlavorNotFound: Flavor 7 could not be found.

  
  This error occurs because of the code below:
  /opt/openstack/src/nova/nova/compute/manager.py

  def resize_instance(self, context, instance, image,
  reservations, migration, instance_type,
  clean_shutdown=True):
  
  if (not instance_type or
  not isinstance(instance_type, objects.Flavor)):
  instance_type = objects.Flavor.get_by_id(
  context, migration['new_instance_type_id'])
  

  I think the deleted flavor should still be used when resizing the instance
  (a hedged sketch of one possible fallback follows below).
  I tested this on stable/kilo, but I think stable/liberty and stable/mitaka
  have the same bug because the source code has not changed.
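
  A hedged sketch of one possible fallback (illustrative only, not necessarily
  the merged fix): during a resize the instance also carries its old/new flavor
  information, so that could be used when the flavor id has been deleted.

    class FlavorNotFound(Exception):
        """Stand-in for nova.exception.FlavorNotFound."""

    def get_resize_flavor(instance, migration, flavor_api):
        try:
            return flavor_api.get_by_id(migration['new_instance_type_id'])
        except FlavorNotFound:
            # hypothetical accessor: read the flavor stored with the instance
            return instance.get_flavor('new')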

  thanks.

To manage notifications about this bug go to:
https://bugs.launchpad.net/cloud-archive/+bug/1570748/+subscriptions



[Yahoo-eng-team] [Bug 1590908] Re: Enhancement: Add 'VNI' column in bgpvpns table in Neutron database.

2016-06-09 Thread Siddanagouda Khot
** Project changed: neutron => bgpvpn

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1590908

Title:
  Enhancement: Add 'VNI' column in bgpvpns table in Neutron database.

Status in bgpvpn:
  New

Bug description:
  Current bgpvpns table in Neutron maintains VPN information. Adding a
  new column 'VNI' to cater to EVPN/VxLAN requirements.

To manage notifications about this bug go to:
https://bugs.launchpad.net/bgpvpn/+bug/1590908/+subscriptions



[Yahoo-eng-team] [Bug 1590779] Re: Cache region invalidation works for local CacheRegion object only

2016-06-09 Thread Boris Bobrov
I am adding keystone because it has some logic for cache invalidation
across projects. Also, we ran into this issue originally on keystone.

The code on
https://github.com/openstack/keystone/blob/stable/mitaka/keystone/common/cache/core.py#L71
is supposed to proxy calls to cache invalidation. Unfortunately, lines 123
and 124 do not define setters and deleters for _hard_invalidated and
_soft_invalidated. This leads to dogpile.cache working as if it was not
patched.
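
A simplified, hypothetical illustration of getter/setter/deleter proxying for
the invalidation markers (not the keystone code): keeping all three on a
class-level property lets every CacheRegion that shares the same backing
state observe an invalidation.

    class RegionProxy(object):
        def __init__(self, shared_state):
            self._shared = shared_state  # e.g. a dict backed by a shared store

        @property
        def _hard_invalidated(self):
            return self._shared.get('hard')

        @_hard_invalidated.setter
        def _hard_invalidated(self, value):
            self._shared['hard'] = value

        @_hard_invalidated.deleter
        def _hard_invalidated(self):
            self._shared.pop('hard', None)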

** Also affects: keystone
   Importance: Undecided
   Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Identity (keystone).
https://bugs.launchpad.net/bugs/1590779

Title:
  Cache region invalidation works for local CacheRegion object only

Status in OpenStack Identity (keystone):
  New
Status in oslo.cache:
  In Progress

Bug description:
  oslo_cache uses dogpile.cache's CacheRegion
  which invalidates by setting region object attributes:
  - self._hard_invalidated
  - self._soft_invalidated
  Then it checks these attributes when a value is fetched,
  so this invalidation works only for that particular region object.

  If there is a need to invalidate a region so that values in it are no
  more valid for other instances of CacheRegion (either in the same
  process or in another one) - it's simply impossible.

To manage notifications about this bug go to:
https://bugs.launchpad.net/keystone/+bug/1590779/+subscriptions



[Yahoo-eng-team] [Bug 1590908] [NEW] Enhancement: Add 'VNI' column in bgpvpns table in Neutron database.

2016-06-09 Thread Siddanagouda Khot
Public bug reported:

Current bgpvpns table in Neutron maintains VPN information. Adding a new
column 'VNI' to cater to EVPN/VxLAN requirements.
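
A hedged sketch of what such a schema migration could look like (revision
identifiers, column name and type are assumptions, not the actual
networking-bgpvpn migration):

    from alembic import op
    import sqlalchemy as sa

    # revision identifiers, used by Alembic (placeholders)
    revision = 'add_vni_to_bgpvpns'
    down_revision = None

    def upgrade():
        op.add_column('bgpvpns', sa.Column('vni', sa.Integer(), nullable=True))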

** Affects: neutron
 Importance: Undecided
 Assignee: Siddanagouda Khot (siddanmk)
 Status: New


** Tags: bgpvpn vni vxlan

** Changed in: neutron
 Assignee: (unassigned) => Siddanagouda Khot (siddanmk)

** Tags added: bgpvpn vni vxlan

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1590908

Title:
  Enhancement: Add 'VNI' column in bgpvpns table in Neutron database.

Status in neutron:
  New

Bug description:
  Current bgpvpns table in Neutron maintains VPN information. Adding a
  new column 'VNI' to cater to EVPN/VxLAN requirements.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1590908/+subscriptions



[Yahoo-eng-team] [Bug 1590896] [NEW] py34 intermittent xenapi test failures

2016-06-09 Thread Andrew Laski
Public bug reported:

From http://logs.openstack.org/24/322324/5/check/gate-nova-python34-db/f5fadd6/console.html

2016-06-08 22:33:44.530 | 
nova.tests.unit.virt.xenapi.test_vmops.VMOpsTestCase.test_finish_revert_migration_after_crash_before_new
2016-06-08 22:33:44.530 | 

2016-06-08 22:33:44.530 | 
2016-06-08 22:33:44.531 | Captured traceback:
2016-06-08 22:33:44.531 | ~~~
2016-06-08 22:33:44.531 | b'Traceback (most recent call last):'
2016-06-08 22:33:44.531 | b'  File 
"/home/jenkins/workspace/gate-nova-python34-db/nova/tests/unit/virt/xenapi/test_vmops.py",
 line 126, in test_finish_revert_migration_after_crash_before_new'
2016-06-08 22:33:44.531 | b'
self._test_finish_revert_migration_after_crash(True, False)'
2016-06-08 22:33:44.531 | b'  File 
"/home/jenkins/workspace/gate-nova-python34-db/nova/tests/unit/virt/xenapi/test_vmops.py",
 line 97, in _test_finish_revert_migration_after_crash'
2016-06-08 22:33:44.531 | b"self.mox.StubOutWithMock(vm_utils, 
'lookup')"
2016-06-08 22:33:44.531 | b'  File 
"/home/jenkins/workspace/gate-nova-python34-db/.tox/py34/lib/python3.4/site-packages/mox3/mox.py",
 line 321, in StubOutWithMock'
2016-06-08 22:33:44.531 | b"raise TypeError('Cannot mock a 
MockAnything! Did you remember to '"
2016-06-08 22:33:44.531 | b'TypeError: Cannot mock a MockAnything! Did you 
remember to call UnsetStubs in your previous test?'
2016-06-08 22:33:44.531 | b''


This failure has happened to me a few times now on Jenkins jobs but does not 
reproduce locally. I suspect an ordering issue in the tests, especially since 
it's not always the same method that fails.
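
A hedged sketch of the usual guard against stubs leaking between tests (mox3
API; the test class shown is illustrative, not nova's actual setUp):

    from mox3 import mox
    import testtools

    class SomeXenapiTestCase(testtools.TestCase):
        def setUp(self):
            super(SomeXenapiTestCase, self).setUp()
            self.mox = mox.Mox()
            # undo stubs even when a test fails part-way through,
            # so later tests don't see MockAnything objects
            self.addCleanup(self.mox.UnsetStubs)
            self.addCleanup(self.mox.VerifyAll)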

** Affects: nova
 Importance: Low
 Assignee: Andrew Laski (alaski)
 Status: In Progress

** Changed in: nova
   Importance: Undecided => Low

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1590896

Title:
  py34 intermittent xenapi test failures

Status in OpenStack Compute (nova):
  In Progress

Bug description:
  From http://logs.openstack.org/24/322324/5/check/gate-nova-python34-db/f5fadd6/console.html

  2016-06-08 22:33:44.530 | 
nova.tests.unit.virt.xenapi.test_vmops.VMOpsTestCase.test_finish_revert_migration_after_crash_before_new
  2016-06-08 22:33:44.530 | 

  2016-06-08 22:33:44.530 | 
  2016-06-08 22:33:44.531 | Captured traceback:
  2016-06-08 22:33:44.531 | ~~~
  2016-06-08 22:33:44.531 | b'Traceback (most recent call last):'
  2016-06-08 22:33:44.531 | b'  File 
"/home/jenkins/workspace/gate-nova-python34-db/nova/tests/unit/virt/xenapi/test_vmops.py",
 line 126, in test_finish_revert_migration_after_crash_before_new'
  2016-06-08 22:33:44.531 | b'
self._test_finish_revert_migration_after_crash(True, False)'
  2016-06-08 22:33:44.531 | b'  File 
"/home/jenkins/workspace/gate-nova-python34-db/nova/tests/unit/virt/xenapi/test_vmops.py",
 line 97, in _test_finish_revert_migration_after_crash'
  2016-06-08 22:33:44.531 | b"self.mox.StubOutWithMock(vm_utils, 
'lookup')"
  2016-06-08 22:33:44.531 | b'  File 
"/home/jenkins/workspace/gate-nova-python34-db/.tox/py34/lib/python3.4/site-packages/mox3/mox.py",
 line 321, in StubOutWithMock'
  2016-06-08 22:33:44.531 | b"raise TypeError('Cannot mock a 
MockAnything! Did you remember to '"
  2016-06-08 22:33:44.531 | b'TypeError: Cannot mock a MockAnything! Did 
you remember to call UnsetStubs in your previous test?'
  2016-06-08 22:33:44.531 | b''

  
  This failure has happened to me a few times now on Jenkins jobs but does not 
reproduce locally. I suspect an ordering issue in the tests, especially since 
it's not always the same method that fails.

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1590896/+subscriptions



[Yahoo-eng-team] [Bug 1590868] [NEW] valid sec group protocols contains bad text

2016-06-09 Thread Eric Peterson
Public bug reported:

When creating a security group rule, we have help text that says -1 is a
valid option.

We have a validator that does not allow -1.

The neutron client also does not support -1.

I believe we just need the help text corrected.

** Affects: horizon
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1590868

Title:
  valid sec group protocols contains bad text

Status in OpenStack Dashboard (Horizon):
  New

Bug description:
  When creating a security group rule, we have help text that says -1 is
  a valid option.

  We have a validator that does not allow -1.

  The neutron client also does not support -1.

  I believe we just need the help text corrected.

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1590868/+subscriptions



[Yahoo-eng-team] [Bug 1566455] Re: Using V3 Auth throws error TypeError at /auth/login/ __init__() got an unexpected keyword argument 'unscoped'

2016-06-09 Thread Brad Pokorny
The DOA patch has been abandoned, since stable/kilo will soon be EOL.

** Also affects: django-openstack-auth
   Importance: Undecided
   Status: New

** Changed in: django-openstack-auth
   Status: New => In Progress

** Changed in: django-openstack-auth
 Assignee: (unassigned) => Brad Pokorny (bpokorny)

** Changed in: horizon
   Status: In Progress => Invalid

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1566455

Title:
  Using V3 Auth throws error TypeError at /auth/login/ __init__() got an
  unexpected keyword argument 'unscoped'

Status in django-openstack-auth:
  In Progress
Status in OpenStack Dashboard (Horizon):
  Invalid

Bug description:
  The PasswordPlugin class creates a plugin with an 'unscoped' parameter. This
  throws the following error:

  TypeError at /auth/login/
  __init__() got an unexpected keyword argument 'unscoped'

  LOG.debug('Attempting to authenticate for %s', username)
  if utils.get_keystone_version() >= 3:
  return v3_auth.Password(auth_url=auth_url,
  username=username,
  password=password,
  user_domain_name=user_domain_name,
  unscoped=True) ---> 
Deleting this line removes the error and authenticates successfully.
  else:
  return v2_auth.Password(auth_url=auth_url,
  username=username,
  password=password) 

  I have V3 API and URL configured in Horizon settings. Using Horizon
  Kilo version.

  Is there some other setting that is needed?

To manage notifications about this bug go to:
https://bugs.launchpad.net/django-openstack-auth/+bug/1566455/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1590845] [NEW] Router interfaces report being in BUILD state - l3ha vrrp+L2pop+LinuxBridge

2016-06-09 Thread Miguel Alejandro Cantu
Public bug reported:

I'm running a Liberty environment with two network hosts using the L3HA VRRP 
driver.
I also have L2pop on and am using the ML2 LinuxBridge driver.

When we programmatically attach subnets and/or ports to routers (we attach one
interface every 60 seconds), some report back as stuck in the BUILD state. Take
this interface, for example:
neutron port-show 98b55b89-a002-496f-a5d4-8de598613da8
+-----------------------+----------------------------------------------------------+
| Field                 | Value                                                    |
+-----------------------+----------------------------------------------------------+
| admin_state_up        | True                                                     |
| allowed_address_pairs |                                                          |
| binding:host_id       | dn3usoskctl03_neutron_agents_container-e64e37d6          |
| binding:profile       | {}                                                       |
| binding:vif_details   | {"port_filter": true}                                    |
| binding:vif_type      | bridge                                                   |
| binding:vnic_type     | normal                                                   |
| device_id             | 5838c5de-e87a-4e5e-b61f-a3f068fa7726                     |
| device_owner          | network:router_interface                                 |
| dns_assignment        | {"hostname": "host-10-169-160-1", "ip_address":          |
|                       | "10.169.160.1", "fqdn":                                  |
|                       | "host-10-169-160-1.openstacklocal."}                     |
| dns_name              |                                                          |
| extra_dhcp_opts       |                                                          |
| fixed_ips             | {"subnet_id": "bc3a8d37-6cd7-4d57-b0c9-2b35743b0a0b",    |
|                       | "ip_address": "10.169.160.1"}                            |
| id                    | 98b55b89-a002-496f-a5d4-8de598613da8                     |
| mac_address           | fa:16:3e:b9:7a:1d                                        |
| name                  |                                                          |
| network_id            | 535c3336-202c-4dab-b517-2232c4ce1481                     |
| security_groups       |                                                          |
| status                | BUILD                                                    |
| tenant_id             | 3ccf712795c44edcbc8ffcc331a59853                         |
+-----------------------+----------------------------------------------------------+

It's reporting itself in the BUILD state, but when I check the router
namespace, its Linux networking counterpart seems to be functioning just
fine:

8: qr-98b55b89-a0:  mtu 1500 qdisc pfifo_fast 
state UP group default qlen 1000
link/ether fa:16:3e:b9:7a:1d brd ff:ff:ff:ff:ff:ff
inet 10.169.160.1/23 scope global qr-98b55b89-a0
   valid_lft forever preferred_lft forever
inet6 fe80::f816:3eff:feb9:7a1d/64 scope link 
   valid_lft forever preferred_lft forever

I can even ping the address with no problem once I open up the security
group rules.

Note: The problem doesn't appear when L3HA is turned off. Only when L3HA
with VRRP keepalived driver is being used.

Where would be a good place to start debugging this?

Thanks!
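
So far we have been starting from these generic checks (a sketch only; the
router and port IDs are the ones from this report):

# confirm which agents host the router and that they are alive
neutron l3-agent-list-hosting-router 5838c5de-e87a-4e5e-b61f-a3f068fa7726
neutron agent-list

# with L3 HA, check which node keepalived considers master
cat /var/lib/neutron/ha_confs/5838c5de-e87a-4e5e-b61f-a3f068fa7726/state

# then grep the neutron-server and linuxbridge-agent logs for the port UUID
# to see whether a port status update to ACTIVE was ever sent or processed
grep 98b55b89-a002-496f-a5d4-8de598613da8 /var/log/neutron/*.log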

** Affects: neutron
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1590845

Title:
  Router interfaces report being in BUILD state - l3ha
  vrrp+L2pop+LinuxBridge

Status in neutron:
  New

Bug description:
  I'm running a Liberty environment with two network hosts using the L3HA VRRP 
driver.
  I also have L2pop on and am 

[Yahoo-eng-team] [Bug 1590816] [NEW] metadata agent make invalid token requests

2016-06-09 Thread Bjoern Teipel
Public bug reported:

Sporadically the neutron metadata agent seems to return a 401 wrapped up in a 404.
For reasons still unknown, the metadata agent sporadically creates invalid v3
token requests:

2016-02-26 13:34:46.486 33371 TRACE neutron.agent.metadata.agent
Unauthorized: {"error": {"message": "The resource could not be found.",
"code": 404, "title": "Not Found"}}

POST /tokens HTTP/1.1
Host: 1.2.3.4:35357
Content-Length: 91
Accept-Encoding: gzip, deflate
Accept: application/json
User-Agent: python-neutronclient

and the response is

HTTP/1.1 404 Not Found
Date: Tue, 01 Mar 2016 22:14:58 GMT
Server: Apache
Vary: X-Auth-Token
Content-Length: 93
Content-Type: application/json

and the agent will stop responding, with a full stack trace. At first we
thought this issue was related to an improper auth_url configuration (see
https://bugs.launchpad.net/openstack-ansible/liberty/+bug/1552394) but the
issue came back.
Interestingly, the agent starts working once we restart it, but the problem
slowly reappears once you start putting more workload on it (spinning up
instances).
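
For reference, the request in the capture above is hitting the unversioned
root of the identity endpoint. A correct v3 token request (a sketch, using the
same host) would look like:

POST /v3/auth/tokens HTTP/1.1
Host: 1.2.3.4:35357
Content-Type: application/json

while a v2 request would go to /v2.0/tokens. A bare POST /tokens matches
neither route, which is why keystone answers with 404 rather than 401.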


2016-02-26 13:34:46.478 33371 INFO eventlet.wsgi.server [-] (33371) accepted ''
2016-02-26 13:34:46.486 33371 ERROR neutron.agent.metadata.agent [-] Unexpected 
error.
2016-02-26 13:34:46.486 33371 TRACE neutron.agent.metadata.agent Traceback 
(most recent call last):
2016-02-26 13:34:46.486 33371 TRACE neutron.agent.metadata.agent File 
"/usr/local/lib/python2.7/dist-packages/neutron/agent/metadata/agent.py", line 
109, in __call__
2016-02-26 13:34:46.486 33371 TRACE neutron.agent.metadata.agent instance_id, 
tenant_id = self._get_instance_and_tenant_id(req)
2016-02-26 13:34:46.486 33371 TRACE neutron.agent.metadata.agent File 
"/usr/local/lib/python2.7/dist-packages/neutron/agent/metadata/agent.py", line 
204, in _get_instance_and_tenant_id
2016-02-26 13:34:46.486 33371 TRACE neutron.agent.metadata.agent ports = 
self._get_ports(remote_address, network_id, router_id)
2016-02-26 13:34:46.486 33371 TRACE neutron.agent.metadata.agent File 
"/usr/local/lib/python2.7/dist-packages/neutron/agent/metadata/agent.py", line 
197, in _get_ports
2016-02-26 13:34:46.486 33371 TRACE neutron.agent.metadata.agent return 
self._get_ports_for_remote_address(remote_address, networks)
2016-02-26 13:34:46.486 33371 TRACE neutron.agent.metadata.agent File 
"/usr/local/lib/python2.7/dist-packages/neutron/common/utils.py", line 101, in 
__call__
2016-02-26 13:34:46.486 33371 TRACE neutron.agent.metadata.agent return 
self._get_from_cache(target_self, *args, **kwargs)
2016-02-26 13:34:46.486 33371 TRACE neutron.agent.metadata.agent File 
"/usr/local/lib/python2.7/dist-packages/neutron/common/utils.py", line 79, in 
_get_from_cache
2016-02-26 13:34:46.486 33371 TRACE neutron.agent.metadata.agent item = 
self.func(target_self, *args, **kwargs)
2016-02-26 13:34:46.486 33371 TRACE neutron.agent.metadata.agent File 
"/usr/local/lib/python2.7/dist-packages/neutron/agent/metadata/agent.py", line 
166, in _get_ports_for_remote_address
2016-02-26 13:34:46.486 33371 TRACE neutron.agent.metadata.agent 
ip_address=remote_address)
2016-02-26 13:34:46.486 33371 TRACE neutron.agent.metadata.agent File 
"/usr/local/lib/python2.7/dist-packages/neutron/agent/metadata/agent.py", line 
135, in _get_ports_from_server
2016-02-26 13:34:46.486 33371 TRACE neutron.agent.metadata.agent return 
self._get_ports_using_client(filters)
2016-02-26 13:34:46.486 33371 TRACE neutron.agent.metadata.agent File 
"/usr/local/lib/python2.7/dist-packages/neutron/agent/metadata/agent.py", line 
177, in _get_ports_using_client
2016-02-26 13:34:46.486 33371 TRACE neutron.agent.metadata.agent ports = 
client.list_ports(**filters)
2016-02-26 13:34:46.486 33371 TRACE neutron.agent.metadata.agent File 
"/usr/local/lib/python2.7/dist-packages/neutronclient/v2_0/client.py", line 
102, in with_params
2016-02-26 13:34:46.486 33371 TRACE neutron.agent.metadata.agent ret = 
self.function(instance, *args, **kwargs)
2016-02-26 13:34:46.486 33371 TRACE neutron.agent.metadata.agent File 
"/usr/local/lib/python2.7/dist-packages/neutronclient/v2_0/client.py", line 
534, in list_ports
2016-02-26 13:34:46.486 33371 TRACE neutron.agent.metadata.agent **_params)
2016-02-26 13:34:46.486 33371 TRACE neutron.agent.metadata.agent File 
"/usr/local/lib/python2.7/dist-packages/neutronclient/v2_0/client.py", line 
307, in list
2016-02-26 13:34:46.486 33371 TRACE neutron.agent.metadata.agent for r in 
self._pagination(collection, path, **params):
2016-02-26 13:34:46.486 33371 TRACE neutron.agent.metadata.agent File 
"/usr/local/lib/python2.7/dist-packages/neutronclient/v2_0/client.py", line 
320, in _pagination
2016-02-26 13:34:46.486 33371 TRACE neutron.agent.metadata.agent res = 
self.get(path, params=params)
2016-02-26 13:34:46.486 33371 TRACE neutron.agent.metadata.agent File 
"/usr/local/lib/python2.7/dist-packages/neutronclient/v2_0/client.py", line 
293, in get
2016-02-26 13:34:46.486 33371 TRACE neutron.agent.metadata.agent 

[Yahoo-eng-team] [Bug 1585373] Re: qos-policy update without specify --shared causing it change to default False

2016-06-09 Thread OpenStack Infra
Reviewed:  https://review.openstack.org/325644
Committed: 
https://git.openstack.org/cgit/openstack/neutron/commit/?id=3c27beb8c0fe9b511bae78412b0623835a81f63f
Submitter: Jenkins
Branch:master

commit 3c27beb8c0fe9b511bae78412b0623835a81f63f
Author: Sławek Kapłoński 
Date:   Sun Jun 5 09:49:27 2016 +

Fix update of shared QoS policy

When user updates QoS policy which is globaly shared, it will be still
marked as globally shared even if this flag was not set explicitly
in update request.
For example, updating description of QoS policy will not change shared
flag to default value which is "False".

Co-Authored-By: Haim Daniel 

Change-Id: I2c59e71eae0bf2e73475bba321afc4aaa514b317
Closes-Bug: #1585373


** Changed in: neutron
   Status: In Progress => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1585373

Title:
  qos-policy update without specify --shared causing it change to
  default False

Status in neutron:
  Fix Released

Bug description:
  Update policy 3k-bm-limiter to be a shared policy.
  Then update policy 3k-bm-limiter with only the name field; this causes the
  default value shared=False to be applied.

  Here is the console log:

  nicira@newton-devstack:~$ neutron qos-policy-show 3k-bm-limiter
  +-+--+
  | Field   | Value|
  +-+--+
  | description | bw-limit 3k  |
  | id  | 163c5fc1-7bf2-455b-a92c-4118fc612822 |
  | name| 3k-bm-limiter|
  | rules   | 76344f3d-0933-4cd6-be97-918aebe4741c (type: bandwidth_limit) |
  | shared  | False|
  | tenant_id   | 1cf34eba3d3240a68966ef61567c5650 |
  +-+--+
  nicira@newton-devstack:~$ neutron qos-policy-update --shared 3k-bm-limiter
  Updated policy: 3k-bm-limiter
  nicira@newton-devstack:~$ neutron qos-policy-show 3k-bm-limiter
  +-+--+
  | Field   | Value|
  +-+--+
  | description | bw-limit 3k  |
  | id  | 163c5fc1-7bf2-455b-a92c-4118fc612822 |
  | name| 3k-bm-limiter|
  | rules   | 76344f3d-0933-4cd6-be97-918aebe4741c (type: bandwidth_limit) |
  | shared  | True |
  | tenant_id   | 1cf34eba3d3240a68966ef61567c5650 |
  +-+--+
  nicira@newton-devstack:~$ neutron qos-policy-update --name=bw-limiter 
3k-bm-limiter   
  Updated policy: 3k-bm-limiter
  nicira@newton-devstack:~$ neutron qos-policy-show bw-limiter
  +-+--+
  | Field   | Value|
  +-+--+
  | description | bw-limit 3k  |
  | id  | 163c5fc1-7bf2-455b-a92c-4118fc612822 |
  | name| bw-limiter   |
  | rules   | 76344f3d-0933-4cd6-be97-918aebe4741c (type: bandwidth_limit) |
  | shared  | False|
  | tenant_id   | 1cf34eba3d3240a68966ef61567c5650 |
  +-+--+
  nicira@newton-devstack:~$

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1585373/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1590805] [NEW] Revoking "admin" role from a group invalidates user token

2016-06-09 Thread Niranjana Adiga
Public bug reported:

Steps to reproduce

1. Login as domain admin
2. Create a new group and grant "admin" role to it.
3. Group will be empty with no users added to it.(Domain admin won't be part of 
this group)
4. Now revoke "admin" role from this group.
5. Token for domain admin will be invalidated and he/she has to login again.
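
A CLI sketch of the same steps (group and domain names are illustrative):

openstack group create demo-admins --domain default
openstack role add --group demo-admins --domain default admin
openstack role remove --group demo-admins --domain default admin
# at this point the domain admin's own token is rejected and a new login
# is required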

** Affects: keystone
 Importance: Undecided
 Status: New

** Description changed:

  Steps to reproduce
  
  1. Login as domain admin
  2. Create a new group and grant "admin" role to it.
- 3. Group will be empty with no users added to it.(Domain admin won't be part 
of this it)
+ 3. Group will be empty with no users added to it.(Domain admin won't be part 
of this group)
  4. Now revoke "admin" role from this group.
  5. Token for domain admin will be invalidated and he/she has to login again.

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Identity (keystone).
https://bugs.launchpad.net/bugs/1590805

Title:
  Revoking "admin" role from a group invalidates user token

Status in OpenStack Identity (keystone):
  New

Bug description:
  Steps to reproduce

  1. Login as domain admin
  2. Create a new group and grant "admin" role to it.
  3. Group will be empty with no users added to it.(Domain admin won't be part 
of this group)
  4. Now revoke "admin" role from this group.
  5. Token for domain admin will be invalidated and he/she has to login again.

To manage notifications about this bug go to:
https://bugs.launchpad.net/keystone/+bug/1590805/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1376211] Re: Retry mechanism does not work on startup when used with MySQL

2016-06-09 Thread Dmitry Mescheryakov
The fix was ported to Juno here -
https://review.openstack.org/#/c/126732/

** Changed in: oslo.db/juno
   Status: Confirmed => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1376211

Title:
  Retry mechanism does not work on startup when used with MySQL

Status in neutron:
  Invalid
Status in oslo.db:
  Fix Released
Status in oslo.db juno series:
  Fix Released

Bug description:
  This is initially revealed as Red Hat bug:
  https://bugzilla.redhat.com/show_bug.cgi?id=1144181

  The problem shows up when Neutron or any other oslo.db based projects
  start while MySQL server is not up yet. Instead of retrying connection
  as per max_retries and retry_interval, service just crashes with
  return code 1.

  This is because during engine initialization, "engine.execute("SHOW
  VARIABLES LIKE 'sql_mode'")" is called, which opens the connection,
  *before* _test_connection() succeeds. So the server just bail out to
  sys.exit() at the top of the stack.

  This behaviour was checked for both oslo.db 0.4.0 and 1.0.1.

  I suspect this is a regression from the original db code from oslo-
  incubator though I haven't checked it specifically.
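
  For reference, the retry settings mentioned above are [database] options of
  the service; a minimal sketch matching the reproduction below (the values
  shown are the defaults):

  [database]
  connection = mysql://neutron:123456@10.35.161.235/neutron
  # reconnection attempts at startup; -1 means retry forever
  max_retries = 10
  # seconds to wait between attempts
  retry_interval = 10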

  The easiest way to reproduce the traceback is:

  1. stop MariaDB.
  2. execute the following Python script:

  '''
  import oslo.db.sqlalchemy.session

  url = 'mysql://neutron:123456@10.35.161.235/neutron'
  engine = oslo.db.sqlalchemy.session.EngineFacade(url)
  '''

  The following traceback can be seen in service log:

  2014-10-01 13:46:10.588 5812 TRACE neutron Traceback (most recent call last):
  2014-10-01 13:46:10.588 5812 TRACE neutron   File "/usr/bin/neutron-server", 
line 10, in 
  2014-10-01 13:46:10.588 5812 TRACE neutron sys.exit(main())
  2014-10-01 13:46:10.588 5812 TRACE neutron   File 
"/usr/lib/python2.7/site-packages/neutron/server/__init__.py", line 47, in main
  2014-10-01 13:46:10.588 5812 TRACE neutron neutron_api = 
service.serve_wsgi(service.NeutronApiService)
  2014-10-01 13:46:10.588 5812 TRACE neutron   File 
"/usr/lib/python2.7/site-packages/neutron/service.py", line 105, in serve_wsgi
  2014-10-01 13:46:10.588 5812 TRACE neutron LOG.exception(_('Unrecoverable 
error: please check log '
  2014-10-01 13:46:10.588 5812 TRACE neutron   File 
"/usr/lib/python2.7/site-packages/neutron/openstack/common/excutils.py", line 
82, in __exit__
  2014-10-01 13:46:10.588 5812 TRACE neutron six.reraise(self.type_, 
self.value, self.tb)
  2014-10-01 13:46:10.588 5812 TRACE neutron   File 
"/usr/lib/python2.7/site-packages/neutron/service.py", line 102, in serve_wsgi
  2014-10-01 13:46:10.588 5812 TRACE neutron service.start()
  2014-10-01 13:46:10.588 5812 TRACE neutron   File 
"/usr/lib/python2.7/site-packages/neutron/service.py", line 73, in start
  2014-10-01 13:46:10.588 5812 TRACE neutron self.wsgi_app = 
_run_wsgi(self.app_name)
  2014-10-01 13:46:10.588 5812 TRACE neutron   File 
"/usr/lib/python2.7/site-packages/neutron/service.py", line 168, in _run_wsgi
  2014-10-01 13:46:10.588 5812 TRACE neutron app = 
config.load_paste_app(app_name)
  2014-10-01 13:46:10.588 5812 TRACE neutron   File 
"/usr/lib/python2.7/site-packages/neutron/common/config.py", line 182, in 
load_paste_app
  2014-10-01 13:46:10.588 5812 TRACE neutron app = 
deploy.loadapp("config:%s" % config_path, name=app_name)
  2014-10-01 13:46:10.588 5812 TRACE neutron   File 
"/usr/lib/python2.7/site-packages/paste/deploy/loadwsgi.py", line 247, in 
loadapp
  2014-10-01 13:46:10.588 5812 TRACE neutron return loadobj(APP, uri, 
name=name, **kw)
  2014-10-01 13:46:10.588 5812 TRACE neutron   File 
"/usr/lib/python2.7/site-packages/paste/deploy/loadwsgi.py", line 272, in 
loadobj
  2014-10-01 13:46:10.588 5812 TRACE neutron return context.create()
  2014-10-01 13:46:10.588 5812 TRACE neutron   File 
"/usr/lib/python2.7/site-packages/paste/deploy/loadwsgi.py", line 710, in create
  2014-10-01 13:46:10.588 5812 TRACE neutron return 
self.object_type.invoke(self)
  2014-10-01 13:46:10.588 5812 TRACE neutron   File 
"/usr/lib/python2.7/site-packages/paste/deploy/loadwsgi.py", line 144, in invoke
  2014-10-01 13:46:10.588 5812 TRACE neutron **context.local_conf)
  2014-10-01 13:46:10.588 5812 TRACE neutron   File 
"/usr/lib/python2.7/site-packages/paste/deploy/util.py", line 56, in fix_call
  2014-10-01 13:46:10.588 5812 TRACE neutron val = callable(*args, **kw)
  2014-10-01 13:46:10.588 5812 TRACE neutron   File 
"/usr/lib/python2.7/site-packages/paste/urlmap.py", line 25, in urlmap_factory
  2014-10-01 13:46:10.588 5812 TRACE neutron app = loader.get_app(app_name, 
global_conf=global_conf)
  2014-10-01 13:46:10.588 5812 TRACE neutron   File 
"/usr/lib/python2.7/site-packages/paste/deploy/loadwsgi.py", line 350, in 
get_app
  2014-10-01 13:46:10.588 5812 TRACE neutron name=name, 

[Yahoo-eng-team] [Bug 1588927] Re: /v3/groups?name= bypasses group_filter for LDAP

2016-06-09 Thread OpenStack Infra
Reviewed:  https://review.openstack.org/325939
Committed: 
https://git.openstack.org/cgit/openstack/keystone/commit/?id=1c0e59dc9c0cd8bb4fd54f26d01986a53bcd148c
Submitter: Jenkins
Branch:master

commit 1c0e59dc9c0cd8bb4fd54f26d01986a53bcd148c
Author: Matthew Edmonds 
Date:   Fri Jun 3 14:54:54 2016 -0400

Honor ldap_filter on filtered group list

Fix GET /v3/groups?name= to honor conf.ldap.group_filter.

The case where groups are listed for a specific user was already
honoring the filter, but the case where all groups are listed was not.
Moved the check into the get_all_filtered method that is shared by both
cases so that it is not duplicated.

Change-Id: I4a11394de2e6414ba936e01bcf2fcc523bab8ba5
Closes-Bug: #1588927


** Changed in: keystone
   Status: In Progress => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Identity (keystone).
https://bugs.launchpad.net/bugs/1588927

Title:
  /v3/groups?name= bypasses group_filter for LDAP

Status in OpenStack Identity (keystone):
  Fix Released

Bug description:
  The same problem reported and fixed for users as
  https://bugs.launchpad.net/keystone/+bug/1577804 also exists for
  groups.

To manage notifications about this bug go to:
https://bugs.launchpad.net/keystone/+bug/1588927/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1514056] Re: Restarting OVS agent drops VMs traffic when using VLAN provider bridges

2016-06-09 Thread Clayton O'Neill
** Changed in: neutron
   Status: In Progress => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1514056

Title:
  Restarting OVS agent drops VMs traffic when using VLAN provider
  bridges

Status in neutron:
  Fix Released

Bug description:
  (amuller) editing bug report based on comment 7:

  Dropping flows on the physical bridges causes networking to drop.
  It's one of the two places that still causes networking to drop when
  OVS agent is restarted. The other is that the patch port between br-
  int and br-tun is being deleted and rebuilt during startup.

  The original bug description contained the intent to set a cookie on the
  physical bridges for consistency purposes. In other words, in the absence of
  cookies, flows can't be removed properly, so for the kilo/liberty versions
  that led to stale flows remaining in OVS and disrupting the network.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1514056/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1590746] [NEW] SRIOV PF/VF allocation fails with NUMA aware flavor

2016-06-09 Thread Ricardo Noriega
Public bug reported:

Description
===
It seems that the main failure happens due to incorrect NUMA filtering in the
PCI allocation mechanism. The allocation is done according to the instance
NUMA topology; however, this is not always correct, specifically in the case
when a user selects hw:numa_nodes=1, which means only that the VM will take
resources from a single NUMA node, not from a specific one.


Steps to reproduce
==

Create nova flavor with NUMA awareness, CPU pinning, Huge pages, etc:

#  nova flavor-create prefer_pin_1 auto 2048 20 1
#  nova flavor-key prefer_pin_1 set  hw:numa_nodes=1
#  nova flavor-key prefer_pin_1 set  hw:mem_page_size=1048576
#  nova flavor-key prefer_pin_1 set hw:numa_mempolicy=strict
#  nova flavor-key prefer_pin_1 set hw:cpu_policy=dedicated
#  nova flavor-key prefer_pin_1 set hw:cpu_thread_policy=prefer

Then instantiate VMs with direct-physical neutron ports:

neutron port-create nfv_sriov --binding:vnic-type direct-physical --name pf1
nova boot pf1 --flavor prefer_pin_1 --image centos_udev --nic 
port-id=a0fe88f6-07cc-4c70-b702-1915e36ed728
neutron port-create nfv_sriov --binding:vnic-type direct-physical --name pf2
nova boot pf2 --flavor prefer_pin_1 --image centos_udev --nic 
port-id=b96de3ec-ef94-428b-96bc-dc46623a2427

The third VM instantiation failed. Our environment has 4 NICs configured to
be allocated. However, with a regular flavor (m1.normal), the instantiation
works:

neutron port-create nfv_sriov --binding:vnic-type direct-physical --name pf3
nova boot pf3 --flavor 2 --image centos_udev --nic 
port-id=52caacfe-0324-42bd-84ad-9a54d80e8fbe
neutron port-create nfv_sriov --binding:vnic-type direct-physical --name pf4
nova boot pf4 --flavor 2 --image centos_udev --nic 
port-id=7335a9a6-82d0-4595-bb88-754678db56ef


Expected result
===

PCI passthrough (PFs and VFs) should work in an environment with
NUMATopologyFilter enabled.


Actual result
=

Checking availability of NICs with NUMATopologyFilter is not working.


Environment
===

1 controller + 1 compute.

OpenStack Mitaka

Logs & Configs
==

See attachment
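
As a data point for triage, this is how we check which NUMA node the
candidate NICs sit on (generic commands; the PCI address is an example):

# NUMA node of a candidate PF; -1 means the platform reports no affinity
cat /sys/bus/pci/devices/0000:05:00.0/numa_node

# host CPU/memory layout per node
numactl --hardware

If all whitelisted devices sit on one NUMA node and hw:numa_nodes=1 places
the instance on the other node, the PCI claim can fail even though free
devices exist.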

** Affects: nova
 Importance: Undecided
 Status: New

** Attachment added: "sosreport-nfv100.hi.inet-20160609134718.tar.xz"
   
https://bugs.launchpad.net/bugs/1590746/+attachment/4680374/+files/sosreport-nfv100.hi.inet-20160609134718.tar.xz

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1590746

Title:
  SRIOV PF/VF allocation fails with NUMA aware flavor

Status in OpenStack Compute (nova):
  New

Bug description:
  Description
  ===
  It seems that the main failure happens due to the incorrect NUMA filtering in 
the pci allocation mechanism. The allocation is being done according to the 
instance NUMA topology, however, this is not always correct. Specifically in 
the case when a user selects hw:numa_nodes=1, which would mean that VM will 
take resources from just one numa node and not from a specific one.

  
  Steps to reproduce
  ==

  Create nova flavor with NUMA awareness, CPU pinning, Huge pages, etc:

  #  nova flavor-create prefer_pin_1 auto 2048 20 1
  #  nova flavor-key prefer_pin_1 set  hw:numa_nodes=1
  #  nova flavor-key prefer_pin_1 set  hw:mem_page_size=1048576
  #  nova flavor-key prefer_pin_1 set hw:numa_mempolicy=strict
  #  nova flavor-key prefer_pin_1 set hw:cpu_policy=dedicated
  #  nova flavor-key prefer_pin_1 set hw:cpu_thread_policy=prefer

  Then instantiate VMs with direct-physical neutron ports:

  neutron port-create nfv_sriov --binding:vnic-type direct-physical --name pf1
  nova boot pf1 --flavor prefer_pin_1 --image centos_udev --nic 
port-id=a0fe88f6-07cc-4c70-b702-1915e36ed728
  neutron port-create nfv_sriov --binding:vnic-type direct-physical --name pf2
  nova boot pf2 --flavor prefer_pin_1 --image centos_udev --nic 
port-id=b96de3ec-ef94-428b-96bc-dc46623a2427

  Third VM instantiation failed. Our environment has got 4 NICs
  configured to be allocated. However, with a regular flavor
  (m1.normal), the instantiation works:

  neutron port-create nfv_sriov --binding:vnic-type direct-physical --name pf3
  nova boot pf3 --flavor 2 --image centos_udev --nic 
port-id=52caacfe-0324-42bd-84ad-9a54d80e8fbe
  neutron port-create nfv_sriov --binding:vnic-type direct-physical --name pf4
  nova boot pf4 --flavor 2 --image centos_udev --nic 
port-id=7335a9a6-82d0-4595-bb88-754678db56ef

  
  Expected result
  ===

  PCI passthrough (PFs and VFs) should work in an environment with
  NUMATopologyFilter enabled.

  
  Actual result
  =

  Checking availability of NICs with NUMATopologyFilter is not working.

  
  Environment
  ===

  1 controller + 1 compute.

  OpenStack Mitaka

  Logs & Configs
  ==

  See attachment

To manage notifications about this 

[Yahoo-eng-team] [Bug 1384108] Re: Exception during message handling: QueuePool limit of size 10 overflow 20 reached, connection timed out, timeout 10

2016-06-09 Thread James Page
Marking won't fix - I've not seen this bug for a long time and it has been
quiet for over 12 months.

** Changed in: neutron (Ubuntu)
   Status: Incomplete => Won't Fix

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1384108

Title:
  Exception during message handling: QueuePool limit of size 10 overflow
  20 reached, connection timed out, timeout 10

Status in neutron:
  Confirmed
Status in neutron package in Ubuntu:
  Won't Fix

Bug description:
  OpenStack Juno release, Ubuntu 14.04 using Cloud Archive; under
  relatively high instance creation concurrency (150), neutron starts to
  throw some errors:

  2014-10-21 16:40:44.124 16312 ERROR oslo.messaging._drivers.common 
[req-8e3ebbdb-bc01-439d-af86-655176f206a6 ] ['Traceback (most recent call 
last):\n', '  File 
"/usr/lib/python2.7/dist-packages/oslo/messaging/rpc/dispatcher.py", line 134, 
in _dispatch_and_reply\nincoming.message))\n', '  File 
"/usr/lib/python2.7/dist-packages/oslo/messaging/rpc/dispatcher.py", line 177, 
in _dispatch\nreturn self._do_dispatch(endpoint, method, ctxt, args)\n', '  
File "/usr/lib/python2.7/dist-packages/oslo/messaging/rpc/dispatcher.py", line 
123, in _do_dispatch\nresult = getattr(endpoint, method)(ctxt, 
**new_args)\n', '  File 
"/usr/lib/python2.7/dist-packages/neutron/api/rpc/handlers/securitygroups_rpc.py",
 line 74, in security_group_info_for_devices\nports = 
self._get_devices_info(devices_info)\n', '  File 
"/usr/lib/python2.7/dist-packages/neutron/api/rpc/handlers/securitygroups_rpc.py",
 line 41, in _get_devices_info\nport = 
self.plugin.get_port_from_device(device)\n', '  File "/
 usr/lib/python2.7/dist-packages/neutron/plugins/ml2/plugin.py", line 1161, in 
get_port_from_device\nport = db.get_port_and_sgs(port_id)\n', '  File 
"/usr/lib/python2.7/dist-packages/neutron/plugins/ml2/db.py", line 222, in 
get_port_and_sgs\nport_and_sgs = query.all()\n', '  File 
"/usr/lib/python2.7/dist-packages/sqlalchemy/orm/query.py", line 2300, in all\n 
   return list(self)\n', '  File 
"/usr/lib/python2.7/dist-packages/sqlalchemy/orm/query.py", line 2412, in 
__iter__\nreturn self._execute_and_instances(context)\n', '  File 
"/usr/lib/python2.7/dist-packages/sqlalchemy/orm/query.py", line 2425, in 
_execute_and_instances\nclose_with_result=True)\n', '  File 
"/usr/lib/python2.7/dist-packages/sqlalchemy/orm/query.py", line 2416, in 
_connection_from_session\n**kw)\n', '  File 
"/usr/lib/python2.7/dist-packages/sqlalchemy/orm/session.py", line 854, in 
connection\nclose_with_result=close_with_result)\n', '  File 
"/usr/lib/python2.7/dist-packages/sqlalchemy/orm/ses
 sion.py", line 858, in _connection_for_bind\nreturn 
self.transaction._connection_for_bind(engine)\n', '  File 
"/usr/lib/python2.7/dist-packages/sqlalchemy/orm/session.py", line 322, in 
_connection_for_bind\nconn = bind.contextual_connect()\n', '  File 
"/usr/lib/python2.7/dist-packages/sqlalchemy/engine/base.py", line 1799, in 
contextual_connect\nself.pool.connect(),\n', '  File 
"/usr/lib/python2.7/dist-packages/sqlalchemy/pool.py", line 338, in connect\n   
 return _ConnectionFairy._checkout(self)\n', '  File 
"/usr/lib/python2.7/dist-packages/sqlalchemy/pool.py", line 641, in _checkout\n 
   fairy = _ConnectionRecord.checkout(pool)\n', '  File 
"/usr/lib/python2.7/dist-packages/sqlalchemy/pool.py", line 440, in checkout\n  
  rec = pool._do_get()\n', '  File 
"/usr/lib/python2.7/dist-packages/sqlalchemy/pool.py", line 957, in _do_get\n   
 (self.size(), self.overflow(), self._timeout))\n', 'TimeoutError: QueuePool 
limit of size 10 overflow 20 reached, connection timed out, tim
 eout 10\n']
  2014-10-21 16:40:44.126 16312 ERROR oslo.messaging.rpc.dispatcher 
[req-ea96dc85-dc0f-4ddc-a827-dbc25ab32a03 ] Exception during message handling: 
QueuePool limit of size 10 overflow 20 reached, connection timed out, timeout 10
  2014-10-21 16:40:44.126 16312 TRACE oslo.messaging.rpc.dispatcher Traceback 
(most recent call last):
  2014-10-21 16:40:44.126 16312 TRACE oslo.messaging.rpc.dispatcher   File 
"/usr/lib/python2.7/dist-packages/oslo/messaging/rpc/dispatcher.py", line 134, 
in _dispatch_and_reply
  2014-10-21 16:40:44.126 16312 TRACE oslo.messaging.rpc.dispatcher 
incoming.message))
  2014-10-21 16:40:44.126 16312 TRACE oslo.messaging.rpc.dispatcher   File 
"/usr/lib/python2.7/dist-packages/oslo/messaging/rpc/dispatcher.py", line 177, 
in _dispatch
  2014-10-21 16:40:44.126 16312 TRACE oslo.messaging.rpc.dispatcher return 
self._do_dispatch(endpoint, method, ctxt, args)
  2014-10-21 16:40:44.126 16312 TRACE oslo.messaging.rpc.dispatcher   File 
"/usr/lib/python2.7/dist-packages/oslo/messaging/rpc/dispatcher.py", line 123, 
in _do_dispatch
  2014-10-21 16:40:44.126 16312 TRACE oslo.messaging.rpc.dispatcher result 
= getattr(endpoint, method)(ctxt, **new_args)
  2014-10-21 

[Yahoo-eng-team] [Bug 1584762] Re: Assignment with mock.Mock broke other tests

2016-06-09 Thread OpenStack Infra
Reviewed:  https://review.openstack.org/319916
Committed: 
https://git.openstack.org/cgit/openstack/nova/commit/?id=be62910946bb109fdf3e2101eaacdf19f9f1b2e4
Submitter: Jenkins
Branch:master

commit be62910946bb109fdf3e2101eaacdf19f9f1b2e4
Author: Javeme 
Date:   Mon May 23 20:49:42 2016 +0800

Nova UTs broken due to modifying loopingcall global var

In unit tests, this practice that we just replace a method/class with
the Mock directly would break other tests, like the following:
loopingcall.FixedIntervalLoopingCall = mock.Mock()

We should use mock.patch() instead of the assignment with mock.Mock.

Change-Id: Id6f0ee53fc7ecf452fa8c015e26e4d59e716a10d
Closes-Bug: #1584762


** Changed in: nova
   Status: In Progress => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1584762

Title:
  Assignment with mock.Mock broke other tests

Status in OpenStack Compute (nova):
  Fix Released

Bug description:
  In unit tests, the practice of replacing a method/class directly with a
  Mock can break other tests [1], like the following:
  loopingcall.FixedIntervalLoopingCall = mock.Mock() [2]

  We should use mock.patch() instead.
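
  A minimal sketch of the difference (module and test names are only
  illustrative):

  import mock
  from oslo_service import loopingcall

  # anti-pattern: the module attribute is replaced and never restored, so
  # every test that runs afterwards sees the Mock
  loopingcall.FixedIntervalLoopingCall = mock.Mock()

  # preferred: the patch is undone automatically when the test finishes
  @mock.patch.object(loopingcall, 'FixedIntervalLoopingCall')
  def test_something(self, mock_looping):
      ...

  # or, for a limited scope:
  with mock.patch.object(loopingcall, 'FixedIntervalLoopingCall'):
      ...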

  References:
  [1] 
http://logs.openstack.org/34/181634/28/check/gate-nova-python27-db/6ccd0b2/testr_results.html.gz
  [2] 
https://github.com/openstack/nova/blob/master/nova/tests/unit/virt/libvirt/test_driver.py#L14676

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1584762/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1589821] Re: cleanup_incomplete_migrations periodic task regression with commit 099cf53925c0a0275325339f21932273ee9ce2bc

2016-06-09 Thread Tristan Cacqueray
Since this report concerns a possible security risk, an incomplete
security advisory task has been added while the core security reviewers
for the affected project or projects confirm the bug and discuss the
scope of any vulnerability along with potential solutions.

So IIUC, nova mitaka version(s) are affected by OSSA 2015-017. Does the
impact description still apply?


Title: Nova may fail to delete images in resize state

Description:
If an authenticated user deletes an instance while it is in resize state, it 
will cause the original instance to not be deleted from the compute node it was 
running on. An attacker can use this to launch a denial of service attack. All 
Nova setups are affected.


This may need a new OSSA for this regression.

** Also affects: ossa
   Importance: Undecided
   Status: New

** Changed in: ossa
   Status: New => Incomplete

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1589821

Title:
  cleanup_incomplete_migrations periodic task regression with commit
  099cf53925c0a0275325339f21932273ee9ce2bc

Status in OpenStack Compute (nova):
  Fix Released
Status in OpenStack Compute (nova) mitaka series:
  Fix Committed
Status in OpenStack Security Advisory:
  Incomplete

Bug description:
  Patch [1] changes the instance filtering condition in periodic task
  "cleanup_incomplete_migrations" introduced in [2], in such a way that
  it generates new issue, [3]

  After change [1] lands,  the condition changes filtering logic, so now
  all instances on current host are filtered, which is not expected.

  We should filter all instances where instance uuids are associated
  with migration records and those migration status is set to 'error'
  and instance is marked as deleted.

  [1] https://review.openstack.org/#/c/256102/
  [2] https://review.openstack.org/#/c/219299/
  [2] https://bugs.launchpad.net/nova/+bug/1586309

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1589821/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1584199] Re: HyperV: Nova serial console access support

2016-06-09 Thread OpenStack Infra
Reviewed:  https://review.openstack.org/320477
Committed: 
https://git.openstack.org/cgit/openstack/nova/commit/?id=74d6c42a1ff9b73ee4b28bf131b274607360fd49
Submitter: Jenkins
Branch:master

commit 74d6c42a1ff9b73ee4b28bf131b274607360fd49
Author: Yosef Hoffman 
Date:   Mon May 23 21:49:20 2016 -0400

Update Support Matrix

HyperV: Nova serial console access support [1] has been merged
successfully. Update support matrix accordingly.

[1] https://review.openstack.org/145004

Change-Id: Ie6792e91c5c6c24d4af448605e4bb7d245bf41a8
Closes-Bug: #1584199


** Changed in: nova
   Status: In Progress => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1584199

Title:
  HyperV: Nova serial console access support

Status in OpenStack Compute (nova):
  Fix Released

Bug description:
  https://review.openstack.org/145004
  Dear bug triager. This bug was created since a commit was marked with 
DOCIMPACT.
  Your project "openstack/nova" is set up so that we directly report the 
documentation bugs against it. If this needs changing, the docimpact-group 
option needs to be added for the project. You can ask the OpenStack infra team 
(#openstack-infra on freenode) for help if you need to.

  commit e215e6cba9922e98cb358891a3f9be2e809d770f
  Author: Lucian Petrut 
  Date:   Mon Jan 5 16:38:10 2015 +0200

  HyperV: Nova serial console access support
  
  Hyper-V provides a solid interface for accessing serial ports via
  named pipes, already employed in the Nova serial console log
  implementation.
  
  This patch makes use of this interface by implementing a simple TCP
  socket proxy, providing access to instance serial console ports.
  
  DocImpact
  
  Implements: blueprint hyperv-serial-ports
  
  Change-Id: I58c328391a80ee8b81f66b2e09a1bfa4b26e584c

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1584199/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1449062] Re: qemu-img calls need to be restricted by ulimit (CVE-2015-5162)

2016-06-09 Thread OpenStack Infra
Reviewed:  https://review.openstack.org/307663
Committed: 
https://git.openstack.org/cgit/openstack/nova/commit/?id=068d851561addfefb2b812d91dc2011077cb6e1d
Submitter: Jenkins
Branch:master

commit 068d851561addfefb2b812d91dc2011077cb6e1d
Author: Daniel P. Berrange 
Date:   Mon Apr 18 16:32:19 2016 +

virt: set address space & CPU time limits when running qemu-img

This uses the new 'prlimit' parameter for oslo.concurrency execute
method, to set an address space limit of 1GB and CPU time limit
of 2 seconds, when running qemu-img.

This is a re-implementation of the previously reverted commit

commit da217205f53f9a38a573fb151898fbbeae41021d
Author: Tristan Cacqueray 
Date:   Wed Aug 5 17:17:04 2015 +

virt: Use preexec_fn to ulimit qemu-img info call

Closes-Bug: #1449062
Change-Id: I135b5242af1bfdcb0ea09a6fcda21fc03a6fbe7d


** Changed in: nova
   Status: In Progress => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1449062

Title:
  qemu-img calls need to be restricted by ulimit (CVE-2015-5162)

Status in Cinder:
  New
Status in Glance:
  In Progress
Status in OpenStack Compute (nova):
  Fix Released
Status in OpenStack Security Advisory:
  Confirmed

Bug description:
  Reported via private E-mail from Richard W.M. Jones.

  It turns out the qemu image parser is not hardened against malicious input
  and can be abused to allocate an arbitrary amount of memory and/or dump a
  lot of information when used with "--output=json".

  The solution seems to be: limit qemu-img resources using ulimit.
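
  The change merged above takes the oslo.concurrency route rather than a raw
  ulimit; roughly (a sketch, with the limits quoted from the commit message):

  from oslo_concurrency import processutils
  from oslo_utils import units

  QEMU_IMG_LIMITS = processutils.ProcessLimits(
      cpu_time=2,                      # seconds of CPU time
      address_space=1 * units.Gi)      # 1 GB of address space

  out, err = processutils.execute('qemu-img', 'info', path,
                                  prlimit=QEMU_IMG_LIMITS)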

  Example of abuse:

  -- afl1.img --

  $ /usr/bin/time qemu-img info afl1.img
  image: afl1.img
  [...]
  0.13user 0.19system 0:00.36elapsed 92%CPU (0avgtext+0avgdata 
642416maxresident)k
  0inputs+0outputs (0major+156927minor)pagefaults 0swaps

  The original image is 516 bytes, but it causes qemu-img to allocate
  640 MB.

  -- afl2.img --

  $ qemu-img info --output=json afl2.img | wc -l
  589843

  This is a 200K image which causes qemu-img info to output half a
  million lines of JSON (14 MB of JSON).

  Glance runs the --output=json variant of the command.

  -- afl3.img --

  $ /usr/bin/time qemu-img info afl3.img
  image: afl3.img
  [...]
  0.09user 0.35system 0:00.47elapsed 94%CPU (0avgtext+0avgdata 
1262388maxresident)k
  0inputs+0outputs (0major+311994minor)pagefaults 0swaps

  qemu-img allocates 1.3 GB (actually, a bit more if you play with
  ulimit -v).  It appears that you could change it to allocate
  arbitrarily large amounts of RAM.

To manage notifications about this bug go to:
https://bugs.launchpad.net/cinder/+bug/1449062/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1590607] Re: incorrect handling of host numa cell usage with instances having no numa topology

2016-06-09 Thread OpenStack Infra
Reviewed:  https://review.openstack.org/327222
Committed: 
https://git.openstack.org/cgit/openstack/nova/commit/?id=f2706b230018ca718614cd86e8c6b68f8cbd7c3f
Submitter: Jenkins
Branch:master

commit f2706b230018ca718614cd86e8c6b68f8cbd7c3f
Author: Chris Friesen 
Date:   Wed Jun 8 18:15:34 2016 -0600

Fix resource tracking for instances with no numa topology

This fixes a problem in host NUMA node resource tracking when
there is an instance with no numa topology on the same node as
instances with numa topology.

It's triggered while running the resource audit, which ultimately
calls hardware.get_host_numa_usage_from_instance() and assigns
the result to self.compute_node.numa_topology.

The problem occurs if you have a number of instances with numa
topology, and then an instance with no numa topology. When running
numa_usage_from_instances() for the instance with no numa topology
we cache the values of "memory_usage" and "cpu_usage". However,
because instance.cells is empty we don't enter the loop. Since the
two lines in this commit are indented too far they don't get called,
and we end up appending a host cell with "cpu_usage" and
"memory_usage" of zero.   This results in a host numa_topology cell
with incorrect "cpu_usage" and "memory_usage" values, though I think
the overall host cpu/memory usage is still correct.

The fix is to reduce the indentation of the two lines in question
so that they get called even when the instance has no numa topology.
This writes the original host cell usage information back to it.

Change-Id: I7e327b79b731393ed787c4e131dc6d9654f424d0
Closes-Bug: #1590607


** Changed in: nova
   Status: In Progress => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1590607

Title:
  incorrect handling of host numa cell usage with instances having no
  numa topology

Status in OpenStack Compute (nova):
  Fix Released

Bug description:
  I think there is a problem in host NUMA node resource tracking when
  there is an instance with no numa topology on the same node as
  instances with numa topology.

  It's triggered while running the resource audit, which ultimately
  calls hardware.get_host_numa_usage_from_instance() and assigns the
  result to self.compute_node.numa_topology.

  The problem occurs if you have a number of instances with numa
  topology, and then an instance with no numa topology. When running
  numa_usage_from_instances() for the instance with no numa topology we
  cache the values of "memory_usage" and "cpu_usage". However, because
  "instances" is empty we don't enter the loop. Since the two lines in
  this commit are indented too far they don't get called, and we end up
  appending a host cell with "cpu_usage" and "memory_usage" of zero.
  This results in a host numa_topology cell with incorrect "cpu_usage"
  and "memory_usage" values, though I think the overall host cpu/memory
  usage is still correct.

  The fix is to reduce the indentation of the two lines in question so
  that they get called even when the instance has no numa topology. This
  writes the original host cell usage information back to it.
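
  A standalone illustration of the pitfall (not the nova source; the numbers
  are arbitrary):

  def mirror_cell_usage(host_cell_usage, instance_cell_usages):
      new_usage = 0
      delta = 0
      for inst_usage in instance_cell_usages:
          delta += inst_usage
          # bug: indented one level too far, so it never runs when the
          # instance has no NUMA cells and new_usage stays 0
          new_usage = host_cell_usage + delta
      return new_usage

  print(mirror_cell_usage(512, [256, 256]))   # 1024, as expected
  print(mirror_cell_usage(512, []))           # 0: the host usage is lost

  Dedenting the assignment out of the loop, as the fix does for the two lines
  in question, preserves the host cell usage when the loop body never runs.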

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1590607/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1590041] Re: DVR: regression with router rescheduling

2016-06-09 Thread OpenStack Infra
Reviewed:  https://review.openstack.org/326574
Committed: 
https://git.openstack.org/cgit/openstack/neutron/commit/?id=e4b82f7e645654ad43b378bd5f243e97a16112e6
Submitter: Jenkins
Branch:master

commit e4b82f7e645654ad43b378bd5f243e97a16112e6
Author: Oleg Bondarev 
Date:   Tue Jun 7 15:26:23 2016 +

Revert "DVR: Clear SNAT namespace when agent restarts after router move"

This reverts commit 9dc70ed77e055677a4bd3257a0e9e24239ed4cce.

Change-Id: I85a8051d56c535a4de4c70b3624eb7ccefa9e656
Closes-Bug: #1590041


** Changed in: neutron
   Status: In Progress => Fix Released

** Tags added: in-stable-liberty

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1590041

Title:
  DVR: regression with router rescheduling

Status in neutron:
  Fix Released

Bug description:
  The L3 agent may not fully process a DVR router being rescheduled to it,
  which leads to loss of external connectivity.
  The reason is that with commit 9dc70ed77e055677a4bd3257a0e9e24239ed4cce the
  DVR edge router now creates the snat_namespace object in its constructor,
  while some logic in the module still checks for the existence of this
  object; for example, external_gateway_updated() will not fully process the
  router if the snat_namespace object exists.

  The proposal is to revert commit
  9dc70ed77e055677a4bd3257a0e9e24239ed4cce and then make another attempt
  to fix bug 1557909.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1590041/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1590696] [NEW] neutron-lbaas: Devstack doesn't start agent properly

2016-06-09 Thread Dr. Jens Rosenboom
Public bug reported:

If I run devstack with

enable_plugin neutron-lbaas https://github.com/openstack/neutron-lbaas.git
ENABLED_SERVICES+=,q-lbaasv1

then the service gets configured, but not started. Using

ENABLED_SERVICES+=,q-lbaas

instead works fine. The reason is that neutron-lbaas/devstack/plugin.sh
does a

run_process q-lbaas ...

and within that function there is another check for "is_enabled q-lbaas"
which is false at that point in the first case.
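
A simplified view of the mismatch (a sketch, not the exact devstack or
plugin.sh source):

# neutron-lbaas/devstack/plugin.sh hard-codes the service name:
run_process q-lbaas "..."

# devstack's run_process only starts the command when that exact name is
# enabled, so with only q-lbaasv1 in ENABLED_SERVICES nothing is started:
run_process() {
    local service=$1
    if is_service_enabled $service; then
        ...  # actually launch the command
    fi
}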

** Affects: neutron
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1590696

Title:
  neutron-lbaas: Devstack doesn't start agent properly

Status in neutron:
  New

Bug description:
  If I run devstack with

  enable_plugin neutron-lbaas https://github.com/openstack/neutron-lbaas.git
  ENABLED_SERVICES+=,q-lbaasv1

  then the service gets configured, but not started. Using

  ENABLED_SERVICES+=,q-lbaas

  instead works fine. The reason is that neutron-
  lbaas/devstack/plugin.sh does a

  run_process q-lbaas ...

  and within that function there is another check for "is_enabled
  q-lbaas" which is false at that point in the first case.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1590696/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1590693] [NEW] libvirt's use of driver.get_instance_disk_info() is generally problematic

2016-06-09 Thread Matthew Booth
Public bug reported:

The nova.virt.driver 'interface' defines a get_instance_disk_info
method, which is called by compute manager to get disk info during live
migration to get the source hypervisor's internal representation of disk
info and pass it directly to the target hypervisor over rpc. To compute
manager this is an opaque blob of stuff which only the driver
understands, which is presumably why json was chosen. There are a couple
of problems with it.

This is a useful method within the libvirt driver, which uses it fairly
liberally. However, the method returns a json blob. Every use of it
internal to the libvirt driver first json encodes it in
get_instance_disk_info, then immediately decodes it again, which is
inefficient... except 2 uses of it in migrate_disk_and_power_off and
check_can_live_migrate_source, which don't decode it and assume it's a
dict. These are both broken, which presumably means something relating
to migration of volume-backed instances is broken. The libvirt driver
should not use this internally. We can have a wrapper method to do the
json encoding for compute manager, and internally use the unencoded data
data directly.

Secondly, we're passing an unversioned blob of data over rpc. We should
probably turn this data into a versioned object.
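
A minimal sketch of the wrapper idea (names are illustrative, not the actual
nova code):

from oslo_serialization import jsonutils

def _get_instance_disk_info(self, instance, block_device_info=None):
    # internal callers work with native Python data (a list of dicts);
    # _collect_disk_info stands in for the existing gathering logic
    return self._collect_disk_info(instance, block_device_info)

def get_instance_disk_info(self, instance, block_device_info=None):
    # the driver-interface method encodes to JSON only at the RPC boundary
    return jsonutils.dumps(
        self._get_instance_disk_info(instance, block_device_info))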

** Affects: nova
 Importance: Undecided
 Status: New


** Tags: low-hanging-fruit

** Description changed:

  The nova.virt.driver 'interface' defines a get_instance_disk_info
  method, which is called by compute manager to get disk info during live
  migration to get the source hypervisor's internal representation of disk
  info and pass it directly to the target hypervisor over rpc. To compute
  manager this is an opaque blob of stuff which only the driver
  understands, which is presumably why json was chosen. There are a couple
  of problems with it.
  
  This is a useful method within the libvirt driver, which uses it fairly
  liberally. However, the method returns a json blob. Every use of it
  internal to the libvirt driver first json encodes it in
  get_instance_disk_info, then immediately decodes it again, which is
- efficient. Except 2 uses of it in migrate_disk_and_power_off and
+ inefficient... except 2 uses of it in migrate_disk_and_power_off and
  check_can_live_migrate_source, which don't decode it and assume it's a
  dict. These are both broken, which presumably means something relating
  to migration of volume-backed instances is broken. The libvirt driver
  should not use this internally. We can have a wrapper method to do the
  json encoding for compute manager, and internally use the unencoded data
  data directly.
  
  Secondly, we're passing an unversioned blob of data over rpc. We should
  probably turn this data into a versioned object.

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1590693

Title:
  libvirt's use of driver.get_instance_disk_info() is generally
  problematic

Status in OpenStack Compute (nova):
  New

Bug description:
  The nova.virt.driver 'interface' defines a get_instance_disk_info
  method, which is called by compute manager to get disk info during
  live migration to get the source hypervisor's internal representation
  of disk info and pass it directly to the target hypervisor over rpc.
  To compute manager this is an opaque blob of stuff which only the
  driver understands, which is presumably why json was chosen. There are
  a couple of problems with it.

  This is a useful method within the libvirt driver, which uses it
  fairly liberally. However, the method returns a json blob. Every use
  of it internal to the libvirt driver first json encodes it in
  get_instance_disk_info, then immediately decodes it again, which is
  inefficient... except 2 uses of it in migrate_disk_and_power_off and
  check_can_live_migrate_source, which don't decode it and assume it's a
  dict. These are both broken, which presumably means something relating
  to migration of volume-backed instances is broken. The libvirt driver
  should not use this internally. We can have a wrapper method to do the
  json encoding for compute manager, and internally use the unencoded
  data data directly.

  Secondly, we're passing an unversioned blob of data over rpc. We
  should probably turn this data into a versioned object.

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1590693/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1552394] Re: auth_url contains wrong configuration for metadata_agent.ini and other neutron config

2016-06-09 Thread Miguel Angel Ajo
@boejern-teipel, the bug description no longer seems to match what you're
describing in #18; could you open a separate bug for neutron with the
details?

Thank you.

** Changed in: neutron
   Status: New => Invalid

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1552394

Title:
  auth_url contains wrong configuration for  metadata_agent.ini and
  other neutron config

Status in neutron:
  Invalid
Status in openstack-ansible:
  Fix Released
Status in openstack-ansible liberty series:
  In Progress
Status in openstack-ansible trunk series:
  Fix Released

Bug description:
  The current configuration

  auth_url = {{ keystone_service_adminuri }}

  will lead to a incomplete URL like  http://1.2.3.4:35357 and will
  cause the neutron-metadata-agent to make bad token requests like :

  POST /tokens HTTP/1.1
  Host: 1.2.3.4:35357
  Content-Length: 91
  Accept-Encoding: gzip, deflate
  Accept: application/json
  User-Agent: python-neutronclient

  and the response is

  HTTP/1.1 404 Not Found
  Date: Tue, 01 Mar 2016 22:14:58 GMT
  Server: Apache
  Vary: X-Auth-Token
  Content-Length: 93
  Content-Type: application/json

  and the agent will stop responding with

  2016-02-26 13:34:46.478 33371 INFO eventlet.wsgi.server [-] (33371) accepted ''
  2016-02-26 13:34:46.486 33371 ERROR neutron.agent.metadata.agent [-] Unexpected error.
  2016-02-26 13:34:46.486 33371 TRACE neutron.agent.metadata.agent Traceback (most recent call last):
  2016-02-26 13:34:46.486 33371 TRACE neutron.agent.metadata.agent   File "/usr/local/lib/python2.7/dist-packages/neutron/agent/metadata/agent.py", line 109, in __call__
  2016-02-26 13:34:46.486 33371 TRACE neutron.agent.metadata.agent instance_id, tenant_id = self._get_instance_and_tenant_id(req)
  2016-02-26 13:34:46.486 33371 TRACE neutron.agent.metadata.agent   File "/usr/local/lib/python2.7/dist-packages/neutron/agent/metadata/agent.py", line 204, in _get_instance_and_tenant_id
  2016-02-26 13:34:46.486 33371 TRACE neutron.agent.metadata.agent ports = self._get_ports(remote_address, network_id, router_id)
  2016-02-26 13:34:46.486 33371 TRACE neutron.agent.metadata.agent   File "/usr/local/lib/python2.7/dist-packages/neutron/agent/metadata/agent.py", line 197, in _get_ports
  2016-02-26 13:34:46.486 33371 TRACE neutron.agent.metadata.agent return self._get_ports_for_remote_address(remote_address, networks)
  2016-02-26 13:34:46.486 33371 TRACE neutron.agent.metadata.agent   File "/usr/local/lib/python2.7/dist-packages/neutron/common/utils.py", line 101, in __call__
  2016-02-26 13:34:46.486 33371 TRACE neutron.agent.metadata.agent return self._get_from_cache(target_self, *args, **kwargs)
  2016-02-26 13:34:46.486 33371 TRACE neutron.agent.metadata.agent   File "/usr/local/lib/python2.7/dist-packages/neutron/common/utils.py", line 79, in _get_from_cache
  2016-02-26 13:34:46.486 33371 TRACE neutron.agent.metadata.agent item = self.func(target_self, *args, **kwargs)
  2016-02-26 13:34:46.486 33371 TRACE neutron.agent.metadata.agent   File "/usr/local/lib/python2.7/dist-packages/neutron/agent/metadata/agent.py", line 166, in _get_ports_for_remote_address
  2016-02-26 13:34:46.486 33371 TRACE neutron.agent.metadata.agent ip_address=remote_address)
  2016-02-26 13:34:46.486 33371 TRACE neutron.agent.metadata.agent   File "/usr/local/lib/python2.7/dist-packages/neutron/agent/metadata/agent.py", line 135, in _get_ports_from_server
  2016-02-26 13:34:46.486 33371 TRACE neutron.agent.metadata.agent return self._get_ports_using_client(filters)
  2016-02-26 13:34:46.486 33371 TRACE neutron.agent.metadata.agent   File "/usr/local/lib/python2.7/dist-packages/neutron/agent/metadata/agent.py", line 177, in _get_ports_using_client
  2016-02-26 13:34:46.486 33371 TRACE neutron.agent.metadata.agent ports = client.list_ports(**filters)
  2016-02-26 13:34:46.486 33371 TRACE neutron.agent.metadata.agent   File "/usr/local/lib/python2.7/dist-packages/neutronclient/v2_0/client.py", line 102, in with_params
  2016-02-26 13:34:46.486 33371 TRACE neutron.agent.metadata.agent ret = self.function(instance, *args, **kwargs)
  2016-02-26 13:34:46.486 33371 TRACE neutron.agent.metadata.agent   File "/usr/local/lib/python2.7/dist-packages/neutronclient/v2_0/client.py", line 534, in list_ports
  2016-02-26 13:34:46.486 33371 TRACE neutron.agent.metadata.agent **_params)
  2016-02-26 13:34:46.486 33371 TRACE neutron.agent.metadata.agent   File "/usr/local/lib/python2.7/dist-packages/neutronclient/v2_0/client.py", line 307, in list
  2016-02-26 13:34:46.486 33371 TRACE neutron.agent.metadata.agent for r in self._pagination(collection, path, **params):
  2016-02-26 13:34:46.486 33371 TRACE neutron.agent.metadata.agent   File "/usr/local/lib/python2.7/dist-packages/neutronclient/v2_0/client.py", line 320, in
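
  As a minimal sketch of why the unversioned URL fails (the v2.0 path and
  the address below are illustrative assumptions, not the confirmed
  openstack-ansible fix): the agent's client POSTs to auth_url + /tokens,
  as the request above shows, so the Identity API version has to already
  be part of auth_url itself.

    # illustrative only: the URL the metadata agent ends up POSTing to
    def token_url(auth_url):
        # the token request path is appended to whatever auth_url is configured
        return auth_url.rstrip("/") + "/tokens"

    print(token_url("http://1.2.3.4:35357"))       # POST /tokens at the root -> 404 Not Found
    print(token_url("http://1.2.3.4:35357/v2.0"))  # POST /v2.0/tokens -> a valid Identity v2.0 route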

[Yahoo-eng-team] [Bug 1585816] Re: qos-bandwidth-limit-rule-create failed with internal server error

2016-06-09 Thread Miguel Angel Ajo
Liberty was tested upstream with PyMySQL, not with the MySQL-python
(MySQLdb) driver that appears in your traceback.

Can you change your connection strings to pymysql and use this package:

http://mirror.centos.org/centos/7/cloud/x86_64/openstack-liberty/common/python2-PyMySQL-0.6.7-2.el7.noarch.rpm

It is probably also available via yum install python2-PyMySQL.
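
For example (the credentials and host below are placeholders, not your
actual values), the [database] connection string in neutron.conf would
change from something like

    connection = mysql://neutron:NEUTRON_PASS@controller/neutron

to

    connection = mysql+pymysql://neutron:NEUTRON_PASS@controller/neutron

so that SQLAlchemy loads the PyMySQL DBAPI instead of MySQL-python.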

** Changed in: neutron
   Status: New => Invalid

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1585816

Title:
  qos-bandwidth-limit-rule-create failed with internal server error
Status in neutron:
  Invalid

Bug description:
  When using the following command to create a bandwidth rule:
  # neutron qos-bandwidth-limit-rule-create --max-kbps 1000 --max-burst-kbps 100 test-policy

  the error returned is:
  Request Failed: internal server error while processing your request.

  In /var/log/neutron/server.log, the error message contains:
  2016-05-26 06:15:32.352 1878 ERROR oslo_db.sqlalchemy.exc_filters [req-ecefbd10-e988-43e1-a556-0f7b8a2b58a7 2eaf7ddac8b94a94ab40fad216341232 e91adc92dfea433f9432857edb8af8cb - - -] DBAPIError exception wrapped from (_mysql_exceptions.ProgrammingError) (1064, "You have an error in your SQL syntax; check the manual that corresponds to your MySQL server version for the right syntax to use near ')' at line 3") [SQL: u'SELECT qos_policies.tenant_id AS qos_policies_tenant_id, qos_policies.id AS qos_policies_id, qos_policies.name AS qos_policies_name, qos_policies.description AS qos_policies_description, qos_policies.shared AS qos_policies_shared \nFROM qos_policies \nWHERE qos_policies.name = %s'] [parameters: ([u'test-policy'],)]
  2016-05-26 06:15:32.352 1878 ERROR oslo_db.sqlalchemy.exc_filters Traceback (most recent call last):
  2016-05-26 06:15:32.352 1878 ERROR oslo_db.sqlalchemy.exc_filters   File "/usr/lib64/python2.7/site-packages/sqlalchemy/engine/base.py", line 1139, in _execute_context
  2016-05-26 06:15:32.352 1878 ERROR oslo_db.sqlalchemy.exc_filters context)
  2016-05-26 06:15:32.352 1878 ERROR oslo_db.sqlalchemy.exc_filters   File "/usr/lib64/python2.7/site-packages/sqlalchemy/engine/default.py", line 450, in do_execute
  2016-05-26 06:15:32.352 1878 ERROR oslo_db.sqlalchemy.exc_filters cursor.execute(statement, parameters)
  2016-05-26 06:15:32.352 1878 ERROR oslo_db.sqlalchemy.exc_filters   File "/usr/lib64/python2.7/site-packages/MySQLdb/cursors.py", line 174, in execute
  2016-05-26 06:15:32.352 1878 ERROR oslo_db.sqlalchemy.exc_filters self.errorhandler(self, exc, value)
  2016-05-26 06:15:32.352 1878 ERROR oslo_db.sqlalchemy.exc_filters   File "/usr/lib64/python2.7/site-packages/MySQLdb/connections.py", line 36, in defaulterrorhandler
  2016-05-26 06:15:32.352 1878 ERROR oslo_db.sqlalchemy.exc_filters raise errorclass, errorvalue
  2016-05-26 06:15:32.352 1878 ERROR oslo_db.sqlalchemy.exc_filters ProgrammingError: (1064, "You have an error in your SQL syntax; check the manual that corresponds to your MySQL server version for the right syntax to use near ')' at line 3")
  2016-05-26 06:15:32.352 1878 ERROR oslo_db.sqlalchemy.exc_filters
  2016-05-26 06:15:32.353 1878 ERROR neutron.api.v2.resource [req-ecefbd10-e988-43e1-a556-0f7b8a2b58a7 2eaf7ddac8b94a94ab40fad216341232 e91adc92dfea433f9432857edb8af8cb - - -] index failed
  2016-05-26 06:15:32.353 1878 ERROR neutron.api.v2.resource Traceback (most recent call last):
  2016-05-26 06:15:32.353 1878 ERROR neutron.api.v2.resource   File "/usr/lib/python2.7/site-packages/neutron/api/v2/resource.py", line 83, in resource
  2016-05-26 06:15:32.353 1878 ERROR neutron.api.v2.resource result = method(request=request, **args)
  2016-05-26 06:15:32.353 1878 ERROR neutron.api.v2.resource   File "/usr/lib/python2.7/site-packages/neutron/api/v2/base.py", line 340, in index
  2016-05-26 06:15:32.353 1878 ERROR neutron.api.v2.resource return self._items(request, True, parent_id)
  2016-05-26 06:15:32.353 1878 ERROR neutron.api.v2.resource   File "/usr/lib/python2.7/site-packages/neutron/api/v2/base.py", line 267, in _items
  2016-05-26 06:15:32.353 1878 ERROR neutron.api.v2.resource obj_list = obj_getter(request.context, **kwargs)
  2016-05-26 06:15:32.353 1878 ERROR neutron.api.v2.resource   File "/usr/lib/python2.7/site-packages/neutron/db/db_base_plugin_common.py", line 49, in inner_filter
  2016-05-26 06:15:32.353 1878 ERROR neutron.api.v2.resource result = f(*args, **kwargs)
  2016-05-26 06:15:32.353 1878 ERROR neutron.api.v2.resource   File "/usr/lib/python2.7/site-packages/neutron/db/db_base_plugin_common.py", line 35, in inner
  2016-05-26 06:15:32.353 1878 ERROR neutron.api.v2.resource result = f(*args, **kwargs)
  2016-05-26 06:15:32.353 1878 ERROR neutron.api.v2.resource   File "/usr/lib/python2.7/site-packages/neutron/services/qos/qos_plugin.py", line 84, in get_policies
  2016-05-26 06:15:32.353 1878 ERROR neutron.api.v2.resource

[Yahoo-eng-team] [Bug 1566113] Re: Create volume transfer name required error

2016-06-09 Thread OpenStack Infra
Reviewed:  https://review.openstack.org/301432
Committed: 
https://git.openstack.org/cgit/openstack/horizon/commit/?id=5fb8b29d02437408e73d6188da7603a6ee0ef5e6
Submitter: Jenkins
Branch: master

commit 5fb8b29d02437408e73d6188da7603a6ee0ef5e6
Author: chenqiaomin 
Date:   Tue Apr 5 02:42:06 2016 +

Make the volume transfer name field required

In the 'Create Transfer' form, if the name input is left blank, the
request fails with "Error: Unable to create volume transfer", so the
name field should be required.

Change-Id: Ia749cfa5eb0bce4b778fa2099705fdba43dc0b0c
Closes-Bug: #1566113
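
As a minimal sketch of the kind of change involved (the class and field
names here are illustrative, not necessarily the exact ones in Horizon's
transfer form):

    # hypothetical names; the real form lives in Horizon's volumes dashboard
    from django.utils.translation import ugettext_lazy as _

    from horizon import forms


    class CreateTransferForm(forms.SelfHandlingForm):
        # required=True makes the form reject a blank name before the
        # cinder API call can fail with "Unable to create volume transfer"
        name = forms.CharField(max_length=255,
                               label=_("Transfer Name"),
                               required=True)

        def handle(self, request, data):
            # a real form would call the cinder transfer API here
            return True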


** Changed in: horizon
   Status: In Progress => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1566113

Title:
  Create volume transfer name required error

Status in OpenStack Dashboard (Horizon):
  Fix Released

Bug description:
  In the 'Create Transfer' form, if the name input is left blank, the
  request fails with "Error: Unable to create volume transfer", so the
  name field should be required.

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1566113/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1571486] Re: tempest jobs timeout due to neutron tempest plugin

2016-06-09 Thread YAMAMOTO Takashi
** Changed in: networking-midonet
   Status: In Progress => Fix Released

** Changed in: networking-midonet
Milestone: None => 2.0.0

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1571486

Title:
  tempest jobs timeout due to neutron tempest plugin

Status in networking-midonet:
  Fix Released
Status in neutron:
  Fix Released
Status in tempest:
  Invalid

Bug description:
  The recently added neutron tempest plugin performs eventlet monkey patching.
  This affects other tempest tests; namely, all or most paramiko-using
  tests seem to be failing.

  examples:
  http://logs.openstack.org/87/199387/25/check/gate-tempest-dsvm-networking-midonet-v1/28549d7/
  http://logs.openstack.org/87/199387/25/check/gate-tempest-dsvm-networking-midonet-v2/d72e49c/
  http://logs.openstack.org/87/199387/25/check/gate-tempest-dsvm-networking-midonet-ml2/4d9e8ff/

  It seems the tap-as-a-service and neutron-fwaas jobs are also affected.
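
  As a rough illustration (not the plugin's actual code) of why import-time
  monkey patching leaks into every other test loaded in the same process:

    # anything imported after this point in the same interpreter -- e.g.
    # paramiko inside a scenario test -- runs against the patched
    # socket/select/threading modules
    import eventlet
    eventlet.monkey_patch()

    import socket
    print(socket.socket)  # now eventlet's green socket class, not the stdlib one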

To manage notifications about this bug go to:
https://bugs.launchpad.net/networking-midonet/+bug/1571486/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1590649] [NEW] Incorrect prior state check in hz-table select box handler

2016-06-09 Thread Richard Jones
Public bug reported:

The documentation for the selection functionality for hz-table says to
configure the checkbox as:

 

The problem is that the row select checkbox click handler currently
performs this check (in tCtrl toggleSelect):

  if (angular.isDefined(ctrl.selections[key])) {
ctrl.selections[key].checked = checkedState;
  } else {
ctrl.selections[key] = { checked: checkedState, item: row };
  }

Since the row will always already exist in selections, the second
branch never fires, so the item is never set.

This is reproducible in the /project/ngimages/ view with the batch
delete.

** Affects: horizon
 Importance: Undecided
 Status: Invalid

** Changed in: horizon
   Status: New => Invalid

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1590649

Title:
  Incorrect prior state check in hz-table select box handler

Status in OpenStack Dashboard (Horizon):
  Invalid

Bug description:
  The documentation for the selection functionality for hz-table says to
  configure the checkbox as:

   

  The problem is that the row select checkbox click handler currently
  performs this check (in tCtrl toggleSelect):

if (angular.isDefined(ctrl.selections[key])) {
  ctrl.selections[key].checked = checkedState;
} else {
  ctrl.selections[key] = { checked: checkedState, item: row };
}

  Since the row will always already exist in selections, the second
  branch never fires, so the item is never set.

  This is reproducible in the /project/ngimages/ view with the batch
  delete.

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1590649/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp