[Yahoo-eng-team] [Bug 1611054] Re: mitaka release tags + tags-any causing internal server error

2016-10-07 Thread Launchpad Bug Tracker
[Expired for neutron because there has been no activity for 60 days.]

** Changed in: neutron
   Status: Incomplete => Expired

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1611054

Title:
  mitaka release tags + tags-any causing internal server error

Status in neutron:
  Expired

Bug description:
  Internal server error when querying neutron network resources using a
  --tags and --tags-any combination.
  This was found in the Mitaka release.

  stack@falcon-devstack ~
  $ net-list -c id -c name --tenant-id f900b2f993234d31b450ca58d0cae025 -c tags
  +--------------------------------------+----------------------------------+----------------------------------------------+
  | id                                   | name                             | tags                                         |
  +--------------------------------------+----------------------------------+----------------------------------------------+
  | 64d91f98-9a39-44dc-809f-e82c8107a15d | tempest-test-network--1990720236 | [u'gold', u'production', u'ab', u'west']     |
  | 1ddfab2f-c568-40e1-b5a4-e57f8e3eac1c | tempest-test-network--1607343301 | [u'testing', u'silver', u'south']            |
  | d48c067e-1404-45d1-9cb0-d528bd67ddce | tempest-test-network--631383401  | [u'testing', u'brown', u'south', u'a']       |
  | bdc417a7-7be2-4f3b-9853-4efeec71e1e5 | tempest-test-network--1353891402 | [u'production', u'east', u'gold']            |
  | 95bb24f4-a883-4d3a-a2a9-24be4851646c | tempest-test-network--2110769294 | [u'north', u'production', u'gold']           |
  | cf52380e-8441-49bc-ba76-dd4bf3ea6888 | tempest-test-network--1898004147 | [u'north', u'brown', u'development', u'abc'] |
  | 75ce347c-30fd-4310-b997-0d73d0e3f4c0 | tempest-test-network--971498530  | [u'testing', u'east', u'silver']             |
  | c55d9b0d-fb35-4d57-8704-cfcb73f549e7 | tempest-test-network--1254291480 | [u'west', u'development', u'silver']         |
  +--------------------------------------+----------------------------------+----------------------------------------------+
  stack@falcon-devstack ~
  $ T_ID=f900b2f993234d31b450ca58d0cae025
  stack@falcon-devstack ~
  $ net-list -c id -c name --tenant-id $T_ID -c tags
  +--------------------------------------+----------------------------------+----------------------------------------------+
  | id                                   | name                             | tags                                         |
  +--------------------------------------+----------------------------------+----------------------------------------------+
  | 1ddfab2f-c568-40e1-b5a4-e57f8e3eac1c | tempest-test-network--1607343301 | [u'testing', u'silver', u'south']            |
  | d48c067e-1404-45d1-9cb0-d528bd67ddce | tempest-test-network--631383401  | [u'testing', u'brown', u'south', u'a']       |
  | bdc417a7-7be2-4f3b-9853-4efeec71e1e5 | tempest-test-network--1353891402 | [u'production', u'east', u'gold']            |
  | 95bb24f4-a883-4d3a-a2a9-24be4851646c | tempest-test-network--2110769294 | [u'north', u'production', u'gold']           |
  | cf52380e-8441-49bc-ba76-dd4bf3ea6888 | tempest-test-network--1898004147 | [u'north', u'brown', u'development', u'abc'] |
  | 75ce347c-30fd-4310-b997-0d73d0e3f4c0 | tempest-test-network--971498530  | [u'testing', u'east', u'silver']             |
  | 64d91f98-9a39-44dc-809f-e82c8107a15d | tempest-test-network--1990720236 | [u'production', u'ab', u'west', u'gold']     |
  | c55d9b0d-fb35-4d57-8704-cfcb73f549e7 | tempest-test-network--1254291480 | [u'west', u'development', u'silver']         |
  +--------------------------------------+----------------------------------+----------------------------------------------+
  stack@falcon-devstack ~
  $ net-list -c id -c name --tenant-id $T_ID -c tags --tags production
  +--------------------------------------+----------------------------------+----------------------------------------------+
  | id                                   | name                             | tags                                         |
  +--------------------------------------+----------------------------------+----------------------------------------------+
  | 64d91f98-9a39-44dc-809f-e82c8107a15d | tempest-test-network--1990720236 | [u'west', u'gold', u'production', u'ab']     |
  | bdc417a7-7be2-4f3b-9853-4efeec71e1e5 | tempest-test-network--1353891402 | [u'gold', u'production', u'east']            |
  | 95bb24f4-a883-4d3a-a2a9-24be4851646c | tempest-test-network--2110769294 | [u'production', u'gold', u'north']           |
  +--------------------------------------+----------------------------------+----------------------------------------------+
  stack@falcon-devstack ~
  $ net-list -c id -c name --tenant-id $T_ID -c tags --tags production --tags-any west --tags-any east
  Request Failed: internal server error while processing your request.
  Neutron ser
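For context, the intended filter semantics can be sketched in plain Python (illustrative only; this is not neutron's implementation, which builds the equivalent SQL): --tags requires every listed tag to be present on a resource, --tags-any requires at least one match, and the two filters combine with AND.

```python
def match(resource_tags, tags=None, tags_any=None):
    """Emulate the --tags (AND) plus --tags-any (OR) filter combination."""
    tags_ok = all(t in resource_tags for t in (tags or []))
    any_ok = not tags_any or any(t in resource_tags for t in tags_any)
    return tags_ok and any_ok

# Sample data mirroring the listing above (names shortened)
nets = {
    'net-a': ['gold', 'production', 'ab', 'west'],
    'net-b': ['production', 'east', 'gold'],
    'net-c': ['testing', 'silver', 'south'],
}
hits = sorted(n for n, t in nets.items()
              if match(t, tags=['production'], tags_any=['west', 'east']))
# -> ['net-a', 'net-b']: both have 'production', and each has 'west' or 'east'
```

The failing request above should therefore have returned the two networks tagged production that also carry west or east, rather than a 500.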

[Yahoo-eng-team] [Bug 1609725] Re: Creating meter-label with name that has more than 255 characters returns 500 error.

2016-10-07 Thread OpenStack Infra
Reviewed:  https://review.openstack.org/351488
Committed: https://git.openstack.org/cgit/openstack/neutron/commit/?id=d59fa2b65e23b243806d82929d59c2bf79a91570
Submitter: Jenkins
Branch: master

commit d59fa2b65e23b243806d82929d59c2bf79a91570
Author: hobo.kengo 
Date:   Fri Aug 5 05:11:28 2016 +

Disallow specifying too long name for meter-label

Change-Id: Id916192f0ae38434de7d86790e056e48764b0916
Closes-Bug: #1609725


** Changed in: neutron
   Status: In Progress => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1609725

Title:
  Creating meter-label with name that has more than 255 characters
  returns 500 error.

Status in neutron:
  Fix Released

Bug description:
  Neutron tries to store name that has more than 255 characters.
  Length of name should be validated.

  Request
  =
  ubuntu@neutron-ml2:~$ curl -g -i -X POST http://172.16.1.29:9696/v2.0/metering/metering-labels -H "X-Auth-Token: $TOKEN" -d '{"metering_label":{"name":"aaab"}}'
  HTTP/1.1 500 Internal Server Error
  Content-Type: application/json
  Content-Length: 150
  X-Openstack-Request-Id: req-0d73d5b4-fabf-468f-874a-1458e39697a5
  Date: Thu, 04 Aug 2016 10:07:18 GMT

  {"NeutronError": {"message": "Request Failed: internal server error while processing your request.", "type": "HTTPInternalServerError", "detail": ""}}
  

  trace in neutron-server
  =
  2016-08-04 10:07:18.591 21756 ERROR oslo_db.sqlalchemy.exc_filters [req-0d73d5b4-fabf-468f-874a-1458e39697a5 b3ec23ec52144d7e96696abef028a5b0 7dbb594bc59546f6b26ad73da253c90a - - -] DBAPIError exception wrapped from (pymysql.err.DataError) (1406, u"Data too long for column 'name' at row 1") [SQL: u'INSERT INTO meteringlabels (project_id, id, name, description, shared) VALUES (%(project_id)s, %(id)s, %(name)s, %(description)s, %(shared)s)'] [parameters: {'shared': 0, 'description': '', 'project_id': u'7dbb594bc59546f6b26ad73da253c90a', 'id': '07bf0f95-9950-418d-b667-b38084f4bad7', 'name': u'aaa...b'}]
  2016-08-04 10:07:18.591 21756 ERROR oslo_db.sqlalchemy.exc_filters Traceback (most recent call last):
  2016-08-04 10:07:18.591 21756 ERROR oslo_db.sqlalchemy.exc_filters   File "/usr/local/lib/python2.7/dist-packages/sqlalchemy/engine/base.py", line 1139, in _execute_context
  2016-08-04 10:07:18.591 21756 ERROR oslo_db.sqlalchemy.exc_filters     context)
  2016-08-04 10:07:18.591 21756 ERROR oslo_db.sqlalchemy.exc_filters   File "/usr/local/lib/python2.7/dist-packages/sqlalchemy/engine/default.py", line 450, in do_execute
  2016-08-04 10:07:18.591 21756 ERROR oslo_db.sqlalchemy.exc_filters     cursor.execute(statement, parameters)
  2016-08-04 10:07:18.591 21756 ERROR oslo_db.sqlalchemy.exc_filters   File "/usr/local/lib/python2.7/dist-packages/pymysql/cursors.py", line 161, in execute
  2016-08-04 10:07:18.591 21756 ERROR oslo_db.sqlalchemy.exc_filters     result = self._query(query)
  2016-08-04 10:07:18.591 21756 ERROR oslo_db.sqlalchemy.exc_filters   File "/usr/local/lib/python2.7/dist-packages/pymysql/cursors.py", line 317, in _query
  2016-08-04 10:07:18.591 21756 ERROR oslo_db.sqlalchemy.exc_filters     conn.query(q)
  2016-08-04 10:07:18.591 21756 ERROR oslo_db.sqlalchemy.exc_filters   File "/usr/local/lib/python2.7/dist-packages/pymysql/connections.py", line 837, in query
  2016-08-04 10:07:18.591 21756 ERROR oslo_db.sqlalchemy.exc_filters     self._affected_rows = self._read_query_result(unbuffered=unbuffered)
  2016-08-04 10:07:18.591 21756 ERROR oslo_db.sqlalchemy.exc_filters   File "/usr/local/lib/python2.7/dist-packages/pymysql/connections.py", line 1021, in _read_query_result
  2016-08-04 10:07:18.591 21756 ERROR oslo_db.sqlalchemy.exc_filters     result.read()
  2016-08-04 10:07:18.591 21756 ERROR oslo_db.sqlalchemy.exc_filters   File "/usr/local/lib/python2.7/dist-packages/pymysql/connections.py", line 1304, in read
  2016-08-04 10:07:18.591 21756 ERROR oslo_db.sqlalchemy.exc_filters     first_packet = self.connection._read_packet()
  2016-08-04 10:07:18.591 21756 ERROR oslo_db.sqlalchemy.exc_filters   File "/usr/local/lib/python2.7/dist-packages/pymysql/connections.py", line 983, in _read_packet
  2016-08-04 10:07:18.591 21756 ERROR oslo_db.sqlalchemy.exc_filters     packet.check_error()
  2016-08-04 10:07:18.591 21756
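The merged fix (Change-Id Id916192f...) rejects over-long names before the INSERT ever reaches the database. A minimal sketch of the idea, assuming a plain length check rather than neutron's actual attribute validators:

```python
NAME_MAX_LEN = 255  # matches the meteringlabels.name column width


def validate_name(name):
    """Reject names longer than the DB column before any INSERT runs."""
    if len(name) > NAME_MAX_LEN:
        # A real API layer would translate this into a 400 Bad Request
        # instead of letting the DB raise DataError (seen as a 500).
        raise ValueError("name exceeds maximum length of %d" % NAME_MAX_LEN)
    return name
```

With the check in place the client gets a 400 with a clear message instead of the opaque HTTPInternalServerError above.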

[Yahoo-eng-team] [Bug 1630439] Re: linuxbridge-agent fails to start on python3.4

2016-10-07 Thread OpenStack Infra
Reviewed:  https://review.openstack.org/382704
Committed: https://git.openstack.org/cgit/openstack/neutron/commit/?id=eb1efc7aceb7a1fe509edcab5cea60dc0668bc87
Submitter: Jenkins
Branch: master

commit eb1efc7aceb7a1fe509edcab5cea60dc0668bc87
Author: Henry Gessau 
Date:   Wed Oct 5 22:33:20 2016 -0400

Account for Py2/Py3 differences in fcntl.ioctl return value

Closes-Bug: #1630439

Change-Id: Icc7bc9372d87dfd6cc15a2b472e38250479ac4ec


** Changed in: neutron
   Status: In Progress => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1630439

Title:
  linuxbridge-agent fails to start on python3.4

Status in neutron:
  Fix Released

Bug description:
  I'll attach a log with the failure, but to my eyes these look like
  py2-to-py3 errors (things missed in the port or similar).

  The agent starts fine on Python 2.7.
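The fix accounts for fcntl.ioctl() returning str on Python 2 but bytes on Python 3. A sketch of the kind of normalization involved (a hypothetical helper for illustration, not the actual neutron code):

```python
import struct


def int_from_ioctl_result(raw):
    """Unpack a native int from an fcntl.ioctl() result buffer.

    On Python 2 ioctl() returns str; on Python 3 it returns bytes.
    Normalizing to bytes first keeps struct.unpack happy on both.
    """
    if not isinstance(raw, bytes):        # Python 2 str / text path
        raw = raw.encode('latin-1')       # byte-preserving encoding
    return struct.unpack('i', raw[:struct.calcsize('i')])[0]
```

Code that instead compared or sliced the raw buffer as text would work on Python 2 and break on Python 3.4, which matches the startup failure reported here.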

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1630439/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1624730] Re: i18n: unnecessary translate tag in hz-detail-row.html

2016-10-07 Thread OpenStack Infra
Reviewed:  https://review.openstack.org/371980
Committed: https://git.openstack.org/cgit/openstack/horizon/commit/?id=a2de34163a351a56680b086303fa23201db2bbb2
Submitter: Jenkins
Branch: master

commit a2de34163a351a56680b086303fa23201db2bbb2
Author: Akihiro Motoki 
Date:   Sat Sep 17 20:44:41 2016 +

Remove unnecessary translate mark

 and  tags in hz-detail-row.html contains no translatable strings.
They are unnecessary.

Change-Id: Ic0a687fb3266b02de7dab3e4079c391a45cf95d7
Closes-Bug: #1624730


** Changed in: horizon
   Status: In Progress => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1624730

Title:
  i18n: unnecessary translate tag in hz-detail-row.html

Status in OpenStack Dashboard (Horizon):
  Fix Released

Bug description:
  In horizon/static/framework/widgets/table/hz-detail-row.html,  and
   tags are marked as 'translate', but they contains no translatable
  strings. 'translate' is unnecessary.

  By removing them, it avoid unnecessary confusion for translators.
  Translators usually do not understand actual codes and they wonder how
  they can do for them.

     {$ column.title $}

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1624730/+subscriptions



[Yahoo-eng-team] [Bug 1631551] [NEW] Ether type should default to IPv4 if enable_ipv6 is set to False

2016-10-07 Thread Daniel Park
Public bug reported:

The "ethertype" field in the security group form should default to IPv4
if OPENSTACK_NEUTRON_NETWORK.enable_ipv6 is set to False

** Affects: horizon
 Importance: Undecided
 Assignee: Daniel Park (daniepar)
 Status: In Progress

** Changed in: horizon
 Assignee: (unassigned) => Daniel Park (daniepar)

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1631551

Title:
  Ether type should default to IPv4 if enable_ipv6 is set to False

Status in OpenStack Dashboard (Horizon):
  In Progress

Bug description:
  The "ethertype" field in the security group form should default to
  IPv4 if OPENSTACK_NEUTRON_NETWORK.enable_ipv6 is set to False
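A minimal sketch of the requested behavior (the helper name is an assumption for illustration, not Horizon's actual form code): derive the security-group form's default ethertype from the OPENSTACK_NEUTRON_NETWORK setting.

```python
def default_ethertype(neutron_network_settings):
    """Return the ethertype the security group form should preselect.

    When enable_ipv6 is False there is no point offering IPv6, so
    default to IPv4; otherwise leave the choice open (None).
    """
    if not neutron_network_settings.get('enable_ipv6', True):
        return 'IPv4'
    return None
```

The form would then seed its initial value from this instead of always starting unset.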

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1631551/+subscriptions



[Yahoo-eng-team] [Bug 1623871] Re: Nova hugepage support does not include aarch64

2016-10-07 Thread Launchpad Bug Tracker
This bug was fixed in the package nova - 2:14.0.0-0ubuntu1

---
nova (2:14.0.0-0ubuntu1) yakkety; urgency=medium

  * New upstream release for OpenStack Newton.
  * d/t/nova-compute-daemons: Skip test execution if running within a
container, ensuring that autopkgtests don't fail on armhf and s390x.
  * d/t/control,nova-compute-daemons: Don't install nova-compute as part
of the autopkgtest control setup, direct install hypervisor specific
nova-compute packages ensuring packages are configured in the correct
order and that nova-compute can access the libvirt socket.

 -- James Page   Fri, 07 Oct 2016 08:48:28 +0100

** Changed in: nova (Ubuntu)
   Status: Fix Committed => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1623871

Title:
  Nova hugepage support does not include aarch64

Status in OpenStack Compute (nova):
  In Progress
Status in nova package in Ubuntu:
  Fix Released
Status in nova source package in Xenial:
  In Progress

Bug description:
  [Impact]
  Although aarch64 supports spawning a VM with hugepages, the libvirt
  driver in nova considers only x86_64 and i686; AARCH64 needs to be
  added for both NUMA and hugepage support. Due to this bug, a VM cannot
  be launched with hugepages using OpenStack on aarch64 servers.

  Note: this depends on the fix for LP: #1627926.

  [Test Case]
  Steps to reproduce:
  On an openstack environment running on aarch64:
  1. Configure compute to use hugepages.
  2. Set mem_page_size="2048" for a flavor
  3. Launch a VM using the above flavor.

  Expected result:
  VM should be launched with hugepages, and the libvirt XML should
  contain a <memoryBacking> element with the corresponding <hugepages>
  page settings.

  Actual result:
  VM is launched without hugepages.

  There are no error logs in nova-scheduler.

  [Regression Risk]
  Risk is minimized by the fact that this change is just enabling the same code 
for arm64 that is already enabled for Ubuntu/x86.
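The shape of the fix can be sketched as extending an architecture whitelist (the constant and function names here are assumptions for illustration, not the actual nova code):

```python
# Architectures for which the libvirt driver wires up NUMA/hugepage
# support.  Before the fix, the equivalent check covered only the two
# x86 entries, so aarch64 hosts silently lost the hugepage settings.
SUPPORTED_HUGEPAGE_ARCHES = ('x86_64', 'i686', 'aarch64')


def supports_hugepages(host_arch):
    """True if the libvirt driver should emit hugepage XML for this host."""
    return host_arch.lower() in SUPPORTED_HUGEPAGE_ARCHES
```

This also explains why no error appears in nova-scheduler: the gate simply never enables the feature, so the VM boots without hugepages rather than failing.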

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1623871/+subscriptions



[Yahoo-eng-team] [Bug 1625653] Re: wsgi-intercept in requirements.txt?

2016-10-07 Thread Launchpad Bug Tracker
This bug was fixed in the package nova - 2:14.0.0-0ubuntu1

---
nova (2:14.0.0-0ubuntu1) yakkety; urgency=medium

  * New upstream release for OpenStack Newton.
  * d/t/nova-compute-daemons: Skip test execution if running within a
container, ensuring that autopkgtests don't fail on armhf and s390x.
  * d/t/control,nova-compute-daemons: Don't install nova-compute as part
of the autopkgtest control setup, direct install hypervisor specific
nova-compute packages ensuring packages are configured in the correct
order and that nova-compute can access the libvirt socket.

 -- James Page   Fri, 07 Oct 2016 08:48:28 +0100

** Changed in: nova (Ubuntu)
   Status: Fix Committed => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1625653

Title:
  wsgi-intercept in requirements.txt?

Status in OpenStack Compute (nova):
  Fix Released
Status in nova package in Ubuntu:
  Fix Released

Bug description:
  stable/newton (included in rc1)

  The following commit:

  
https://github.com/openstack/nova/commit/b922af9ee839543b732a69a4cff946f748436c3c

  adds wsgi-intercept to requirements.txt; this seems more appropriate
  for test-requirements.txt, as it's only used in functional testing
  AFAICT.

  This causes some of the automated packaging tooling in Ubuntu/Debian
  to generate a runtime dependency on python-wsgi-intercept, which I
  don't think is actually required.

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1625653/+subscriptions



[Yahoo-eng-team] [Bug 1630420] Re: config_drive unit tests (libvirt driver) aren't mocking genisoimage

2016-10-07 Thread OpenStack Infra
Reviewed:  https://review.openstack.org/383524
Committed: https://git.openstack.org/cgit/openstack/nova/commit/?id=b90df7c9c7042d2a8f104f66220f81ecb2951597
Submitter: Jenkins
Branch: master

commit b90df7c9c7042d2a8f104f66220f81ecb2951597
Author: Diana Clarke 
Date:   Thu Oct 6 21:52:03 2016 -0400

Patch mkisofs calls

The nova unit tests recently started to fail on systems lacking mkisofs
(like mac osx). Skip these mkisofs calls by patching _make_iso9660.

Change-Id: I350aafa878616f74df506c1bc9ee5f26ea06fe97
Closes-Bug: #1630420


** Changed in: nova
   Status: In Progress => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1630420

Title:
  config_drive unit tests (libvirt driver) aren't mocking genisoimage

Status in OpenStack Compute (nova):
  Fix Released

Bug description:
  I was running unit tests on a bare bones vm that didn't have
  genisoimage installed and the test_rescue_config_drive test failed.

  ==
  Failed 1 tests - output below:
  ==

  
  nova.tests.unit.virt.libvirt.test_driver.LibvirtDriverTestCase.test_rescue_config_drive
  ---------------------------------------------------------------------------------------

  Captured traceback:
  ~~~
  Traceback (most recent call last):
    File "nova/tests/unit/virt/libvirt/test_driver.py", line 16420, in test_rescue_config_drive
      instance, exists=lambda name: name != 'disk.config.rescue')
    File "/home/auggy/nova/.tox/py27/local/lib/python2.7/site-packages/mock/mock.py", line 1305, in patched
      return func(*args, **keywargs)
    File "nova/tests/unit/virt/libvirt/test_driver.py", line 16374, in _test_rescue
      network_info, image_meta, rescue_password)
    File "nova/virt/libvirt/driver.py", line 2531, in rescue
      self._create_domain(xml, post_xml_callback=gen_confdrive)
    File "/home/auggy/nova/.tox/py27/local/lib/python2.7/site-packages/mock/mock.py", line 1062, in __call__
      return _mock_self._mock_call(*args, **kwargs)
    File "/home/auggy/nova/.tox/py27/local/lib/python2.7/site-packages/mock/mock.py", line 1128, in _mock_call
      ret_val = effect(*args, **kwargs)
    File "nova/tests/unit/virt/libvirt/test_driver.py", line 16368, in fake_create_domain
      post_xml_callback()
    File "nova/virt/libvirt/driver.py", line 3130, in _create_configdrive
      cdb.make_drive(config_drive_local_path)
    File "nova/virt/configdrive.py", line 143, in make_drive
      self._make_iso9660(path, tmpdir)
    File "nova/virt/configdrive.py", line 97, in _make_iso9660
      run_as_root=False)
    File "nova/utils.py", line 296, in execute
      return processutils.execute(*cmd, **kwargs)
    File "/home/auggy/nova/.tox/py27/local/lib/python2.7/site-packages/oslo_concurrency/processutils.py", line 363, in execute
      env=env_variables)
    File "/home/auggy/nova/.tox/py27/local/lib/python2.7/site-packages/eventlet/green/subprocess.py", line 54, in __init__
      subprocess_orig.Popen.__init__(self, args, 0, *argss, **kwds)
    File "/usr/lib/python2.7/subprocess.py", line 711, in __init__
      errread, errwrite)
    File "/usr/lib/python2.7/subprocess.py", line 1343, in _execute_child
      raise child_exception
  OSError: [Errno 2] No such file or directory

  When I installed genisoimage, the test passed.

  genisoimage is the default value for mkisofs_cmd (configurable). It's
  called in the _make_iso9660 method for creating an image. Besides the
  issue of shelling out to a process going beyond the scope of what a
  unit test should cover, this also creates a hard dependency on
  genisoimage.

  Other areas in the code mock the call to genisoimage. This test should
  do something similar:
  https://github.com/openstack/nova/blob/master/nova/tests/unit/test_configdrive2.py#L49
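The merged fix patches _make_iso9660 so the test never shells out. A self-contained sketch of the approach (the stand-in class below only mimics the relevant shape of nova.virt.configdrive.ConfigDriveBuilder; it is not the real class):

```python
from unittest import mock


class ConfigDriveBuilder(object):
    """Stand-in for nova.virt.configdrive.ConfigDriveBuilder (sketch)."""

    def _make_iso9660(self, path, tmpdir):
        # In real nova this execs genisoimage/mkisofs; on a machine
        # without it, the exec raises OSError(ENOENT) as in the bug.
        raise OSError(2, 'No such file or directory')

    def make_drive(self, path):
        self._make_iso9660(path, '/tmp')


builder = ConfigDriveBuilder()
with mock.patch.object(ConfigDriveBuilder, '_make_iso9660') as fake_iso:
    builder.make_drive('/tmp/config.iso')   # no genisoimage required now
    fake_iso.assert_called_once_with('/tmp/config.iso', '/tmp')
```

Patching at the _make_iso9660 boundary keeps the rest of make_drive under test while removing the hard dependency on the external binary.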

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1630420/+subscriptions



[Yahoo-eng-team] [Bug 1631523] [NEW] The title of subnet tab in create subnet modal is the code name

2016-10-07 Thread Ying Zuo
Public bug reported:

Steps to reproduce:
1. Create a network
2. Click the name of the network
3. Go to the subnet tab
4. Click create subnet

Note that the title of the subnet tab is shown as
"CreateSubnetInfoAction" but it should be "Subnet".

** Affects: horizon
 Importance: Undecided
 Assignee: Ying Zuo (yingzuo)
 Status: In Progress

** Changed in: horizon
 Assignee: (unassigned) => Ying Zuo (yingzuo)

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1631523

Title:
  The title of subnet tab in create subnet modal is the code name

Status in OpenStack Dashboard (Horizon):
  In Progress

Bug description:
  Steps to reproduce:
  1. Create a network
  2. Click the name of the network
  3. Go to the subnet tab
  4. Click create subnet

  Note that the title of the subnet tab is shown as
  "CreateSubnetInfoAction" but it should be "Subnet".

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1631523/+subscriptions



[Yahoo-eng-team] [Bug 1631513] [NEW] DVR: Fix race conditions when trying to add default gateway for fip gateway port.

2016-10-07 Thread Swaminathan Vasudevan
Public bug reported:

There seems to be a race condition when trying to add default gateway
route in fip namespace for the fip agent gateway port.

The way it happens: during high-scale testing, while a router update is
in progress for Router-A (which has a floating IP), a fip namespace is
being created and its gateway ports are being plugged into the external
bridge in the context of that namespace. If another update for the same
Router-A arrives while this is still in progress, it calls
'update-gateway-port', tries to set the default gateway, and fails.

We do find a log message in the l3-agent with 'Failed to process compatible router' and also a TRACE in the l3-agent.
Traceback (most recent call last):
   File "/opt/stack/venv/neutron-20160927T090820Z/lib/python2.7/site-packages/neutron/agent/l3/agent.py", line 501, in _process_router_update
     self._process_router_if_compatible(router)
   File "/opt/stack/venv/neutron-20160927T090820Z/lib/python2.7/site-packages/neutron/agent/l3/agent.py", line 440, in _process_router_if_compatible
     self._process_updated_router(router)
   File "/opt/stack/venv/neutron-20160927T090820Z/lib/python2.7/site-packages/neutron/agent/l3/agent.py", line 454, in _process_updated_router
     ri.process(self)
   File "/opt/stack/venv/neutron-20160927T090820Z/lib/python2.7/site-packages/neutron/agent/l3/dvr_local_router.py", line 538, in process
     super(DvrLocalRouter, self).process(agent)
   File "/opt/stack/venv/neutron-20160927T090820Z/lib/python2.7/site-packages/neutron/agent/l3/dvr_router_base.py", line 31, in process
     super(DvrRouterBase, self).process(agent)
   File "/opt/stack/venv/neutron-20160927T090820Z/lib/python2.7/site-packages/neutron/common/utils.py", line 396, in call
     self.logger(e)
   File "/opt/stack/venv/neutron-20160927T090820Z/lib/python2.7/site-packages/oslo_utils/excutils.py", line 220, in __exit__
     self.force_reraise()
   File "/opt/stack/venv/neutron-20160927T090820Z/lib/python2.7/site-packages/oslo_utils/excutils.py", line 196, in force_reraise
     six.reraise(self.type_, self.value, self.tb)
   File "/opt/stack/venv/neutron-20160927T090820Z/lib/python2.7/site-packages/neutron/common/utils.py", line 393, in call
     return func(*args, **kwargs)
   File "/opt/stack/venv/neutron-20160927T090820Z/lib/python2.7/site-packages/neutron/agent/l3/router_info.py", line 989, in process
     self.process_external(agent)
   File "/opt/stack/venv/neutron-20160927T090820Z/lib/python2.7/site-packages/neutron/agent/l3/dvr_local_router.py", line 491, in process_external
     self.create_dvr_fip_interfaces(ex_gw_port)
   File "/opt/stack/venv/neutron-20160927T090820Z/lib/python2.7/site-packages/neutron/agent/l3/dvr_local_router.py", line 522, in create_dvr_fip_interfaces
     self.fip_ns.update_gateway_port(fip_agent_port)
   File "/opt/stack/venv/neutron-20160927T090820Z/lib/python2.7/site-packages/neutron/agent/l3/dvr_fip_ns.py", line 243, in update_gateway_port
     ipd.route.add_gateway(gw_ip)
   File "/opt/stack/venv/neutron-20160927T090820Z/lib/python2.7/site-packages/neutron/agent/linux/ip_lib.py", line 690, in add_gateway
     self._as_root([ip_version], tuple(args))
   File "/opt/stack/venv/neutron-20160927T090820Z/lib/python2.7/site-packages/neutron/agent/linux/ip_lib.py", line 361, in _as_root
     use_root_namespace=use_root_namespace)
   File "/opt/stack/venv/neutron-20160927T090820Z/lib/python2.7/site-packages/neutron/agent/linux/ip_lib.py", line 94, in _as_root
     log_fail_as_error=self.log_fail_as_error)
   File "/opt/stack/venv/neutron-20160927T090820Z/lib/python2.7/site-packages/neutron/agent/linux/ip_lib.py", line 103, in _execute
     log_fail_as_error=log_fail_as_error)
   File "/opt/stack/venv/neutron-20160927T090820Z/lib/python2.7/site-packages/neutron/agent/linux/utils.py", line 140, in execute
     raise RuntimeError(msg)

** Affects: neutron
 Importance: Undecided
 Status: New


** Tags: l3-dvr-backlog mitaka-backport-potential newton-backport-potential

** Summary changed:

- Fix race conditions when trying to add default gateway for fip gateway port.
+ DVR: Fix race conditions when trying to add default gateway for fip gateway 
port.

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1631513

Title:
  DVR: Fix race conditions when trying to add default gateway for fip
  gateway port.

Status in neutron:
  New

Bug description:
  There seems to be a race condition when trying to add default gateway
  route in fip namespace for the fip agent gateway port.

  The way it happens is at high scale testing, when there is a router
  update that is currently happening for the Router-A which has a
  floatingip, a fip namespace is getting created and gateway ports
  plugged to the external bridge in the context of the fip namespace.
  While it is getting cr

[Yahoo-eng-team] [Bug 1631512] [NEW] run_tests.sh -m compress fails if python-memcached is not installed

2016-10-07 Thread Matthew Thode
Public bug reported:

bug was found in stable/newton, doesn't look like python-memcached was
installed into the venv by default.  I manually installed it and it
worked.

** Affects: horizon
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1631512

Title:
  run_tests.sh -m compress fails if python-memcached is not installed

Status in OpenStack Dashboard (Horizon):
  New

Bug description:
  bug was found in stable/newton, doesn't look like python-memcached was
  installed into the venv by default.  I manually installed it and it
  worked.

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1631512/+subscriptions



[Yahoo-eng-team] [Bug 1631481] [NEW] Revert resize does not delete instance directory with Ceph

2016-10-07 Thread Feodor Tersin
Public bug reported:

Resize revertion leaves instance directory on the second host with Ceph
image backend. As the result the second attempt to resize the instance
to the same host fails with n-cpu.log:

Traceback (most recent call last):
  File "/opt/stack/nova/nova/compute/manager.py", line 3942, in finish_resize
    disk_info, image_meta)
  File "/opt/stack/nova/nova/compute/manager.py", line 3907, in _finish_resize
    old_instance_type)
  File "/usr/lib/python2.7/site-packages/oslo_utils/excutils.py", line 220, in __exit__
    self.force_reraise()
  File "/usr/lib/python2.7/site-packages/oslo_utils/excutils.py", line 196, in force_reraise
    six.reraise(self.type_, self.value, self.tb)
  File "/opt/stack/nova/nova/compute/manager.py", line 3902, in _finish_resize
    block_device_info, power_on)
  File "/opt/stack/nova/nova/virt/libvirt/driver.py", line 7185, in finish_migration
    self._ensure_console_log_for_instance(instance)
  File "/opt/stack/nova/nova/virt/libvirt/driver.py", line 2845, in _ensure_console_log_for_instance
    libvirt_utils.file_open(console_file, 'a').close()
  File "/opt/stack/nova/nova/virt/libvirt/utils.py", line 313, in file_open
    return open(*args, **kwargs)
IOError: [Errno 13] Permission denied: '/opt/stack/data/nova/instances/ad52ca5b-bb65-4f7c-87e8-750cb3cd9c5e/console.log'

$ ll ~/data/nova/instances/ad52ca5b-bb65-4f7c-87e8-750cb3cd9c5e/
total 24
-rw-rw-r--. 1 qemu  qemu 19342 Oct  7 21:23 console.log
-rw-rw-r--. 1 stack libvirtd  2762 Oct  7 21:22 libvirt.xml

Steps to reproduce:
1 Run 2-nodes devstack with Ceph image backend
2 Run an instance
 $ nova boot --image cirros-0.3.4-x86_64-disk --flavor t1.nano inst-1
3 Disable the instance host
 $ nova service-disable 172.16.1.10 nova-compute
4 Resize the instance to another host
 $ nova migrate inst-1
5 Revert resize
 $ nova resize-revert inst-1
6 Resize the instance again
 $ nova migrate inst-1
7 Check the instance state

Actual result - the instance is in error state.
Expected result - the instance is in verify_resize state.

Check n-cpu.log on the second node, where the instance was migrated.

This has been reproduced on master
commit 9c89e07d17b5eb441682e3b8fad8b270f37f7015
Merge: 870a77f 453e71d
Author: Jenkins 
Date:   Wed Oct 5 01:35:48 2016 +
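The expected cleanup can be sketched as follows (a hypothetical helper for illustration; the real fix belongs in nova's resize-revert path): on reverting, the destination host should remove the leftover per-instance directory even when the disks themselves live in Ceph rather than on local storage.

```python
import os
import shutil


def cleanup_leftover_instance_dir(instances_path, instance_uuid):
    """Remove the per-instance directory left behind after a reverted
    resize (console.log, libvirt.xml), so a later resize to the same
    host can recreate it with the right ownership."""
    inst_dir = os.path.join(instances_path, instance_uuid)
    if os.path.isdir(inst_dir):
        shutil.rmtree(inst_dir)
```

Without this, the stale qemu-owned console.log triggers the EACCES seen above when the second migration tries to reopen it as the stack user.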

** Affects: nova
 Importance: Undecided
 Status: New


[Yahoo-eng-team] [Bug 1629797] Re: resolve service in nsswitch.conf adds 25 seconds to failed lookups before systemd-resolved is up

2016-10-07 Thread Launchpad Bug Tracker
This bug was fixed in the package cloud-init - 0.7.8-15-g6e45ffb-
0ubuntu1

---
cloud-init (0.7.8-15-g6e45ffb-0ubuntu1) yakkety; urgency=medium

  * New upstream snapshot.
- systemd: Run cloud-init.service Before dbus.socket not dbus.target
  [Daniel Watkins] (LP: #1629797).

 -- Scott Moser   Fri, 07 Oct 2016 12:41:38 -0400

** Changed in: cloud-init (Ubuntu)
   Status: In Progress => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to cloud-init.
https://bugs.launchpad.net/bugs/1629797

Title:
  resolve service in nsswitch.conf adds 25 seconds to failed lookups
  before systemd-resolved is up

Status in cloud-init:
  Fix Committed
Status in cloud-init package in Ubuntu:
  Fix Released
Status in dbus package in Ubuntu:
  Triaged

Bug description:
  During boot, cloud-init does DNS resolution checks to see if particular
  metadata services are available (in order to determine which cloud it
  is running on).  These checks happen before systemd-resolved is up[0]
  and if they resolve unsuccessfully they take 25 seconds to complete.

  This has substantial impact on boot time in all contexts, because
  cloud-init attempts to resolve three known-invalid addresses ("does-
  not-exist.example.com.", "example.invalid." and a random string) to
  enable it to detect when it's running in an environment where a DNS
  server will always return some sort of redirect.  As such, we're
  talking a minimum impact of 75 seconds in all environments.  This
  increases when cloud-init is configured to check for multiple
  environments.

  This means that yakkety is consistently taking 2-3 minutes to boot on
  EC2 and GCE, compared to the ~30 seconds of the first boot and ~10
  seconds thereafter in xenial.
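The redirect-detection technique described above can be sketched as follows. This is an illustration, not cloud-init's implementation; only the probe hostnames are taken from the report.

```python
import socket


def dns_always_answers(probes=("does-not-exist.example.com.",
                               "example.invalid.")):
    """Detect a DNS server that answers for everything (sketch).

    If a known-invalid name resolves, the local DNS rewrites failures,
    so positive answers can't be trusted. Note that getaddrinfo's
    timeout is governed by the system resolver configuration, which is
    exactly why each failed probe can stall for ~25 seconds while
    systemd-resolved is not yet answering.
    """
    for name in probes:
        try:
            socket.getaddrinfo(name, None)
            return True  # a bogus name resolved: DNS is redirecting
        except socket.gaierror:
            continue
    return False
```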

To manage notifications about this bug go to:
https://bugs.launchpad.net/cloud-init/+bug/1629797/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1631466] [NEW] GET on /v2.0 fails with a 404

2016-10-07 Thread Matt Riedemann
Public bug reported:

Following the networking API reference to list versions of the network
API, I can list versions from the network endpoint like this:

http://developer.openstack.org/api-ref/networking/v2/?expanded=list-api-
versions-detail

And get details on each version like this:

http://developer.openstack.org/api-ref/networking/v2/?expanded=list-api-
versions-detail,show-api-v2-details-detail#show-api-v2-details

However, in practice, using master neutron:

stack@osc:/opt/stack/neutron$ git log -1
commit 80d4df144d62ce638ca7bdd228cdd116e34b3067
Merge: 3ade301 fc93f7f
Author: Jenkins 
Date:   Wed Oct 5 15:36:06 2016 +

Merge "Relocate Flavor and ServiceProfile DB models"
stack@osc:/opt/stack/neutron$


The 2nd route to get v2.0 details fails:

stack@osc:~$ curl -g -H "X-Auth-Token: $OS_TOKEN" http://9.5.127.82:9696/ | 
json_pp
  % Total% Received % Xferd  Average Speed   TimeTime Time  Current
 Dload  Upload   Total   SpentLeft  Speed
100   118  100   1180 0  33541  0 --:--:-- --:--:-- --:--:-- 39333
{
   "versions" : [
  {
 "id" : "v2.0",
 "links" : [
{
   "rel" : "self",
   "href" : "http://9.5.127.82:9696/v2.0";
}
 ],
 "status" : "CURRENT"
  }
   ]
}

stack@osc:~$ curl -g -H "X-Auth-Token: $OS_TOKEN" http://9.5.127.82:9696/v2.0
404 Not Found

The resource could not be found.

--

So either the docs are wrong, or the API is busted.

It looks like this is what should handle the /v2.0 route though:

https://github.com/openstack/neutron/blob/80d4df144d62ce638ca7bdd228cdd116e34b3067/neutron/api/v2/router.py#L45
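The working root request can at least be consumed programmatically. A sketch of client-side version discovery from the response shown above; the trailing-slash remark is a guess at a routing mismatch, not a confirmed cause:

```python
import json

# The version document returned by the root request in the report.
body = json.loads("""
{"versions": [{"id": "v2.0",
               "status": "CURRENT",
               "links": [{"rel": "self",
                          "href": "http://9.5.127.82:9696/v2.0"}]}]}
""")


def current_version_href(doc):
    """Follow the self link of the CURRENT version (sketch).

    This is the URL the report then shows returning 404.
    """
    for ver in doc["versions"]:
        if ver["status"] != "CURRENT":
            continue
        for link in ver["links"]:
            if link["rel"] == "self":
                return link["href"]
    return None


href = current_version_href(body)
# Retrying the same URL with a trailing slash ("/v2.0/") is a cheap
# client-side check for whether the router only matches that form.
```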

** Affects: neutron
 Importance: Undecided
 Status: New


** Tags: api api-ref

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1631466

Title:
  GET on /v2.0 fails with a 404

Status in neutron:
  New

Bug description:
  Following the networking API reference to list versions of the network
  API, I can list versions from the network endpoint like this:

  http://developer.openstack.org/api-ref/networking/v2/?expanded=list-
  api-versions-detail

  And get details on each version like this:

  http://developer.openstack.org/api-ref/networking/v2/?expanded=list-
  api-versions-detail,show-api-v2-details-detail#show-api-v2-details

  However, in practice, using master neutron:

  stack@osc:/opt/stack/neutron$ git log -1
  commit 80d4df144d62ce638ca7bdd228cdd116e34b3067
  Merge: 3ade301 fc93f7f
  Author: Jenkins 
  Date:   Wed Oct 5 15:36:06 2016 +

  Merge "Relocate Flavor and ServiceProfile DB models"
  stack@osc:/opt/stack/neutron$

  
  The 2nd route to get v2.0 details fails:

  stack@osc:~$ curl -g -H "X-Auth-Token: $OS_TOKEN" http://9.5.127.82:9696/ | 
json_pp
% Total% Received % Xferd  Average Speed   TimeTime Time  
Current
   Dload  Upload   Total   SpentLeft  Speed
  100   118  100   1180 0  33541  0 --:--:-- --:--:-- --:--:-- 39333
  {
 "versions" : [
{
   "id" : "v2.0",
   "links" : [
  {
 "rel" : "self",
 "href" : "http://9.5.127.82:9696/v2.0";
  }
   ],
   "status" : "CURRENT"
}
 ]
  }

  stack@osc:~$ curl -g -H "X-Auth-Token: $OS_TOKEN" http://9.5.127.82:9696/v2.0
  404 Not Found

  The resource could not be found.

  --

  So either the docs are wrong, or the API is busted.

  It looks like this is what should handle the /v2.0 route though:

  
https://github.com/openstack/neutron/blob/80d4df144d62ce638ca7bdd228cdd116e34b3067/neutron/api/v2/router.py#L45

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1631466/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp



[Yahoo-eng-team] [Bug 1401437] Bug Cleanup

2016-10-07 Thread Sean McGinnis
Closing stale bug. This has been in Incomplete status for over 90 days. If
this is still an issue, please reopen.

** Changed in: python-cinderclient
   Status: Incomplete => Invalid

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1401437

Title:
  nova passes incorrect authentication info to cinderclient

Status in OpenStack Compute (nova):
  Fix Released
Status in python-cinderclient:
  Invalid

Bug description:
  There are multiple problems with the authentication information that
  nova/volume/cinder code passes to cinderclient:

  1. nova/volume/cinder.py passes 'cinder endpoint publicURL'  as the
  auth_url to cinderclient for credential authentication instead of the
  keystone auth_url .This happens here:

  get_cinder_client_version(context) sets the value for global CINDER_URL and 
passes it to
  c = cinder_client.Client(version,
   context.user_id,
   context.auth_token,
   project_id=context.project_id,
   auth_url=CINDER_URL,
   insecure=CONF.cinder.api_insecure,
   retries=CONF.cinder.http_retries,
   timeout=CONF.cinder.http_timeout,
   cacert=CONF.cinder.ca_certificates_file)

  c.client.auth_token = context.auth_token or '%s:%s' % (context.user_id,
 context.project_id)
  

  Under normal circumstances ( i e in cases where the context has
  auth_token) , the auth_url is never used/required. So this is required
  only when the token expires and an attempt to do fresh authentication
  is made here:

  def _cs_request(self, url, method, **kwargs):
  auth_attempts = 0
  attempts = 0
  backoff = 1
  while True:
  attempts += 1
  if not self.management_url or not self.auth_token:
  self.authenticate()
  kwargs.setdefault('headers', {})['X-Auth-Token'] = self.auth_token
  if self.projectid:
  kwargs['headers']['X-Auth-Project-Id'] = self.projectid
  try:
  resp, body = self.request(self.management_url + url, method,
**kwargs)
  return resp, body
  except exceptions.BadRequest as e:
  if attempts > self.retries:
  raise
  except exceptions.Unauthorized:
  if auth_attempts > 0:
  raise
  self._logger.debug("Unauthorized, reauthenticating.")
  self.management_url = self.auth_token = None
  # First reauth. Discount this attempt.
  attempts -= 1
  auth_attempts += 1
  continue

  
  2. nova/volume.cinderclient.py >> cinderclient method passes 
context.auth_token instead of the password.Due to this HttpClient.password 
attribute is set to the auth token instead of the password. 

  3. There are other problems around this which is summarized as below:

  cinderclient should really support a way of passing an auth_token in
  on the __init__ so it is explicitly supported for the caller to
  specify an auth_token, rather than forcing this hack that nova is
  currently using of setting the auth_token itself after creating the
  cinderclient instance. That's not strictly required, but it would be a
  much better design. At that point, cinderclient should also stop
  requiring the auth_url parameter (it currently raises an exception if
  that isn't specified) if an auth_token is specified and retries==0,
  since in that case the auth_url would never be used. Userid and
  password would also not be required in that case.
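The constructor design argued for above could look like the following. This is hypothetical (the class name and signature are not cinderclient's real API): accept an auth_token explicitly, and only demand auth_url/user/password when retries could trigger reauthentication.

```python
class TokenOnlyClient(object):
    """Hypothetical client illustrating the proposed design (sketch).

    An explicit auth_token is accepted, and full credentials are only
    required when retries may reauthenticate.
    """

    def __init__(self, endpoint, auth_token=None, auth_url=None,
                 user=None, password=None, retries=0):
        if auth_token is None and auth_url is None:
            raise ValueError("need an auth_token or an auth_url")
        if retries and not (auth_url and user and password):
            raise ValueError("retries require full credentials "
                             "for reauthentication")
        self.endpoint = endpoint
        self.auth_token = auth_token
        self.retries = retries


# Token-only use, as nova does today, is valid with retries=0 ...
client = TokenOnlyClient("http://cinder:8776/v2", auth_token="tok")

# ... while asking for retries without real credentials fails up front
# instead of reauthenticating with a token passed off as a password.
try:
    TokenOnlyClient("http://cinder:8776/v2", auth_token="tok", retries=3)
    raised = False
except ValueError:
    raised = True
```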

  nova needs to either start passing a valid userid and password and a
  valid auth_url so that retries will work, or stop setting retries to a
  non-zero number (it's using a conf setting to determine the number of
  retries, and the default is 3). If the decision is to get retries
  working, then we have to figure out what to pass for the userid and
  password. Nova won't know the end-user's user/password that correspond
  to the auth_token it initially uses, and we wouldn't want to be using
  a different user on retries than we do on the initial requests, so I
  don't think retries should be supported unless nova is going to make
  ALL requests with a service userid rather than with the end-user's
  userid... and I don't think that fits with the current OpenStack
  architecture. So that leaves us with not supporting retries. In that
  case, nova should still stop passing the auth_token in as the password
  so that someone doesn't stumble over that later when retry support is
  added. Similarly for the a

[Yahoo-eng-team] [Bug 1631371] Re: [RFE] Expose trunk details over metadata API

2016-10-07 Thread Armando Migliaccio
My suggestion for you would be to revise/simplify the problem statement,
but I feel that until we get to close the circle on the Nova side,
there's hardly anything we can do on the Neutron side.

** Changed in: neutron
   Status: Won't Fix => Incomplete

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1631371

Title:
  [RFE] Expose trunk details over metadata API

Status in neutron:
  Incomplete

Bug description:
  Enable bringup of subports via exposing trunk/subport details over
  the metadata API

  With the completion of the trunk port feature in Newton (Neutron
  bp/vlan-aware-vms [1]), trunk and subports are now available. But the
  bringup of the subports' VLAN interfaces inside an instance is not
  automatic. In Newton there's no easy way to pass information about
  the subports to the guest operating system. But using the metadata
  API we can change this.

  Problem Description
  ---

  To bring up (and/or tear down) a subport the guest OS

  (a) must know the segmentation-type and segmentation-id of a subport
  as set in 'openstack network trunk create/set --subport'

  (b) must know the MAC address of a subport
  as set in 'openstack port create'

  (c) must know which vNIC the subport belongs to

  (d) may need to know when were subports added or removed
  (if they are added or removed during the lifetime of an instance)

  Since subports do not have a corresponding vNIC, the approach used
  for regular ports (with a vNIC) cannot work.

  This write-up addresses problems (a), (b) and (c), but not (d).

  Proposed Change
  ---

  Here we propose a change involving both Nova and Neutron to expose
  the information needed via the metadata API.

  Information covering (a) and (b) is already available (read-only)
  in the 'trunk_details' attribute of the trunk parent port (ie. the
  port which the instance was booted with). [2]

  We propose to use the MAC address of the trunk parent port to cover
  (c). We recognize this may occasionally be problematic, because MAC
  addresses (of ports belonging to different neutron networks) are not
  guaranteed to be unique, therefore collision may happen. But this seems
  to be a small price for avoiding the complexity of other solutions.

  The mechanism would be the following. Let's suppose we have port0
  which is a trunk parent port and instance0 was booted with '--nic
  port-id=port0'. On every update of port0's trunk_details Neutron
  constructs the following JSON structure:

  PORT0-DETAILS = {
  "mac_address": PORT0-MAC-ADDRESS, "trunk_details":
  PORT0-TRUNK-DETAILS
  }

  Then Neutron sets a metadata key-value pair of instance0, equivalent
  to the following nova command:

  nova meta set instance0 trunk_details::PORT0-MAC-ADDRESS=PORT0-DETAILS

  Nova in Newton limits meta values to <= 255 characters, this limit
  must be raised. Assuming the current format of trunk_details roughly
  150 characters/subport are needed. Alternatively meta values could
  have unlimited length - at least for the service tenant used by
  Neutron. (Though tenant-specific API validators may not be a good
  idea.) The 'values' column of the the 'instance_metadata' table should
  be altered from VARCHAR(255) to TEXT() in a Nova DB migration.
  (A slightly related bug report: [3])

  A program could read
  http://169.254.169.254/openstack/2016-06-30/meta_data.json and
  bring up the subport VLAN interfaces accordingly. This program is
  not covered here; however, it is worth pointing out that it could be
  called by cloud-init.
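Such a guest-side program could be sketched like this. The sub_ports field names mirror Neutron's trunk_details port attribute, but the overall PORT0-DETAILS layout is this proposal's, so treat the structure as an assumption; the example values are illustrative.

```python
def vlan_commands(parent_ifname, details):
    """Emit the ip(8) commands to bring up each VLAN subport (sketch)."""
    cmds = []
    for sub in details["trunk_details"]["sub_ports"]:
        if sub.get("segmentation_type") != "vlan":
            continue  # only VLAN subports handled in this sketch
        vid = sub["segmentation_id"]
        link = "%s.%d" % (parent_ifname, vid)
        cmds.append("ip link add link %s name %s type vlan id %d"
                    % (parent_ifname, link, vid))
        cmds.append("ip link set %s address %s"
                    % (link, sub["mac_address"]))
        cmds.append("ip link set %s up" % link)
    return cmds


details = {
    "mac_address": "fa:16:3e:aa:bb:cc",  # parent port MAC, covers (c)
    "trunk_details": {
        "sub_ports": [{"segmentation_type": "vlan",  # covers (a)
                       "segmentation_id": 101,
                       "mac_address": "fa:16:3e:dd:ee:ff"}],  # covers (b)
    },
}
cmds = vlan_commands("ens3", details)
```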

  Alternatives
  

  (1) The MAC address of a parent port can be reused for all its child
  ports (when creating the child ports). Then VLAN subinterfaces
  of a network interface will have the correct MAC address by
  default. Segmentation type and ID can be shared in other ways, for
  example as a VLAN plan embedded into a golden image. This approach
  could even partially solve problem (d), however it cannot solve problem
  (a) in the dynamic case. Use of this approach is currently blocked
  by an openvswitch firewall driver bug. [4][5]

  (2) Generate and inject a subport bringup script into the instance
  as user data. Cannot handle subports added or removed after VM boot.

  (3) An alternative solution to problem (c) could rely on the
  preservation of ordering between NICs passed to nova boot and NICs
  inside an instance. However this would turn the update of trunk_details
  into an instance-level operation instead of the port-level operation
  proposed here. Plus it would fail if this ordering is ever lost.

  References
  --

  [1] https://blueprints.launchpad.net/neutron/+spec/vlan-aware-vms
  [2] 
https://review.openstack.org/#q,Id23ce8fc16c6ea6a405cb8febf8470a5bf3bcb43,n,z
  [3] https://bugs.launchpad.net/nova/+bug/1117923
  [4] https://bugs.launchpad.net/neutron

[Yahoo-eng-team] [Bug 1631432] [NEW] port-update fails if allowed_address_pair is not a dict

2016-10-07 Thread Tomasz Głuch
Public bug reported:

The CLI help is misleading. Neutron port-update called with the
parameters described in the documentation returns an error.

neutron help port-update
  ..
  --allowed-address-pair ip_address=IP_ADDR[,mac_address=MAC_ADDR]
Allowed address pair associated with the port.You can
repeat this option.

# neutron port-update 3f36328f-0629-4e41-afa8-e2992815bcd0 
--allowed-address-pairs ip_address=10.0.0.1
The number of allowed address pair exceeds the maximum 10.
Neutron server returns request_ids: ['req-62e258cc-d47d-4ab7-8e69-a13c50865042']

It works correctly when the data type is enforced explicitly:
# neutron port-update 3f36328f-0629-4e41-afa8-e2992815bcd0 
--allowed-address-pairs type=dict list=true ip_address=10.0.0.1
Updated port: 3f36328f-0629-4e41-afa8-e2992815bcd0

It should always be a list of dicts, even when only one pair is given.

The CLI doc should be corrected.

Furthermore, input data in neutron-server seems not to be validated correctly. 
The reason for the misleading exception about an exceeded number of address 
pairs is an implicit test of the length of the user data. For a list of dicts 
this is the number of list elements, i.e. the number of address pairs. When 
only one pair is given as a bare string, it returns the length of the string 
"ip_address=10.0.0.1" (19 characters), which is greater than 10. There is a 
try-except clause for a TypeError exception, but it is not thrown in this case.
This bug is observed if no other pairs are already defined on the given port. 
Otherwise the lists are merged and a TypeError is thrown.

def _validate_allowed_address_pairs(address_pairs, valid_values=None):
..
try:
if len(address_pairs) > cfg.CONF.max_allowed_address_pair:
raise AllowedAddressPairExhausted(
quota=cfg.CONF.max_allowed_address_pair)
except TypeError:
raise webob.exc.HTTPBadRequest(
_("Allowed address pairs must be a list."))
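The stricter validation suggested above could be sketched as follows. This is illustrative, not neutron's code: check the type before the length, so a bare string fails with the intended "must be a list" message instead of tripping the quota check.

```python
MAX_PAIRS = 10  # stands in for cfg.CONF.max_allowed_address_pair


def validate_pairs(address_pairs):
    """Type check first, then length check (sketch of the fix)."""
    if not isinstance(address_pairs, list):
        raise TypeError("Allowed address pairs must be a list.")
    if len(address_pairs) > MAX_PAIRS:
        raise ValueError("The number of allowed address pairs "
                         "exceeds the maximum %d." % MAX_PAIRS)


# The failure mode from the report: len() of the raw string exceeds
# the quota of 10, so the quota branch fires and no TypeError is raised.
assert len("ip_address=10.0.0.1") > MAX_PAIRS
validate_pairs([{"ip_address": "10.0.0.1"}])  # a one-element list is fine
```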

** Affects: neutron
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1631432

Title:
  port-update fails if allowed_address_pair is not a dict

Status in neutron:
  New

Bug description:
  The CLI help is misleading. Neutron port-update called with the
  parameters described in the documentation returns an error.

  neutron help port-update
..
--allowed-address-pair ip_address=IP_ADDR[,mac_address=MAC_ADDR]
  Allowed address pair associated with the port.You can
  repeat this option.

  # neutron port-update 3f36328f-0629-4e41-afa8-e2992815bcd0 
--allowed-address-pairs ip_address=10.0.0.1
  The number of allowed address pair exceeds the maximum 10.
  Neutron server returns request_ids: 
['req-62e258cc-d47d-4ab7-8e69-a13c50865042']

  It works correctly when the data type is enforced explicitly:
  # neutron port-update 3f36328f-0629-4e41-afa8-e2992815bcd0 
--allowed-address-pairs type=dict list=true ip_address=10.0.0.1
  Updated port: 3f36328f-0629-4e41-afa8-e2992815bcd0

  It should always be a list of dicts, even when only one pair is given.

  The CLI doc should be corrected.

  Furthermore, input data in neutron-server seems not to be validated 
correctly. The reason for the misleading exception about an exceeded number of 
address pairs is an implicit test of the length of the user data. For a list 
of dicts this is the number of list elements, i.e. the number of address 
pairs. When only one pair is given as a bare string, it returns the length of 
the string "ip_address=10.0.0.1" (19 characters), which is greater than 10. 
There is a try-except clause for a TypeError exception, but it is not thrown 
in this case.
  This bug is observed if no other pairs are already defined on the given 
port. Otherwise the lists are merged and a TypeError is thrown.

  def _validate_allowed_address_pairs(address_pairs, valid_values=None):
  ..
  try:
  if len(address_pairs) > cfg.CONF.max_allowed_address_pair:
  raise AllowedAddressPairExhausted(
  quota=cfg.CONF.max_allowed_address_pair)
  except TypeError:
  raise webob.exc.HTTPBadRequest(
  _("Allowed address pairs must be a list."))

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1631432/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1559543] Re: cloud-init does not configure or start networking on gentoo

2016-10-07 Thread Matthew Thode
It starts it; it was just being installed into the default runlevel instead
of boot (see the Gentoo bug).

** Changed in: cloud-init
   Status: New => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to cloud-init.
https://bugs.launchpad.net/bugs/1559543

Title:
  cloud-init does not configure or start networking on gentoo

Status in cloud-init:
  Fix Released
Status in cloud-init package in Gentoo Linux:
  Unknown

Bug description:
  the version of cloud-init I used was 0.7.6 as there are no newer
  versions to test with

  you can build an image to test with with diskimage-builder if you wish
  to test

  I'm also at castle so let me know if you want to meet up.

To manage notifications about this bug go to:
https://bugs.launchpad.net/cloud-init/+bug/1559543/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1631430] [NEW] Only one entry created for nova-osapi_compute and nova-metadata services in multi-n-api environment

2016-10-07 Thread Sujitha
Public bug reported:

Description
===

In nova.conf, osapi_compute_listen and metadata_listen default to 0.0.0.0. 
Because of this, when multiple n-api nodes are set up in a multinode 
environment, only one entry is created for the nova-osapi_compute and 
nova-metadata services in the nova.services table.

This doesn't have any effect when the n-api services on both nodes have the 
same version. But when n-api on node 1 is upgraded, n-api on node 2 refuses to 
start due to a "Service version too old" exception.
This behavior can be changed by having two entries for these services in the db.

Steps to reproduce
==

1. Set up a multinode devstack environment with all controller services on one 
node and n-api & n-cpu on another node.
2. Now you have two n-api nodes.
3. Check nova.services table.

Expected result
===

There should be two entries for nova-osapi_compute and nova-metadata
services in nova.services table one for each host.

Actual result
=

Only one entry created for these services.
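A minimal model of the behaviour reported above: if both API nodes record the same host value (the 0.0.0.0 listen address, as the description implies), an upsert keyed on (host, binary) collapses them into one row. The keys and fields here are illustrative, not nova's actual schema.

```python
services = {}  # stand-in for the nova.services table


def service_update(host, binary, version):
    """Upsert a service record keyed on (host, binary) (sketch)."""
    services[(host, binary)] = {"host": host, "binary": binary,
                                "version": version}


service_update("0.0.0.0", "nova-osapi_compute", 15)  # upgraded node 1
service_update("0.0.0.0", "nova-osapi_compute", 13)  # node 2 overwrites
# One surviving row carries whichever version reported last, which is
# what trips the "Service version too old" check on the older node.
```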

** Affects: nova
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1631430

Title:
  Only one entry created for nova-osapi_compute and nova-metadata
  services in multi-n-api environment

Status in OpenStack Compute (nova):
  New

Bug description:
  Description
  ===

  In nova.conf, osapi_compute_listen and metadata_listen default to 0.0.0.0. 
Because of this, when multiple n-api nodes are set up in a multinode 
environment, only one entry is created for the nova-osapi_compute and 
nova-metadata services in the nova.services table.

  This doesn't have any effect when the n-api services on both nodes have the 
same version. But when n-api on node 1 is upgraded, n-api on node 2 refuses to 
start due to a "Service version too old" exception.
  This behavior can be changed by having two entries for these services in the 
db.

  Steps to reproduce
  ==

  1. Set up a multinode devstack environment with all controller services on 
one node and n-api & n-cpu on another node.
  2. Now you have two n-api nodes.
  3. Check nova.services table.

  Expected result
  ===

  There should be two entries for nova-osapi_compute and nova-metadata
  services in nova.services table one for each host.

  Actual result
  =

  Only one entry created for these services.

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1631430/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1580899] Re: Overlapped new router interface cannot remove

2016-10-07 Thread Ihar Hrachyshka
*** This bug is a duplicate of bug 1523859 ***
https://bugs.launchpad.net/bugs/1523859

** This bug has been marked a duplicate of bug 1523859
   Failing router interface add changes port device_id/device_owner attributes

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1580899

Title:
  Overlapped new router interface cannot remove

Status in neutron:
  Fix Released

Bug description:
  To reproduce the bug:
  1. Create a router (could be any type: legacy, HA, DVR, DVR + HA)
  2. Create network1 with subnet1 192.168.111.0/24 gateway 192.168.111.1
  3. Create network2 with subnet2 192.168.111.0/24 gateway 192.168.111.254
  4. Create router interface to subnet1 by using subnet1's subnet_id
  5. Create a port1 with IP 192.168.111.254 in subnet2
  6. Create router interface to subnet2 by using port1's port_id

  Then you will get some API exception like:
  Bad router request: Cidr 192.168.111.0/24 of subnet 
4c230a4b-7d34-4e11-9351-4fa720c94004
  overlaps with cidr 192.168.111.0/24 of subnet 
ba280c8a-c761-407d-bcfe-741dae8a37d3.
  Neutron server returns request_ids: 
['req-d6488a58-44a8-40c8-8e9e-fad94e43bafd']

  And finally port1 can never be deleted.
  Exception trace:
  2016-05-12 15:13:49.847 20792 ERROR neutron.callbacks.manager 
[req-5274c5a0-790d-463f-a6cd-8334b9944a60 3024b3c2f2da48fbbf426084b0706f84 
5ff1da9c235c4ebcaefeecf3fff7eb11 - - -] Error during notification for 
neutron.db.l3_db._prevent_l3_port_delete_callback port, before_delete
  2016-05-12 15:13:49.847 20792 ERROR neutron.callbacks.manager Traceback (most 
recent call last):
  2016-05-12 15:13:49.847 20792 ERROR neutron.callbacks.manager   File 
"/usr/lib/python2.7/site-packages/neutron/callbacks/manager.py", line 146, in 
_notify_loop
  2016-05-12 15:13:49.847 20792 ERROR neutron.callbacks.manager 
callback(resource, event, trigger, **kwargs)
  2016-05-12 15:13:49.847 20792 ERROR neutron.callbacks.manager   File 
"/usr/lib/python2.7/site-packages/neutron/db/l3_db.py", line 1851, in 
_prevent_l3_port_delete_callback
  2016-05-12 15:13:49.847 20792 ERROR neutron.callbacks.manager 
l3plugin.prevent_l3_port_deletion(context, port_id)
  2016-05-12 15:13:49.847 20792 ERROR neutron.callbacks.manager   File 
"/usr/lib/python2.7/site-packages/neutron/db/l3_db.py", line 1451, in 
prevent_l3_port_deletion
  2016-05-12 15:13:49.847 20792 ERROR neutron.callbacks.manager 
reason=reason)
  2016-05-12 15:13:49.847 20792 ERROR neutron.callbacks.manager 
ServicePortInUse: Port cbccd81c-d8c0-4dfd-bff3-24c4edeb6825 cannot be deleted 
directly via the port API: has device owner network:router_interface.
  2016-05-12 15:13:49.847 20792 ERROR neutron.callbacks.manager
  2016-05-12 15:13:49.849 20792 INFO neutron.api.v2.resource 
[req-5274c5a0-790d-463f-a6cd-8334b9944a60 3024b3c2f2da48fbbf426084b0706f84 
5ff1da9c235c4ebcaefeecf3fff7eb11 - - -] delete failed (client error): Port 
cbccd81c-d8c0-4dfd-bff3-24c4edeb6825 cannot be deleted directly via the port 
API: has device owner network:router_interface.

  
  And this port also cannot be removed by remove_router_interface:

  2016-05-12 16:20:28.535 21756 INFO neutron.api.v2.resource 
[req-3ba33215-e09e-423d-a7aa-07cdd66fea61 3024b3c2f2da48fbbf426084b0706f84 
5ff1da9c235c4ebcaefeecf3fff7eb11 - - -] remove_router_interface failed (client 
error): Router 3895a472-a64c-424e-b0c9-0f610db88f67 does not have an interface 
with id 1760df10-f5f0-4182-a0cc-a144d5aa46c5
  2016-05-12 16:20:28.536 21756 INFO neutron.wsgi 
[req-3ba33215-e09e-423d-a7aa-07cdd66fea61 3024b3c2f2da48fbbf426084b0706f84 
5ff1da9c235c4ebcaefeecf3fff7eb11 - - -] 172.16.5.10 - - [12/May/2016 16:20:28] 
"PUT 
/v2.0/routers/3895a472-a64c-424e-b0c9-0f610db88f67/remove_router_interface.json 
HTTP/1.1" 404 418 0.173747

  
  This patch introduces this new bug:
  https://bugs.launchpad.net/neutron/+bug/1475093
  https://review.openstack.org/#/c/202357/

  Some bugs maybe related:
  https://bugs.launchpad.net/bgpvpn/+bug/1537067
  https://bugs.launchpad.net/neutron/+bug/1537091

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1580899/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1629797] Re: resolve service in nsswitch.conf adds 25 seconds to failed lookups before systemd-resolved is up

2016-10-07 Thread Scott Moser
** Changed in: cloud-init (Ubuntu)
   Status: Fix Released => In Progress

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to cloud-init.
https://bugs.launchpad.net/bugs/1629797

Title:
  resolve service in nsswitch.conf adds 25 seconds to failed lookups
  before systemd-resolved is up

Status in cloud-init:
  Fix Committed
Status in cloud-init package in Ubuntu:
  In Progress
Status in dbus package in Ubuntu:
  Triaged

Bug description:
  During boot, cloud-init does DNS resolution checks to see if particular
  metadata services are available (in order to determine which cloud it
  is running on).  These checks happen before systemd-resolved is up[0]
  and if they resolve unsuccessfully they take 25 seconds to complete.

  This has substantial impact on boot time in all contexts, because
  cloud-init attempts to resolve three known-invalid addresses ("does-
  not-exist.example.com.", "example.invalid." and a random string) to
  enable it to detect when it's running in an environment where a DNS
  server will always return some sort of redirect.  As such, we're
  talking a minimum impact of 75 seconds in all environments.  This
  increases when cloud-init is configured to check for multiple
  environments.

  This means that yakkety is consistently taking 2-3 minutes to boot on
  EC2 and GCE, compared to the ~30 seconds of the first boot and ~10
  seconds thereafter in xenial.

To manage notifications about this bug go to:
https://bugs.launchpad.net/cloud-init/+bug/1629797/+subscriptions



[Yahoo-eng-team] [Bug 1555320] Re: "Migration for instance 0763227e-e192-4e0b-a49d-0ea0b181fca6 refers to another host's instance!" should not be an error

2016-10-07 Thread OpenStack Infra
Reviewed:  https://review.openstack.org/382195
Committed: 
https://git.openstack.org/cgit/openstack/nova/commit/?id=f085fbd7d3bfdf016a37ccc7e6e28786425f0e4e
Submitter: Jenkins
Branch:master

commit f085fbd7d3bfdf016a37ccc7e6e28786425f0e4e
Author: Timofey Durakov 
Date:   Tue Oct 4 16:42:09 2016 +0300

Change log level to debug for migrations pairing

For resize/cold-migration it's possible that instance
already changed host to destination, but no confirm/revert
has happened yet. In that case the resource tracker starts spamming
errors, because it's impossible to match migration and instance.
It's safe to lower log level to debug in that case.

Change-Id: I70cb7426e0e2849ee7d01205ce7b2d883a126d66
Closes-Bug: #1555320


** Changed in: nova
   Status: In Progress => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1555320

Title:
  "Migration for instance 0763227e-e192-4e0b-a49d-0ea0b181fca6 refers to
  another host's instance!" should not be an error

Status in OpenStack Compute (nova):
  Fix Released

Bug description:
  This happens in a periodic task after migrations complete:

  http://logs.openstack.org/15/290715/1/check/gate-tempest-dsvm-
  multinode-
  full/03f92ff/logs/screen-n-cpu.txt.gz?level=INFO#_2016-03-09_18_09_00_775

  http://logs.openstack.org/15/290715/1/check/gate-tempest-dsvm-
  multinode-
  full/03f92ff/logs/screen-n-cpu.txt.gz?level=INFO#_2016-03-09_18_09_01_748

  It shouldn't be an error log since it's normal, see how many times it
  hits in multinode gate runs:

  
http://logstash.openstack.org/#dashboard/file/logstash.json?query=message%3A%5C%22Migration%20for%20instance%5C%22%20AND%20message%3A%5C%22refers%20to%20another%20host's%20instance!%5C%22%20AND%20tags%3A%5C%22screen-n-cpu.txt%5C%22&from=7d

  There are over 2500 hits in 7 days on this error message.

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1555320/+subscriptions



[Yahoo-eng-team] [Bug 1631371] Re: [RFE] Expose trunk details over metadata API

2016-10-07 Thread Armando Migliaccio
There are a couple of issues right now:

a) If a trunk dynamically changes after a consumer of the Neutron API
accesses trunk-details, the consumer won't know unless it polls.

b) From within the guest there is no good way of knowing whether a trunk
is dynamically changing after boot. Mind you, it's doable but it's not
clean.

While I think a) may be worth addressing in the form of an os-external
notification (which should not require an RFE), I don't see how Neutron
should be in the business of dealing with b). Shortcutting components
together is usually a manifestation of tight coupling, and no-one wants
that.


** Changed in: neutron
   Status: Incomplete => Won't Fix

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1631371

Title:
  [RFE] Expose trunk details over metadata API

Status in neutron:
  Won't Fix

Bug description:
  Enable bringup of subports via exposing trunk/subport details over
  the metadata API

  With the completion of the trunk port feature in Newton (Neutron
  bp/vlan-aware-vms [1]), trunk and subports are now available. But the
  bringup of the subports' VLAN interfaces inside an instance is not
  automatic. In Newton there's no easy way to pass information about
  the subports to the guest operating system. But using the metadata
  API we can change this.

  Problem Description
  ---

  To bring up (and/or tear down) a subport the guest OS

  (a) must know the segmentation-type and segmentation-id of a subport
  as set in 'openstack network trunk create/set --subport'

  (b) must know the MAC address of a subport
  as set in 'openstack port create'

  (c) must know which vNIC the subport belongs to

  (d) may need to know when were subports added or removed
  (if they are added or removed during the lifetime of an instance)

  Since subports do not have a corresponding vNIC, the approach used
  for regular ports (with a vNIC) cannot work.

  This write-up addresses problems (a), (b) and (c), but not (d).

  Proposed Change
  ---

  Here we propose a change involving both Nova and Neutron to expose
  the information needed via the metadata API.

  Information covering (a) and (b) is already available (read-only)
  in the 'trunk_details' attribute of the trunk parent port (ie. the
  port which the instance was booted with). [2]

  We propose to use the MAC address of the trunk parent port to cover
  (c). We recognize this may occasionally be problematic, because MAC
  addresses (of ports belonging to different neutron networks) are not
  guaranteed to be unique, so collisions may happen. But this seems
  to be a small price for avoiding the complexity of other solutions.

  The mechanism would be the following. Let's suppose we have port0
  which is a trunk parent port and instance0 was booted with '--nic
  port-id=port0'. On every update of port0's trunk_details Neutron
  constructs the following JSON structure:

  PORT0-DETAILS = {
  "mac_address": PORT0-MAC-ADDRESS, "trunk_details":
  PORT0-TRUNK-DETAILS
  }

  Then Neutron sets a metadata key-value pair of instance0, equivalent
  to the following nova command:

  nova meta set instance0 trunk_details::PORT0-MAC-ADDRESS=PORT0-DETAILS
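
  The construction of that key/value pair could be sketched as follows.
  This is a hypothetical helper (`build_trunk_meta` is my own name); the
  port dict shape mirrors the Neutron port attributes quoted above:

```python
import json

def build_trunk_meta(port):
    """Build the instance metadata key and JSON value for a trunk parent port."""
    details = {
        "mac_address": port["mac_address"],
        "trunk_details": port["trunk_details"],
    }
    # equivalent to: nova meta set <instance> trunk_details::<MAC>=<JSON>
    key = "trunk_details::%s" % port["mac_address"]
    return key, json.dumps(details)
```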

  Nova in Newton limits meta values to <= 255 characters; this limit
  must be raised. Assuming the current format of trunk_details, roughly
  150 characters per subport are needed. Alternatively, meta values could
  have unlimited length, at least for the service tenant used by
  Neutron. (Though tenant-specific API validators may not be a good
  idea.) The 'values' column of the 'instance_metadata' table should
  be altered from VARCHAR(255) to TEXT() in a Nova DB migration.
  (A slightly related bug report: [3])

  A program could read
  http://169.254.169.254/openstack/2016-06-30/meta_data.json and
  bring up the subport VLAN interfaces accordingly. This program is
  not covered here, however it is worth pointing out that it could be
  called by cloud-init.
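
  Such a guest-side program might, for each subport, derive the `ip link`
  commands that create the VLAN subinterfaces. A hedged sketch: the
  metadata key format and the trunk_details/sub_ports shape follow the
  proposal above, the mapping from parent MACs to interface names is left
  to the caller, and commands are returned rather than executed so the
  logic is easy to test:

```python
import json

def vlan_bringup_commands(meta, mac_to_iface):
    """Turn trunk_details metadata entries into `ip link` commands.

    meta: the "meta" dict from /openstack/2016-06-30/meta_data.json
    mac_to_iface: parent-port MAC -> guest interface name, e.g. {...: "eth0"}
    """
    cmds = []
    for key, value in meta.items():
        if not key.startswith("trunk_details::"):
            continue
        details = json.loads(value)
        parent = mac_to_iface.get(details["mac_address"])
        if parent is None:
            continue  # parent vNIC not present in this guest
        for sub in details["trunk_details"].get("sub_ports", []):
            if sub.get("segmentation_type") != "vlan":
                continue  # only VLAN subports map to subinterfaces here
            vid = sub["segmentation_id"]
            cmds.append(["ip", "link", "add", "link", parent,
                         "name", "%s.%s" % (parent, vid),
                         "type", "vlan", "id", str(vid)])
    return cmds
```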

  Alternatives
  

  (1) The MAC address of a parent port can be reused for all its child
  ports (when creating the child ports). Then VLAN subinterfaces
  of a network interface will have the correct MAC address by
  default. Segmentation type and ID can be shared in other ways, for
  example as a VLAN plan embedded into a golden image. This approach
  could even partially solve problem (d), however it cannot solve problem
  (a) in the dynamic case. Use of this approach is currently blocked
  by an openvswitch firewall driver bug. [4][5]

  (2) Generate and inject a subport bringup script into the instance
  as user data. Cannot handle subports added or removed after VM boot.

  (3) An alternative solution to problem (c) could rely on the
  preservation of ordering between NICs passed to nova boot and NICs
  inside an instance. However this would turn the update of tr

[Yahoo-eng-team] [Bug 1599086] Re: Security groups: exception under load

2016-10-07 Thread Ihar Hrachyshka
We can't bump the oslo.db version in Mitaka. As for Newton, it's already
>=4.10.0. I guess we can close the bug.

** Changed in: neutron
   Status: In Progress => Won't Fix

** Tags removed: mitaka-backport-potential

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1599086

Title:
  Security groups: exception under load

Status in neutron:
  Won't Fix
Status in oslo.db:
  Fix Committed

Bug description:
  
  For one of the iterations, adding a router interface failed with the DB
error below.

  2016-07-04 17:12:59.057 ERROR neutron.api.v2.resource 
[req-33bb4fd7-25a5-4460-82d0-ab5e5b8d574c 
ctx_rally_8204b9df57e44bcf9804a278c35bf2a4_user_0 
8204b9df57e44bcf9804a278c35bf2a4] add_router_interface failed
  2016-07-04 17:12:59.057 TRACE neutron.api.v2.resource Traceback (most recent 
call last):
  2016-07-04 17:12:59.057 TRACE neutron.api.v2.resource   File 
"/opt/stack/neutron/neutron/api/v2/resource.py", line 84, in resource
  2016-07-04 17:12:59.057 TRACE neutron.api.v2.resource result = 
method(request=request, **args)
  2016-07-04 17:12:59.057 TRACE neutron.api.v2.resource   File 
"/usr/local/lib/python2.7/dist-packages/oslo_db/api.py", line 148, in wrapper
  2016-07-04 17:12:59.057 TRACE neutron.api.v2.resource ectxt.value = 
e.inner_exc
  2016-07-04 17:12:59.057 TRACE neutron.api.v2.resource   File 
"/usr/local/lib/python2.7/dist-packages/oslo_utils/excutils.py", line 220, in 
__exit__
  2016-07-04 17:12:59.057 TRACE neutron.api.v2.resource self.force_reraise()
  2016-07-04 17:12:59.057 TRACE neutron.api.v2.resource   File 
"/usr/local/lib/python2.7/dist-packages/oslo_utils/excutils.py", line 196, in 
force_reraise
  2016-07-04 17:12:59.057 TRACE neutron.api.v2.resource 
six.reraise(self.type_, self.value, self.tb)
  2016-07-04 17:12:59.057 TRACE neutron.api.v2.resource   File 
"/usr/local/lib/python2.7/dist-packages/oslo_db/api.py", line 138, in wrapper
  2016-07-04 17:12:59.057 TRACE neutron.api.v2.resource return f(*args, 
**kwargs)
  2016-07-04 17:12:59.057 TRACE neutron.api.v2.resource   File 
"/opt/stack/neutron/neutron/api/v2/base.py", line 217, in _handle_action
  2016-07-04 17:12:59.057 TRACE neutron.api.v2.resource ret_value = 
getattr(self._plugin, name)(*arg_list, **kwargs)
  2016-07-04 17:12:59.057 TRACE neutron.api.v2.resource   File 
"/opt/stack/vmware-nsx/vmware_nsx/plugins/nsx_v3/plugin.py", line 1509, in 
add_router_interface
  2016-07-04 17:12:59.057 TRACE neutron.api.v2.resource interface=info)
  2016-07-04 17:12:59.057 TRACE neutron.api.v2.resource   File 
"/opt/stack/vmware-nsx/vmware_nsx/dhcp_meta/rpc.py", line 121, in 
handle_router_metadata_access
  2016-07-04 17:12:59.057 TRACE neutron.api.v2.resource plugin, 
ctx_elevated, router_id)
  2016-07-04 17:12:59.057 TRACE neutron.api.v2.resource   File 
"/opt/stack/vmware-nsx/vmware_nsx/dhcp_meta/rpc.py", line 171, in 
_create_metadata_access_network
  2016-07-04 17:12:59.057 TRACE neutron.api.v2.resource {'network': 
net_data})
  2016-07-04 17:12:59.057 TRACE neutron.api.v2.resource   File 
"/opt/stack/vmware-nsx/vmware_nsx/plugins/nsx_v3/plugin.py", line 452, in 
create_network
  2016-07-04 17:12:59.057 TRACE neutron.api.v2.resource 
self._ensure_default_security_group(context, tenant_id)
  2016-07-04 17:12:59.057 TRACE neutron.api.v2.resource   File 
"/opt/stack/neutron/neutron/db/securitygroups_db.py", line 710, in 
_ensure_default_security_group
  2016-07-04 17:12:59.057 TRACE neutron.api.v2.resource return 
self._create_default_security_group(context, tenant_id)
  2016-07-04 17:12:59.057 TRACE neutron.api.v2.resource   File 
"/opt/stack/neutron/neutron/db/securitygroups_db.py", line 721, in 
_create_default_security_group
  2016-07-04 17:12:59.057 TRACE neutron.api.v2.resource context, 
security_group, default_sg=True)
  2016-07-04 17:12:59.057 TRACE neutron.api.v2.resource   File 
"/opt/stack/vmware-nsx/vmware_nsx/plugins/nsx_v3/plugin.py", line 1759, in 
create_security_group
  2016-07-04 17:12:59.057 TRACE neutron.api.v2.resource 
firewall.delete_section(section_id)
  2016-07-04 17:12:59.057 TRACE neutron.api.v2.resource   File 
"/usr/local/lib/python2.7/dist-packages/oslo_utils/excutils.py", line 220, in 
__exit__
  2016-07-04 17:12:59.057 TRACE neutron.api.v2.resource self.force_reraise()
  2016-07-04 17:12:59.057 TRACE neutron.api.v2.resource   File 
"/usr/local/lib/python2.7/dist-packages/oslo_utils/excutils.py", line 196, in 
force_reraise
  2016-07-04 17:12:59.057 TRACE neutron.api.v2.resource 
six.reraise(self.type_, self.value, self.tb)
  2016-07-04 17:12:59.057 TRACE neutron.api.v2.resource   File 
"/opt/stack/vmware-nsx/vmware_nsx/plugins/nsx_v3/plugin.py", line 1736, in 
create_security_group
  2016-07-04 17:12:59.057 TRACE neutron.api.v2.resource context, 
security_group, default_sg))
  2016-07-04 17:12:59.057 TRACE neutron.api.v2.resource

[Yahoo-eng-team] [Bug 1546110] Re: DB error causes router rescheduling loop to fail

2016-10-07 Thread Ihar Hrachyshka
** Tags removed: neutron-proactive-backport-potential

** No longer affects: neutron/kilo

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1546110

Title:
  DB error causes router rescheduling loop to fail

Status in neutron:
  Fix Released

Bug description:
  In router rescheduling looping task db call to get down bindings is
  done outside of try/except block which may cause task to fail (see
  traceback below). Need to put db operation inside try/except.

  2016-02-15T10:44:44.259995+00:00 err: 2016-02-15 10:44:44.250 15419 ERROR 
oslo.service.loopingcall [req-79bce4c3-2e81-446c-8b37-6d30e3a964e2 - - - - -] 
Fixed interval looping call 
'neutron.services.l3_router.l3_router_plugin.L3RouterPlugin.reschedule_routers_from_down_agents'
 failed
  2016-02-15 10:44:44.250 15419 ERROR oslo.service.loopingcall Traceback (most 
recent call last):
  2016-02-15 10:44:44.250 15419 ERROR oslo.service.loopingcall File 
"/usr/lib/python2.7/dist-packages/oslo_service/loopingcall.py", line 113, in 
_run_loop
  2016-02-15 10:44:44.250 15419 ERROR oslo.service.loopingcall result = 
func(*self.args, **self.kw)
  2016-02-15 10:44:44.250 15419 ERROR oslo.service.loopingcall File 
"/usr/lib/python2.7/dist-packages/neutron/db/l3_agentschedulers_db.py", line 
101, in reschedule_routers_from_down_agents
  2016-02-15 10:44:44.250 15419 ERROR oslo.service.loopingcall down_bindings = 
self._get_down_bindings(context, cutoff)
  2016-02-15 10:44:44.250 15419 ERROR oslo.service.loopingcall File 
"/usr/lib/python2.7/dist-packages/neutron/db/l3_dvrscheduler_db.py", line 460, 
in _get_down_bindings
  2016-02-15 10:44:44.250 15419 ERROR oslo.service.loopingcall context, cutoff)
  2016-02-15 10:44:44.250 15419 ERROR oslo.service.loopingcall File 
"/usr/lib/python2.7/dist-packages/neutron/db/l3_agentschedulers_db.py", line 
149, in _get_down_bindings
  2016-02-15 10:44:44.250 15419 ERROR oslo.service.loopingcall return 
query.all()
  2016-02-15 10:44:44.250 15419 ERROR oslo.service.loopingcall File 
"/usr/lib/python2.7/dist-packages/sqlalchemy/orm/query.py", line 2399, in all
  2016-02-15 10:44:44.250 15419 ERROR oslo.service.loopingcall return list(self)
  2016-02-15 10:44:44.250 15419 ERROR oslo.service.loopingcall File 
"/usr/lib/python2.7/dist-packages/sqlalchemy/orm/query.py", line 2516, in 
__iter__
  2016-02-15 10:44:44.250 15419 ERROR oslo.service.loopingcall return 
self._execute_and_instances(context)
  2016-02-15 10:44:44.250 15419 ERROR oslo.service.loopingcall File 
"/usr/lib/python2.7/dist-packages/sqlalchemy/orm/query.py", line 2529, in 
_execute_and_instances
  2016-02-15 10:44:44.250 15419 ERROR oslo.service.loopingcall 
close_with_result=True)
  2016-02-15 10:44:44.250 15419 ERROR oslo.service.loopingcall File 
"/usr/lib/python2.7/dist-packages/sqlalchemy/orm/query.py", line 2520, in 
_connection_from_session
  2016-02-15 10:44:44.250 15419 ERROR oslo.service.loopingcall **kw)
  2016-02-15 10:44:44.250 15419 ERROR oslo.service.loopingcall File 
"/usr/lib/python2.7/dist-packages/sqlalchemy/orm/session.py", line 882, in 
connection
  2016-02-15 10:44:44.250 15419 ERROR oslo.service.loopingcall 
execution_options=execution_options)
  2016-02-15 10:44:44.250 15419 ERROR oslo.service.loopingcall File 
"/usr/lib/python2.7/dist-packages/sqlalchemy/orm/session.py", line 889, in 
_connection_for_bind
  2016-02-15 10:44:44.250 15419 ERROR oslo.service.loopingcall conn = 
engine.contextual_connect(**kw)
  2016-02-15 10:44:44.250 15419 ERROR oslo.service.loopingcall File 
"/usr/lib/python2.7/dist-packages/sqlalchemy/engine/base.py", line 2039, in 
contextual_connect
  2016-02-15 10:44:44.250 15419 ERROR oslo.service.loopingcall 
self._wrap_pool_connect(self.pool.connect, None),
  2016-02-15 10:44:44.250 15419 ERROR oslo.service.loopingcall File 
"/usr/lib/python2.7/dist-packages/sqlalchemy/engine/base.py", line 2078, in 
_wrap_pool_connect
  2016-02-15 10:44:44.250 15419 ERROR oslo.service.loopingcall e, dialect, self)
  2016-02-15 10:44:44.250 15419 ERROR oslo.service.loopingcall File 
"/usr/lib/python2.7/dist-packages/sqlalchemy/engine/base.py", line 1401, in 
_handle_dbapi_exception_noconnection
  2016-02-15 10:44:44.250 15419 ERROR oslo.service.loopingcall 
util.raise_from_cause(newraise, exc_info)
  2016-02-15 10:44:44.250 15419 ERROR oslo.service.loopingcall File 
"/usr/lib/python2.7/dist-packages/sqlalchemy/util/compat.py", line 199, in 
raise_from_cause
  2016-02-15 10:44:44.250 15419 ERROR oslo.service.loopingcall 
reraise(type(exception), exception, tb=exc_tb)
  2016-02-15 10:44:44.250 15419 ERROR oslo.service.loopingcall File 
"/usr/lib/python2.7/dist-packages/sqlalchemy/engine/base.py", line 2074, in 
_wrap_pool_connect
  2016-02-15 10:44:44.250 15419 ERROR oslo.service.loopingcall return fn()
  2016-02-15 10:44:44.250 15419 ERROR oslo.service.loopingcall File 
"/usr/lib/python2.7/dist-packages/sqlalchemy/pool.py", li

[Yahoo-eng-team] [Bug 1630929] Re: Affinity instance slot reservation

2016-10-07 Thread Sylvain Bauza
Nova doesn't have (yet) in its scope to manage instance allocations and
that really looks like a new feature that should be discussed in a spec.

http://docs.openstack.org/developer/nova/process.html#how-do-i-get-my-
code-merged

** Changed in: nova
   Status: New => Opinion

** Changed in: nova
   Importance: Undecided => Wishlist

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1630929

Title:
  Affinity instance slot reservation

Status in OpenStack Compute (nova):
  Opinion

Bug description:
  Currently ServerGroupAffinityFilter schedules all instances of a group
  on the same host, but it doesn't reserve any slots for future
  instances on the host.

  Example:
  max_instances_per_host=10
  quota_server_group_members=5

  When user_1 spawns 3 instances with the affinity server group policy, all 
the instances will be scheduled on the same host (host_A). 
  If user_2 also spawns instances and they are placed on host_A, the 
max_instances_per_host quota will be reached, so user_1 cannot add 2 new 
instances to the same server group and the error "No valid host found" will be 
returned.

  My proposal is to add new parameters to nova.conf to configure 
ServerGroupAffinityFilter:
  - enable_slots_reservation (Boolean)
  - reserved_slots_per_instance (-1 will reserve the difference between 
max_instances_per_host and quota_server_group_members; values greater than 0 
will reserve the indicated number of slots per group)

  The Nova scheduler checks whether a host has any instances with the
  affinity policy and, based on that, counts the available slots.
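
  One possible reading of that accounting, as a sketch. The parameter
  semantics here are my interpretation of the proposal above, not an
  agreed design:

```python
def available_slots(max_instances_per_host, quota_server_group_members,
                    instances_on_host, group_sizes,
                    reserved_slots_per_instance=-1):
    """Count slots left for new unrelated instances on a host.

    group_sizes: current member counts of affinity groups placed here.
    With -1, each group reserves enough room to grow to its member quota;
    a positive value reserves that many slots per group.
    """
    if reserved_slots_per_instance == -1:
        reserved = sum(max(0, quota_server_group_members - size)
                       for size in group_sizes)
    else:
        reserved = reserved_slots_per_instance * len(group_sizes)
    return max_instances_per_host - instances_on_host - reserved
```

  In the example above (max 10, quota 5, user_1's group has 3 members),
  2 slots stay reserved for the group, so only 10 - 3 - 2 = 5 unrelated
  instances still fit on host_A.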

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1630929/+subscriptions



[Yahoo-eng-team] [Bug 1631004] Re: neutron-netns-cleanup doesn't work with ip_gre kernel module

2016-10-07 Thread OpenStack Infra
Reviewed:  https://review.openstack.org/382966
Committed: 
https://git.openstack.org/cgit/openstack/neutron/commit/?id=0e5caebd452518b6be9c99248187ddc9926818ce
Submitter: Jenkins
Branch:master

commit 0e5caebd452518b6be9c99248187ddc9926818ce
Author: Jakub Libosvar 
Date:   Thu Oct 6 09:49:14 2016 -0400

Ignore gre0 and gretap0 devices in netns cleanup script.

This is tested by

http://git.openstack.org/cgit/openstack/neutron/tree/neutron/tests/functional/cmd/test_netns_cleanup.py?h=9.0.0#n49

Change-Id: I24ac257cafc7a2617215f1072509e70e40d23fea
Closes-Bug: #1631004


** Changed in: neutron
   Status: In Progress => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1631004

Title:
  neutron-netns-cleanup doesn't work with ip_gre kernel module

Status in neutron:
  Fix Released

Bug description:
  Kernel patch
  
https://github.com/torvalds/linux/commit/b2acd1dc3949cd60c571844d495594f05f0351f4
  introduced a dependency of openvswitch on ip_gre. The ip_gre kernel module
  creates gre0 and gretap0 interfaces that are present in all network
  namespaces (they are not separated per namespace).

  This prevents neutron-netns-cleanup from deleting a namespace, because it
  still thinks there are devices in the namespace (gre0, gretap0).
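
  The fix is to treat these kernel-created devices like the loopback
  device when deciding whether a namespace is empty. A minimal sketch of
  that check (device names from the bug; not the exact neutron code):

```python
# Devices present in every namespace once ip_gre is loaded; their presence
# must not be taken as evidence that the namespace is still in use.
IGNORED_DEVICES = {"lo", "gre0", "gretap0"}

def namespace_deletable(device_names):
    """True if the namespace holds only devices that every namespace has."""
    return all(name in IGNORED_DEVICES for name in device_names)
```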

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1631004/+subscriptions



[Yahoo-eng-team] [Bug 1631384] [NEW] New option for num_threads for state change server

2016-10-07 Thread OpenStack Infra
Public bug reported:

https://review.openstack.org/379578
Dear bug triager. This bug was created since a commit was marked with DOCIMPACT.
Your project "openstack/neutron" is set up so that we directly report the 
documentation bugs against it. If this needs changing, the docimpact-group 
option needs to be added for the project. You can ask the OpenStack infra team 
(#openstack-infra on freenode) for help if you need to.

commit 5c1516e1fb07bb3e026a3569396567c3907b31c6
Author: venkata anil 
Date:   Tue May 17 16:30:13 2016 +

New option for num_threads for state change server

Currently the max number of client connections (i.e. greenlets spawned
at a time) opened at any time by the WSGI server is set to 100 with
wsgi_default_pool_size[1].

This configuration may be fine for the neutron API server, but with
wsgi_default_pool_size(=100) requests the state change server
creates heavy CPU load on the agent.
So this server (which runs on the agents) needs a smaller value, i.e.
it can be configured to half the number of CPUs on the agent.

We use "ha_keepalived_state_change_server_threads" config option
to configure number of threads in state change server instead of
wsgi_default_pool_size.

[1] https://review.openstack.org/#/c/278007/

DocImpact: Add new config option -
ha_keepalived_state_change_server_threads, to configure number
of threads in state change server.

Closes-Bug: #1581580
Change-Id: I822ea3844792a7731fd24419b7e90e5aef141993
(cherry picked from commit 70ea188f5d87c45fb60ace8b8405274e5f6dd489)
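
The suggested sizing ("half the number of CPUs on the agent") can be
computed as below. This is an illustrative sketch only; the option itself
is set in the L3 agent configuration file:

```python
import multiprocessing

def default_state_change_threads():
    # half the agent's CPUs, but never fewer than one thread
    return max(1, multiprocessing.cpu_count() // 2)
```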

** Affects: neutron
 Importance: Undecided
 Status: New


** Tags: doc neutron

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1631384

Title:
  New option for num_threads for state change server

Status in neutron:
  New

Bug description:
  https://review.openstack.org/379578
  Dear bug triager. This bug was created since a commit was marked with 
DOCIMPACT.
  Your project "openstack/neutron" is set up so that we directly report the 
documentation bugs against it. If this needs changing, the docimpact-group 
option needs to be added for the project. You can ask the OpenStack infra team 
(#openstack-infra on freenode) for help if you need to.

  commit 5c1516e1fb07bb3e026a3569396567c3907b31c6
  Author: venkata anil 
  Date:   Tue May 17 16:30:13 2016 +

  New option for num_threads for state change server
  
  Currently the max number of client connections (i.e. greenlets spawned
  at a time) opened at any time by the WSGI server is set to 100 with
  wsgi_default_pool_size[1].
  
  This configuration may be fine for the neutron API server, but with
  wsgi_default_pool_size(=100) requests the state change server
  creates heavy CPU load on the agent.
  So this server (which runs on the agents) needs a smaller value, i.e.
  it can be configured to half the number of CPUs on the agent.
  
  We use "ha_keepalived_state_change_server_threads" config option
  to configure number of threads in state change server instead of
  wsgi_default_pool_size.
  
  [1] https://review.openstack.org/#/c/278007/
  
  DocImpact: Add new config option -
  ha_keepalived_state_change_server_threads, to configure number
  of threads in state change server.
  
  Closes-Bug: #1581580
  Change-Id: I822ea3844792a7731fd24419b7e90e5aef141993
  (cherry picked from commit 70ea188f5d87c45fb60ace8b8405274e5f6dd489)

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1631384/+subscriptions



[Yahoo-eng-team] [Bug 1631374] [NEW] New option for num_threads for state change server

2016-10-07 Thread OpenStack Infra
Public bug reported:

https://review.openstack.org/379582
Dear bug triager. This bug was created since a commit was marked with DOCIMPACT.
Your project "openstack/neutron" is set up so that we directly report the 
documentation bugs against it. If this needs changing, the docimpact-group 
option needs to be added for the project. You can ask the OpenStack infra team 
(#openstack-infra on freenode) for help if you need to.

commit 4387d4aedfa2ccf129eb957ee057ad9c9edac82d
Author: venkata anil 
Date:   Tue May 17 16:30:13 2016 +

New option for num_threads for state change server

Currently the max number of client connections (i.e. greenlets spawned
at a time) opened at any time by the WSGI server is set to 100 with
wsgi_default_pool_size[1].

This configuration may be fine for the neutron API server, but with
wsgi_default_pool_size(=100) requests the state change server
creates heavy CPU load on the agent.
So this server (which runs on the agents) needs a smaller value, i.e.
it can be configured to half the number of CPUs on the agent.

We use "ha_keepalived_state_change_server_threads" config option
to configure number of threads in state change server instead of
wsgi_default_pool_size.

[1] https://review.openstack.org/#/c/278007/

DocImpact: Add new config option -
ha_keepalived_state_change_server_threads, to configure number
of threads in state change server.

Closes-Bug: #1581580
Change-Id: I822ea3844792a7731fd24419b7e90e5aef141993
(cherry picked from commit 70ea188f5d87c45fb60ace8b8405274e5f6dd489)

** Affects: neutron
 Importance: Undecided
 Status: New


** Tags: doc neutron

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1631374

Title:
  New option for num_threads for state change server

Status in neutron:
  New

Bug description:
  https://review.openstack.org/379582
  Dear bug triager. This bug was created since a commit was marked with 
DOCIMPACT.
  Your project "openstack/neutron" is set up so that we directly report the 
documentation bugs against it. If this needs changing, the docimpact-group 
option needs to be added for the project. You can ask the OpenStack infra team 
(#openstack-infra on freenode) for help if you need to.

  commit 4387d4aedfa2ccf129eb957ee057ad9c9edac82d
  Author: venkata anil 
  Date:   Tue May 17 16:30:13 2016 +

  New option for num_threads for state change server
  
  Currently the max number of client connections (i.e. greenlets spawned
  at a time) opened at any time by the WSGI server is set to 100 with
  wsgi_default_pool_size[1].
  
  This configuration may be fine for the neutron API server, but with
  wsgi_default_pool_size(=100) requests the state change server
  creates heavy CPU load on the agent.
  So this server (which runs on the agents) needs a smaller value, i.e.
  it can be configured to half the number of CPUs on the agent.
  
  We use "ha_keepalived_state_change_server_threads" config option
  to configure number of threads in state change server instead of
  wsgi_default_pool_size.
  
  [1] https://review.openstack.org/#/c/278007/
  
  DocImpact: Add new config option -
  ha_keepalived_state_change_server_threads, to configure number
  of threads in state change server.
  
  Closes-Bug: #1581580
  Change-Id: I822ea3844792a7731fd24419b7e90e5aef141993
  (cherry picked from commit 70ea188f5d87c45fb60ace8b8405274e5f6dd489)

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1631374/+subscriptions



[Yahoo-eng-team] [Bug 1631375] [NEW] New option for num_threads for state change server

2016-10-07 Thread OpenStack Infra
Public bug reported:

https://review.openstack.org/379580
Dear bug triager. This bug was created since a commit was marked with DOCIMPACT.
Your project "openstack/neutron" is set up so that we directly report the 
documentation bugs against it. If this needs changing, the docimpact-group 
option needs to be added for the project. You can ask the OpenStack infra team 
(#openstack-infra on freenode) for help if you need to.

commit 039ab16f4ea9308792d4168adb1d501310145346
Author: venkata anil 
Date:   Tue May 17 16:30:13 2016 +

New option for num_threads for state change server

Currently the max number of client connections (i.e. greenlets spawned
at a time) opened at any time by the WSGI server is set to 100 with
wsgi_default_pool_size[1].

This configuration may be fine for the neutron API server, but with
wsgi_default_pool_size(=100) requests the state change server
creates heavy CPU load on the agent.
So this server (which runs on the agents) needs a smaller value, i.e.
it can be configured to half the number of CPUs on the agent.

We use "ha_keepalived_state_change_server_threads" config option
to configure number of threads in state change server instead of
wsgi_default_pool_size.

[1] https://review.openstack.org/#/c/278007/

DocImpact: Add new config option -
ha_keepalived_state_change_server_threads, to configure number
of threads in state change server.

Closes-Bug: #1581580
Change-Id: I822ea3844792a7731fd24419b7e90e5aef141993
(cherry picked from commit 70ea188f5d87c45fb60ace8b8405274e5f6dd489)

** Affects: neutron
 Importance: Undecided
 Status: New


** Tags: doc neutron

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1631375

Title:
  New option for num_threads for state change server

Status in neutron:
  New

Bug description:
  https://review.openstack.org/379580
  Dear bug triager. This bug was created since a commit was marked with 
DOCIMPACT.
  Your project "openstack/neutron" is set up so that we directly report the 
documentation bugs against it. If this needs changing, the docimpact-group 
option needs to be added for the project. You can ask the OpenStack infra team 
(#openstack-infra on freenode) for help if you need to.

  commit 039ab16f4ea9308792d4168adb1d501310145346
  Author: venkata anil 
  Date:   Tue May 17 16:30:13 2016 +

  New option for num_threads for state change server
  
  Currently the maximum number of client connections (i.e. greenlets
  spawned at a time) opened at any time by the WSGI server is set to 100
  with wsgi_default_pool_size [1].
  
  This configuration may be fine for the neutron API server, but with
  wsgi_default_pool_size (=100) requests the state change server creates
  heavy CPU load on the agent. So this server (which runs on the agents)
  needs a smaller value, e.g. it can be configured to half the number of
  CPUs on the agent.
  
  We use "ha_keepalived_state_change_server_threads" config option
  to configure number of threads in state change server instead of
  wsgi_default_pool_size.
  
  [1] https://review.openstack.org/#/c/278007/
  
  DocImpact: Add new config option -
  ha_keepalived_state_change_server_threads, to configure number
  of threads in state change server.
  
  Closes-Bug: #1581580
  Change-Id: I822ea3844792a7731fd24419b7e90e5aef141993
  (cherry picked from commit 70ea188f5d87c45fb60ace8b8405274e5f6dd489)

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1631375/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1631371] [NEW] [RFE] Expose trunk details over metadata API

2016-10-07 Thread Bence Romsics
Public bug reported:

Enable bringup of subports via exposing trunk/subport details over
the metadata API

With the completion of the trunk port feature in Newton (Neutron
bp/vlan-aware-vms [1]), trunk and subports are now available. But the
bringup of the subports' VLAN interfaces inside an instance is not
automatic. In Newton there's no easy way to pass information about
the subports to the guest operating system. But using the metadata
API we can change this.

Problem Description
-------------------

To bring up (and/or tear down) a subport the guest OS

(a) must know the segmentation-type and segmentation-id of a subport
as set in 'openstack network trunk create/set --subport'

(b) must know the MAC address of a subport
as set in 'openstack port create'

(c) must know which vNIC the subport belongs to

(d) may need to know when were subports added or removed
(if they are added or removed during the lifetime of an instance)

Since subports do not have a corresponding vNIC, the approach used
for regular ports (with a vNIC) cannot work.

This write-up addresses problems (a), (b) and (c), but not (d).

Proposed Change
---------------

Here we propose a change involving both Nova and Neutron to expose
the information needed via the metadata API.

Information covering (a) and (b) is already available (read-only)
in the 'trunk_details' attribute of the trunk parent port (ie. the
port which the instance was booted with). [2]

We propose to use the MAC address of the trunk parent port to cover
(c). We recognize this may occasionally be problematic, because MAC
addresses (of ports belonging to different neutron networks) are not
guaranteed to be unique, therefore collision may happen. But this seems
to be a small price for avoiding the complexity of other solutions.

The mechanism would be the following. Let's suppose we have port0
which is a trunk parent port and instance0 was booted with '--nic
port-id=port0'. On every update of port0's trunk_details Neutron
constructs the following JSON structure:

PORT0-DETAILS = {
    "mac_address": PORT0-MAC-ADDRESS,
    "trunk_details": PORT0-TRUNK-DETAILS
}

Then Neutron sets a metadata key-value pair of instance0, equivalent
to the following nova command:

nova meta set instance0 trunk_details::PORT0-MAC-ADDRESS=PORT0-DETAILS

Nova in Newton limits meta values to <= 255 characters; this limit
must be raised. Assuming the current format of trunk_details, roughly
150 characters per subport are needed. Alternatively, meta values could
have unlimited length - at least for the service tenant used by
Neutron. (Though tenant-specific API validators may not be a good
idea.) The 'values' column of the 'instance_metadata' table should
be altered from VARCHAR(255) to TEXT() in a Nova DB migration.
(A slightly related bug report: [3])

A program could read
http://169.254.169.254/openstack/2016-06-30/meta_data.json and
bring up the subport VLAN interfaces accordingly. This program is
not covered here, however it is worth pointing out that it could be
called by cloud-init.
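
As a rough illustration of how such a guest-side program might consume the
proposed metadata, here is a sketch. The `trunk_details::<MAC>` key layout
and the `meta` structure follow this proposal and are not an existing API;
`subport_commands` and its arguments are hypothetical names.

```python
import json


def subport_commands(meta_data_json, nic_names_by_mac):
    """Yield `ip link` commands for each subport found in the metadata.

    meta_data_json: the body of meta_data.json as a string.
    nic_names_by_mac: mapping of lowercase parent-port MAC -> NIC name.
    """
    meta = json.loads(meta_data_json).get("meta", {})
    for key, value in meta.items():
        if not key.startswith("trunk_details::"):
            continue
        details = json.loads(value)
        # Problem (c): locate the vNIC via the parent port's MAC address.
        parent_nic = nic_names_by_mac[details["mac_address"].lower()]
        for sub in details["trunk_details"]["sub_ports"]:
            # Problem (a): segmentation type and id come from the metadata.
            if sub["segmentation_type"] == "vlan":
                yield ("ip link add link {parent} name {parent}.{vid} "
                       "type vlan id {vid}").format(
                           parent=parent_nic, vid=sub["segmentation_id"])
```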

Alternatives
------------


(1) The MAC address of a parent port can be reused for all its child
ports (when creating the child ports). Then VLAN subinterfaces
of a network interface will have the correct MAC address by
default. Segmentation type and ID can be shared in other ways, for
example as a VLAN plan embedded into a golden image. This approach
could even partially solve problem (d), however it cannot solve problem
(a) in the dynamic case. Use of this approach is currently blocked
by an openvswitch firewall driver bug. [4][5]

(2) Generate and inject a subport bringup script into the instance
as user data. Cannot handle subports added or removed after VM boot.

(3) An alternative solution to problem (c) could rely on the
preservation of ordering between NICs passed to nova boot and NICs
inside an instance. However this would turn the update of trunk_details
into an instance-level operation instead of the port-level operation
proposed here. Plus it would fail if this ordering is ever lost.

References
----------

[1] https://blueprints.launchpad.net/neutron/+spec/vlan-aware-vms
[2] 
https://review.openstack.org/#q,Id23ce8fc16c6ea6a405cb8febf8470a5bf3bcb43,n,z
[3] https://bugs.launchpad.net/nova/+bug/1117923
[4] https://bugs.launchpad.net/neutron/+bug/1626010
[5] https://bugs.launchpad.net/neutron/+bug/1593760

** Affects: neutron
 Importance: Undecided
 Status: New


** Tags: rfe trunk

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1631371

Title:
  [RFE] Expose trunk details over metadata API

Status in neutron:
  New

Bug description:
  Enable bringup of subports via exposing trunk/subport details over
  the metadata API

  With the completion of the trunk port feature in Newton (Neutron
  bp/vlan-aware-vms [1]), trunk and subports are now available. But the
  bringup of the su

[Yahoo-eng-team] [Bug 1531013] Re: Duplicate entries in FDB table

2016-10-07 Thread Ihar Hrachyshka
Since it was fixed in the kernel, I moved the bug to Won't Fix for
neutron, because no fix for Neutron is expected.

** Changed in: neutron
   Status: Confirmed => Won't Fix

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1531013

Title:
  Duplicate entries in FDB table

Status in neutron:
  Won't Fix

Bug description:
  Posting here, because I'm not sure of a better place at the moment.

  Environment: Juno
  OS: Ubuntu 14.04 LTS
  Plugin: ML2/LinuxBridge

  root@infra01_neutron_agents_container-4c850328:~# bridge -V
  bridge utility, 0.0
  root@infra01_neutron_agents_container-4c850328:~# ip -V
  ip utility, iproute2-ss131122
  root@infra01_neutron_agents_container-4c850328:~# uname -a
  Linux infra01_neutron_agents_container-4c850328 3.13.0-46-generic #79-Ubuntu 
SMP Tue Mar 10 20:06:50 UTC 2015 x86_64 x86_64 x86_64 GNU/Linux

  We recently discovered that across the environment (5 controller, 50+
  compute) there are (tens of) thousands of duplicate entries in the FDB
  table, but only for the 00:00:00:00:00:00 broadcast entries. This is
  in an environment of ~1600 instances, ~4,100 ports, and 80 networks.

  In this example, the number of duplicate FDB entries for this
  particular VTEP jumps wildly:

  root@infra01_neutron_agents_container-4c850328:~# bridge fdb show | grep 
"00:00:00:00:00:00 dev vxlan-10 dst 172.29.243.157" | wc -l
  1429
  root@infra01_neutron_agents_container-4c850328:~# bridge fdb show | grep 
"00:00:00:00:00:00 dev vxlan-10 dst 172.29.243.157" | wc -l
  81057
  root@infra01_neutron_agents_container-4c850328:~# bridge fdb show | grep 
"00:00:00:00:00:00 dev vxlan-10 dst 172.29.243.157" | wc -l
  25806
  root@infra01_neutron_agents_container-4c850328:~# bridge fdb show | grep 
"00:00:00:00:00:00 dev vxlan-10 dst 172.29.243.157" | wc -l
  473141
  root@infra01_neutron_agents_container-4c850328:~# bridge fdb show | grep 
"00:00:00:00:00:00 dev vxlan-10 dst 172.29.243.157" | wc -l
  225472

  That behavior can be observed for all other VTEPs. We're seeing over
  13 million total FDB entries on this node:

  root@infra01_neutron_agents_container-4c850328:~# bridge fdb show >> 
james_fdb2.txt
  root@infra01_neutron_agents_container-4c850328:~# cat james_fdb2.txt | wc -l
  13554258

  We're also seeing the wild counts on compute nodes. These were run
  within 1 second of the previous completion:

  root@compute032:~# bridge fdb show | wc -l
  898981
  root@compute032:~# bridge fdb show | wc -l
  734916
  root@compute032:~# bridge fdb show | wc -l
  1483081
  root@compute032:~# bridge fdb show | wc -l
  508811
  root@compute032:~# bridge fdb show | wc -l
  2349221

  On this node, you can see over 28,000 duplicates for each of the
  entries:

  root@compute032:~# bridge fdb show | sort | uniq -c | sort -nr
28871 00:00:00:00:00:00 dev vxlan-15 dst 172.29.240.39 self permanent
28871 00:00:00:00:00:00 dev vxlan-15 dst 172.29.240.38 self permanent
28870 00:00:00:00:00:00 dev vxlan-15 dst 172.29.243.252 self permanent
28870 00:00:00:00:00:00 dev vxlan-15 dst 172.29.243.157 self permanent
28870 00:00:00:00:00:00 dev vxlan-15 dst 172.29.243.133 self permanent
28870 00:00:00:00:00:00 dev vxlan-15 dst 172.29.242.66 self permanent
28870 00:00:00:00:00:00 dev vxlan-15 dst 172.29.242.193 self permanent
28870 00:00:00:00:00:00 dev vxlan-15 dst 172.29.240.60 self permanent
28870 00:00:00:00:00:00 dev vxlan-15 dst 172.29.240.59 self permanent
28870 00:00:00:00:00:00 dev vxlan-15 dst 172.29.240.58 self permanent
28870 00:00:00:00:00:00 dev vxlan-15 dst 172.29.240.57 self permanent
28870 00:00:00:00:00:00 dev vxlan-15 dst 172.29.240.55 self permanent
28870 00:00:00:00:00:00 dev vxlan-15 dst 172.29.240.54 self permanent
28870 00:00:00:00:00:00 dev vxlan-15 dst 172.29.240.53 self permanent
28870 00:00:00:00:00:00 dev vxlan-15 dst 172.29.240.51 self permanent
28870 00:00:00:00:00:00 dev vxlan-15 dst 172.29.240.50 self permanent
28870 00:00:00:00:00:00 dev vxlan-15 dst 172.29.240.49 self permanent
28870 00:00:00:00:00:00 dev vxlan-15 dst 172.29.240.48 self permanent
28870 00:00:00:00:00:00 dev vxlan-15 dst 172.29.240.47 self permanent
28870 00:00:00:00:00:00 dev vxlan-15 dst 172.29.240.46 self permanent
28870 00:00:00:00:00:00 dev vxlan-15 dst 172.29.240.45 self permanent
28870 00:00:00:00:00:00 dev vxlan-15 dst 172.29.240.44 self permanent
28870 00:00:00:00:00:00 dev vxlan-15 dst 172.29.240.43 self permanent
28870 00:00:00:00:00:00 dev vxlan-15 dst 172.29.240.42 self permanent
28870 00:00:00:00:00:00 dev vxlan-15 dst 172.29.240.40 self permanent
28870 00:00:00:00:00:00 dev vxlan-15 dst 172.29.240.37 self permanent
28870 00:00:00:00:00:00 dev vxlan-15 dst 172.29.240.36 self permanent
28870 00:00:00:00:00:00 dev vxlan-15 dst 172.29.240.35 self permanent
28870 00:00:00:00:00:00 dev vxlan

[Yahoo-eng-team] [Bug 1626490] Re: placement api root resource is only '/' not '' which can lead to unexpected 404s

2016-10-07 Thread OpenStack Infra
Reviewed:  https://review.openstack.org/374870
Committed: 
https://git.openstack.org/cgit/openstack/nova/commit/?id=f21e7e33298a359f54e4eb5c7089a6a4f13537ee
Submitter: Jenkins
Branch:master

commit f21e7e33298a359f54e4eb5c7089a6a4f13537ee
Author: Chris Dent 
Date:   Thu Sep 22 14:40:45 2016 +

[placement] Allow both /placement and /placement/ to work

When mounted under a prefix (such as /placement) the service was
only returning the home document at /placement/ not /placement. In
the context of having a prefix, we'd generally like the latter to
work and for '/' to work when there is no prefix. This allows both.

Note that this doesn't make /placement/resource_providers/ work, and
we don't want that.

Change-Id: I0ac92bf9982227d5f4915175182e5230aeb039b4
Closes-Bug: #1626490


** Changed in: nova
   Status: In Progress => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1626490

Title:
  placement api root resource is only '/' not '' which can lead to
  unexpected 404s

Status in OpenStack Compute (nova):
  Fix Released

Bug description:
  The placement api handler for root is defined as matching '/'. This
  works fine in those situations where the requested URL actually
  includes the '/' but in some situations it can be easy to not include
  it. If, for example, the API is mounted under a prefix like
  '/placement' (as in the default devstack setup) then it is possible to
  make a request to http://example.com/placement and get a 404 whereas
  http://example.com/placement/ will get the expected microversion
  information.

  This ought to be easy to fix and was basically an oversight.
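
The shape of the fix can be illustrated with a toy router: treat an empty
PATH_INFO (a request for the bare mount prefix) the same as '/'. This is a
sketch only, not the nova placement routing code; the handler names are
made up.

```python
def route(path_info):
    """Resolve a PATH_INFO to a handler, mapping '' onto the root resource.

    When an app is mounted under a prefix such as /placement, a request
    for the bare prefix arrives with an empty PATH_INFO, so routing only
    on '/' would yield a 404.
    """
    handlers = {"/": "home document", "/resource_providers": "listing"}
    if path_info == "":
        path_info = "/"
    return handlers.get(path_info, "404")
```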

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1626490/+subscriptions



[Yahoo-eng-team] [Bug 1626493] Re: placement api log entries have mismatched request ids

2016-10-07 Thread OpenStack Infra
Reviewed:  https://review.openstack.org/374833
Committed: 
https://git.openstack.org/cgit/openstack/nova/commit/?id=0b9b8981f2802f21999325e028605157f0b1f11d
Submitter: Jenkins
Branch:master

commit 0b9b8981f2802f21999325e028605157f0b1f11d
Author: Chris Dent 
Date:   Thu Sep 22 13:47:49 2016 +

[placement] reorder middleware to correct logging context

The initial bug was that the initial 'Starting' log provided by
requestlog had a different request id from the rest of the log
messages for the same request. The initial assumption was that this
was because a request id was not initially available, causing one
to be generated for the first log entry that later was replaced
by the request id middleware.

In the process of debugging that it became clear that the id was
in fact the request id of the previous request because the context
was being reused under the covers in oslo_log and oslo_context.

Therefore the auth, context and request id middlewares are now
changed to be active in the middleware stack before the request log
middleware. The unfortunate side effect of this is that the Starting
message and final request logging is no longer actually bounding the
full request: it misses three critical middlewares.

Change-Id: Ifa412973037193e4e67a0c9d2c71c7a4847980a9
Closes-Bug: #1626493


** Changed in: nova
   Status: In Progress => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1626493

Title:
  placement api log entries have mismatched request ids

Status in OpenStack Compute (nova):
  Fix Released

Bug description:
  The placement (v 1.0) API's service logs get confused about request
  id. The following is one request:

  ```
  2016-09-22 10:54:54.372 22919 DEBUG nova.api.openstack.placement.requestlog 
[req-93669abf-57c5-4415-a734-1affc816a9ae admin admin] Starting request: 
10.0.2.15 "GET /placement/resource_providers/fastidious" __call__ 
/opt/stack/nova/nova/api/openstack/placement/requestlog.py:36
  2016-09-22 10:54:54.414 22919 INFO nova.api.openstack.placement.requestlog 
[req-7312f440-2c66-4483-a514-eaf3602b50e6 admin admin] 10.0.2.15 "GET 
/placement/resource_providers/fastidious" status: 404 len: 80
  ```

  This is probably because the request id middleware is not being called
  first.
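
A toy model of why the ordering matters (these are not nova's actual
middleware classes; names and the shared-context dict are illustrative):
each middleware sees whatever request id is current when it runs, so the
request id middleware must wrap the request log middleware.

```python
import itertools

_req_ids = itertools.count(1)


def request_id_middleware(app, ctx):
    """Set a fresh request id on the shared context before calling app."""
    def wrapped(req):
        ctx["request_id"] = "req-%d" % next(_req_ids)
        return app(req)
    return wrapped


def request_log_middleware(app, ctx, log):
    """Log 'Starting'/'Finished' with whatever request id is current."""
    def wrapped(req):
        log.append(("Starting", ctx.get("request_id")))
        resp = app(req)
        log.append(("Finished", ctx.get("request_id")))
        return resp
    return wrapped


def handler(req):
    return "200 OK"


ctx, log = {}, []
# Corrected ordering: the request id middleware runs *before* the request
# log middleware, so even the 'Starting' line carries the fresh id. With
# the stack inverted, 'Starting' would log the previous request's id.
app = request_id_middleware(request_log_middleware(handler, ctx, log), ctx)
app("GET /a")
app("GET /b")
```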

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1626493/+subscriptions



[Yahoo-eng-team] [Bug 1631347] [NEW] need add context param to nova's confirm_migration function

2016-10-07 Thread jichenjc
Public bug reported:

A few nova virt calls have a 'context' param sent from the compute layer
to the virt layer, but confirm_migration does not have it. Sometimes we
might need to store some info in the hypervisor layer, so adding a
context param will help further work.
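
A minimal sketch of the requested change (the class and signature here are
illustrative, not nova's real virt driver interface): with `context`
threaded through, the hypervisor layer can use per-request information.

```python
class FakeVirtDriver:
    """Toy driver interface; names and return value are illustrative."""

    def confirm_migration(self, context, migration, instance, network_info):
        # With the request context available, the driver can persist
        # per-request info (shown here by echoing the request id back).
        return {"request_id": getattr(context, "request_id", None),
                "instance": instance}
```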

** Affects: nova
 Importance: Undecided
 Assignee: jichenjc (jichenjc)
 Status: New

** Changed in: nova
 Assignee: (unassigned) => jichenjc (jichenjc)

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1631347

Title:
  need add context param to nova's confirm_migration function

Status in OpenStack Compute (nova):
  New

Bug description:
  A few nova virt calls have a 'context' param sent from the compute layer
  to the virt layer, but confirm_migration does not have it. Sometimes we
  might need to store some info in the hypervisor layer, so adding a
  context param will help further work.

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1631347/+subscriptions



[Yahoo-eng-team] [Bug 1626496] Re: when placement API sends a 405 the header value is in the incorrect format

2016-10-07 Thread OpenStack Infra
Reviewed:  https://review.openstack.org/374800
Committed: 
https://git.openstack.org/cgit/openstack/nova/commit/?id=5fdb9226d269af069bfcbd42cff401c9d089b7dd
Submitter: Jenkins
Branch:master

commit 5fdb9226d269af069bfcbd42cff401c9d089b7dd
Author: Chris Dent 
Date:   Thu Sep 22 12:58:41 2016 +

[placement] ensure that allow headers are native strings

mod-wsgi checks that response header values are what's described
as "native strings". This means whatever `str` is in either
python 2 or 3, but never `unicode`. When they are not mod-wsgi
will 500. For the most part this is taken care of by webob, but in
the case of the 405 handling, the webob response is not being
fully massaged.

mod-wsgi is doing this because it is supposed to. Python WSGI server
gateways have different expectations of headers depending on whether
the Python is 2 or 3. See

https://www.python.org/dev/peps/pep-/#a-note-on-string-types

In addition to the unit test, the gabbi tests are now using a
version of wsgi-intercept that will raise a TypeError when the
application response headers are not using the correct form. This
check needs to be done in wsgi-intercept rather than the gabbi tests
because both wsgi-intercept and the http client making the requests
transform the headers for their own purposes.

This fix ensures that instead of a 500 the correct 405 response
happens.

Closes-Bug: #1626496
Depends-On: I3b8aabda929fe39b60e645abb6fabb9769554829
Change-Id: Ifa436e11e79adc2e159b4c5e7d3623d9a792b5f7


** Changed in: nova
   Status: In Progress => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1626496

Title:
  when placement API sends a 405 the header value is in the incorrect
  format

Status in OpenStack Compute (nova):
  Fix Released

Bug description:
  When the placement handlers raise an HTTPMethodNotAllowed response,
  the headers are set with the methods that are allowed. These must not
  be unicode (it's not clear how they become unicode (in 2.7) in the
  first place, but something is doing it, and that's not right),
  otherwise we get:

  ```
  2016-09-22 11:03:01.875 22919 ERROR nova.api.openstack.placement.handler 
HTTPMethodNotAllowed: The method specified is not allowed for this resource.
  2016-09-22 11:03:01.875 22919 ERROR nova.api.openstack.placement.handler 
  2016-09-22 11:03:01.877 22919 INFO nova.api.openstack.placement.requestlog 
[req-524fdd42-0f19-4eb3-827f-99ae22fc6dd9 admin admin] 10.0.2.15 "DELETE 
/placement/resource_providers" status: 405 len: 133
  mod_wsgi (pid=22919): Exception occurred processing WSGI script 
'/usr/local/bin/nova-placement-api'.
  TypeError: expected byte string object for header value, value of type 
unicode found
  ```

  wherein the service correctly tries to send a 405 but then the
  mod_wsgi server blows up on the data it is getting.
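
The general fix pattern can be sketched for Python 3 as follows (the real
code must also handle Python 2's `str`/`unicode` split; this helper is
illustrative, not nova's implementation): coerce header names and values
to native `str` before handing them to the gateway.

```python
def native_str_headers(headers):
    """Return (name, value) pairs coerced to native `str`.

    WSGI gateways such as mod_wsgi reject non-`str` header values, so
    bytes are decoded (headers are latin-1 by convention) and other
    types are stringified.
    """
    out = []
    for name, value in headers:
        if isinstance(name, bytes):
            name = name.decode("latin-1")
        elif not isinstance(name, str):
            name = str(name)
        if isinstance(value, bytes):
            value = value.decode("latin-1")
        elif not isinstance(value, str):
            value = str(value)
        out.append((name, value))
    return out
```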

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1626496/+subscriptions



[Yahoo-eng-team] [Bug 1630410] Re: fixed_ips list out of order

2016-10-07 Thread OpenStack Infra
Reviewed:  https://review.openstack.org/382121
Committed: 
https://git.openstack.org/cgit/openstack/neutron/commit/?id=bd1c454c4f6de69eec7b3814b90faeb4db371ba6
Submitter: Jenkins
Branch:master

commit bd1c454c4f6de69eec7b3814b90faeb4db371ba6
Author: Kevin Benton 
Date:   Tue Oct 4 18:38:07 2016 -0600

Deterministic ordering of fixed_ips

This adds an order_by clause to the fixed_ips relationship
on the port object to ensure that the fixed_ip ordering is
consistent between a create, an update, and a get request
for a port. Without it we were at the mercy of the sql backend
to determine how it felt like ordering them on the join condition.

Closes-Bug: #1630410
Change-Id: I523e0ab6e376f5ff6205b1cc1748aa6d546919cb


** Changed in: neutron
   Status: In Progress => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1630410

Title:
  fixed_ips list out of order

Status in neutron:
  Fix Released

Bug description:
  Change [1], led to failures like [2], in that the order of fixed_ips
  is no longer preserved between POST and GET requests. This was taken
  care for some other attributes of the Port resource like allowed
  address pairs, but not all.

  Even though the API is lax about the order of specific attributes, we
  should attempt to restore the old behavior to avoid more damaging side
  effects in clients that are assuming the list be returned in the order
  in which fixed IPs are created.

  [1] https://review.openstack.org/#/c/373582
  [2] 
http://logs.openstack.org/63/377163/4/check/gate-shade-dsvm-functional-neutron/e621e3d/console.html
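
A pure-Python analogue of the fix (the actual change adds an `order_by`
clause to the SQLAlchemy `fixed_ips` relationship; the dict rows and the
function name here are illustrative): order by primary key so that create,
update, and get requests all see the same list.

```python
def fixed_ips_in_creation_order(rows):
    """Return fixed-IP addresses ordered by primary key (creation order),
    instead of whatever order the SQL backend happened to join them in."""
    return [row["address"] for row in sorted(rows, key=lambda r: r["id"])]
```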

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1630410/+subscriptions



[Yahoo-eng-team] [Bug 1555065] Re: Image goes to saving state when we delete instance just after taking snapshot and remain the state forever

2016-10-07 Thread OpenStack Infra
Reviewed:  https://review.openstack.org/294513
Committed: 
https://git.openstack.org/cgit/openstack/nova/commit/?id=d8e695cb900ad71996e6cf970894bc2e3f2df8b4
Submitter: Jenkins
Branch:master

commit d8e695cb900ad71996e6cf970894bc2e3f2df8b4
Author: Prateek Arora 
Date:   Fri Mar 18 06:46:43 2016 -0400

Delete traces of in-progress snapshot on VM being deleted

When a user tries to create a snapshot of an instance and, at the same
time, another request tries to delete the instance, the image stays in
saving status forever because of a race between the instance delete and
snapshot requests.

Catch the exceptions (InstanceNotFound and UnexpectedDeletingTaskStateError)
in the except block and delete the image which got stuck in the saving
state.

Closes-Bug: #1555065
Change-Id: If0b918dc951030e6b6ffba147443225e0e4a370a
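
The shape of the fix can be sketched as follows. The exception name comes
from the commit message; the image-store plumbing is simulated with a plain
dict and is not nova's actual code.

```python
class InstanceNotFound(Exception):
    """Stand-in for nova's InstanceNotFound exception."""


def snapshot(instance_exists, images):
    """Simulated snapshot flow.

    The image record is created in 'saving' first; if the instance
    disappears underneath us, delete that record instead of leaving it
    in 'saving' forever.
    """
    images["img-1"] = "saving"
    try:
        if not instance_exists:
            raise InstanceNotFound()
        images["img-1"] = "active"
    except InstanceNotFound:
        # Clean up the image stuck in 'saving', then re-raise.
        del images["img-1"]
        raise
```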


** Changed in: nova
   Status: In Progress => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1555065

Title:
  Image goes to saving state when we delete instance just after taking
  snapshot and remain the state forever

Status in OpenStack Compute (nova):
  Fix Released

Bug description:
  Steps to reproduce

  1) nova image-create test test-img & nova delete test

  where test is the name of the instance

  I get the following message

  [stack@localhost compute]$ nova image-create test test-img & nova delete test
  [1] 2506
  ERROR (ClientException): Unexpected API Error. Please report this at 
http://bugs.launchpad.net/nova/ and attach the Nova API log if possible.
   (HTTP 500) 
(Request-ID: req-f699c746-0b29-4d84-b258-c8233d5b7a42)
  Request to delete server test has been accepted.
  [1]+  Exit 1  nova image-create test test-img

  In the nova API logs I can see the following stacktrace:

  'c59f52ce-b51f-433e-81b0-099e3b65b0a3', '89ba4861-99e2-4376-8015-3f5ac02538d0'] from (pid=12399) reserve /opt/stack/nova/nova/quota.py:1345
  2016-03-09 02:04:25.304 ERROR nova.api.openstack.extensions [req-f699c746-0b29-4d84-b258-c8233d5b7a42 admin admin] Unexpected exception in API method
  2016-03-09 02:04:25.304 TRACE nova.api.openstack.extensions Traceback (most recent call last):
  2016-03-09 02:04:25.304 TRACE nova.api.openstack.extensions   File "/opt/stack/nova/nova/api/openstack/extensions.py", line 478, in wrapped
  2016-03-09 02:04:25.304 TRACE nova.api.openstack.extensions     return f(*args, **kwargs)
  2016-03-09 02:04:25.304 TRACE nova.api.openstack.extensions   File "/opt/stack/nova/nova/api/openstack/common.py", line 391, in inner
  2016-03-09 02:04:25.304 TRACE nova.api.openstack.extensions     return f(*args, **kwargs)
  2016-03-09 02:04:25.304 TRACE nova.api.openstack.extensions   File "/opt/stack/nova/nova/api/validation/__init__.py", line 73, in wrapper
  2016-03-09 02:04:25.304 TRACE nova.api.openstack.extensions     return func(*args, **kwargs)
  2016-03-09 02:04:25.304 TRACE nova.api.openstack.extensions   File "/opt/stack/nova/nova/api/validation/__init__.py", line 73, in wrapper
  2016-03-09 02:04:25.304 TRACE nova.api.openstack.extensions     return func(*args, **kwargs)
  2016-03-09 02:04:25.304 TRACE nova.api.openstack.extensions   File "/opt/stack/nova/nova/api/openstack/compute/servers.py", line 1113, in _action_create_image
  2016-03-09 02:04:25.304 TRACE nova.api.openstack.extensions     extra_properties=metadata)
  2016-03-09 02:04:25.304 TRACE nova.api.openstack.extensions   File "/opt/stack/nova/nova/compute/api.py", line 169, in wrapped
  2016-03-09 02:04:25.304 TRACE nova.api.openstack.extensions     return func(self, context, target, *args, **kwargs)
  2016-03-09 02:04:25.304 TRACE nova.api.openstack.extensions   File "/opt/stack/nova/nova/compute/api.py", line 186, in _wrapped
  2016-03-09 02:04:25.304 TRACE nova.api.openstack.extensions     return fn(self, context, instance, *args, **kwargs)
  2016-03-09 02:04:25.304 TRACE nova.api.openstack.extensions   File "/opt/stack/nova/nova/compute/api.py", line 139, in inner
  2016-03-09 02:04:25.304 TRACE nova.api.openstack.extensions     return f(self, context, instance, *args, **kw)
  2016-03-09 02:04:25.304 TRACE nova.api.openstack.extensions   File "/opt/stack/nova/nova/compute/api.py", line 2238, in snapshot
  2016-03-09 02:04:25.304 TRACE nova.api.openstack.extensions

[Yahoo-eng-team] [Bug 1628892] Re: Admin network panel throws something went wrong page

2016-10-07 Thread Rob Cresswell
This was fixed in Newton by https://review.openstack.org/#/c/325670, so
you can just backport it cleanly.

** Changed in: horizon
Milestone: next => None

** Changed in: horizon
   Status: New => Fix Released

** Tags added: mitaka-backport-potential

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1628892

Title:
  Admin network panel throws something went wrong page

Status in OpenStack Dashboard (Horizon):
  Fix Released

Bug description:
  When the neutron service is down, if a user tries to visit the networks
  panel in the admin dashboard it shows the "something went wrong" page.

  However, the networks panel in the project view throws a pop-up saying
  "Unable to retrieve networks" but still shows the page without showing
  the "something went wrong" page.

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1628892/+subscriptions



[Yahoo-eng-team] [Bug 1554226] Re: Clean up warnings about enginefacade

2016-10-07 Thread Markus Zoeller (markus_z)
As we use the "direct-release" model in Nova we don't use the
"Fix Committed" status for merged bug fixes anymore. I'm setting
this manually to "Fix Released" to be consistent.

[1] "[openstack-dev] [release][all] bugs will now close automatically
when patches merge"; Doug Hellmann; 2015-12-07;
http://lists.openstack.org/pipermail/openstack-dev/2015-December/081612.html

** Changed in: nova
   Status: Fix Committed => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1554226

Title:
  Clean up warnings about enginefacade

Status in OpenStack Compute (nova):
  Fix Released

Bug description:
  You see a bunch of the following in Nova's test runs:

  Captured stderr:
  
  
/home/jaypipes/repos/nova/.tox/py27/local/lib/python2.7/site-packages/oslo_db/sqlalchemy/enginefacade.py:254:
 OsloDBDeprecationWarning: EngineFacade is deprecated; please use 
oslo_db.sqlalchemy.enginefacade
self._legacy_facade = LegacyEngineFacade(None, _factory=self)

  We should use oslo_db.sqlalchemy.enginefacade now in all cases.

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1554226/+subscriptions



[Yahoo-eng-team] [Bug 1586309] Re: Delete a instance after this instance resized failed, source resource is not cleared.

2016-10-07 Thread Markus Zoeller (markus_z)
As we use the "direct-release" model in Nova we don't use the
"Fix Committed" status for merged bug fixes anymore. I'm setting
this manually to "Fix Released" to be consistent.

[1] "[openstack-dev] [release][all] bugs will now close automatically
when patches merge"; Doug Hellmann; 2015-12-07;
http://lists.openstack.org/pipermail/openstack-dev/2015-December/081612.html

** Changed in: nova
   Status: Fix Committed => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1586309

Title:
  Deleting an instance after a failed resize does not clear source
  resources

Status in OpenStack Compute (nova):
  Fix Released

Bug description:
  Environment
  ===
  stable/mitaka

  Steps to reproduce
  ==
  * I booted an instance on compute node SBCJSlot5Rack2Centos7; the instance
    uuid was 00bc72d0-0778-4e69-bfee-b58b87dd1532.
  * I then resized this instance; the resize failed in the finish_resize
    function on the destination compute node SBCJSlot3Rack2Centos7.

  [stack@SBCJSlot5Rack2Centos7 ~]$ openstack server show 00bc72d0-0778-4e69-bfee-b58b87dd1532
  
  +--------------------------------------+-------------------------------------------------+
  | Field                                | Value                                           |
  +--------------------------------------+-------------------------------------------------+
  | OS-DCF:diskConfig                    | AUTO                                            |
  | OS-EXT-AZ:availability_zone          | nova                                            |
  | OS-EXT-SRV-ATTR:host                 | SBCJSlot3Rack2Centos7                           |
  | OS-EXT-SRV-ATTR:hypervisor_hostname  | SBCJSlot3Rack2Centos7                           |
  | OS-EXT-SRV-ATTR:instance_name        | instance-0014                                   |
  | OS-EXT-STS:power_state               | 1                                               |
  | OS-EXT-STS:task_state                | None                                            |
  | OS-EXT-STS:vm_state                  | error                                           |
  | OS-SRV-USG:launched_at               | 2016-05-27T02:28:07.00                          |
  | OS-SRV-USG:terminated_at             | None                                            |
  | accessIPv4                           |                                                 |
  | accessIPv6                           |                                                 |
  | addresses                            | public=2001:db8::6, 10.43.239.76                |
  | config_drive                         | True                                            |
  | created                              | 2016-05-27T02:27:56Z                            |
  | fault                                | {u'message': u'Unexpected                       |
  |                                      | vif_type=binding_failed', u'code': 500,         |
  |                                      | u'details': u'  File                            |
  |                                      | "/opt/stack/nova/nova/compute/manager.py",      |
  |                                      | line 375, in decorated_function\n    return     |
  |                                      | function(self, context, *args, **kwargs)\n      |
  |                                      | File "/opt/stack/nova/nova/compute/manager.py", |
  |                                      | line 4054, in finish_resize\n                   |
  |                                      | self._set_instance_obj_error_state(context,     |
  |                                      | instan

[Yahoo-eng-team] [Bug 1589502] Re: Request Mitaka release for networking-bagpipe

2016-10-07 Thread Thomas Morin
** Changed in: networking-bagpipe
   Status: New => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1589502

Title:
  Request Mitaka release for networking-bagpipe

Status in BaGPipe:
  Fix Released
Status in neutron:
  Fix Released

Bug description:
  Can you please do a release of networking-bagpipe from the master branch?

  Commit: 870d281eeb707fbb6c4de431d764cebb586f872e
  Version: 4.0.0 (first release, but the number was chosen to be in sync
  with networking-bgpvpn)

To manage notifications about this bug go to:
https://bugs.launchpad.net/networking-bagpipe/+bug/1589502/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1628135] Re: Integrate Identity back end with LDAP in Administrator Guide

2016-10-07 Thread Steve Martinelli
** Changed in: keystone
   Status: Triaged => Invalid

** Changed in: keystone
Milestone: ocata-1 => None

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Identity (keystone).
https://bugs.launchpad.net/bugs/1628135

Title:
  Integrate Identity back end with LDAP in Administrator Guide

Status in OpenStack Identity (keystone):
  Invalid
Status in openstack-manuals:
  Fix Released

Bug description:
  The guide states that both "keystone.identity.backends.ldap.Identity"
  and "user_attribute_ignore" can be used, but as you can see below this
  is deprecated (and in fact not working in current Newton).

  
  ==> keystone-apache-admin-error.log <==
  2016-09-27 10:35:13.891436 2016-09-27 10:35:13.890 24 WARNING stevedore.named 
[req-01e2b673-51b3-4364-a131-ad0bb8c78e01 - - - - -] Could not load 
keystone.identity.backends.ldap.Identity

  ==> keystone.log <==
  2016-09-27 10:35:13.914 24 WARNING oslo_config.cfg 
[req-01e2b673-51b3-4364-a131-ad0bb8c78e01 - - - - -] Option 
"user_attribute_ignore" from group "ldap" is deprecated for removal.  Its value 
may be silently ignored in the future.

  ==> keystone-apache-admin-error.log <==
  2016-09-27 10:35:13.914683 2016-09-27 10:35:13.914 24 WARNING oslo_config.cfg 
[req-01e2b673-51b3-4364-a131-ad0bb8c78e01 - - - - -] Option 
"user_attribute_ignore" from group "ldap" is deprecated for removal.  Its value 
may be silently ignored in the future.
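
For reference, the non-deprecated way to select the LDAP identity backend
uses the short stevedore entrypoint name rather than the full class path; a
sketch of the relevant keystone.conf fragment (the URL, suffix, and attribute
values are illustrative placeholders):

```ini
[identity]
# Short entrypoint name; "keystone.identity.backends.ldap.Identity" is deprecated.
driver = ldap

[ldap]
url = ldap://ldap.example.com
suffix = dc=example,dc=com
# "user_attribute_ignore" is deprecated for removal; prefer explicit
# attribute mappings instead, e.g.:
user_id_attribute = cn
user_name_attribute = sn
```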

  
  ---
  Release: 0.9 on 2016-09-27 12:00
  SHA: 974a8b3e88ffdda8b621a6befc124d4f9ca9bdc7
  Source: 
http://git.openstack.org/cgit/openstack/openstack-manuals/tree/doc/admin-guide/source/keystone-integrate-identity-backend-ldap.rst
  URL: 
http://docs.openstack.org/admin-guide/keystone-integrate-identity-backend-ldap.html

To manage notifications about this bug go to:
https://bugs.launchpad.net/keystone/+bug/1628135/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1622644] Re: OVS agent ryu/native implementation breaks non-OF1.3 uses

2016-10-07 Thread Thomas Morin
** Changed in: networking-bagpipe
   Status: New => Fix Released

** Changed in: networking-bagpipe
   Importance: Undecided => High

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1622644

Title:
  OVS agent ryu/native implementation breaks non-OF1.3 uses

Status in networking-bgpvpn:
  Fix Released
Status in BaGPipe:
  Fix Released
Status in neutron:
  Fix Released

Bug description:
  The ryu-based OVS agent variant forces the bridge OpenFlow version to 1.3
  only [1], which breaks a few things:

  - troubleshooting tools relying on ovs-ofctl will break unless they
    specify "-O OpenFlow13":
      version negotiation failed (we support version 0x01, peer supports version 0x04)
      ovs-ofctl: br-tun: failed to connect to socket (Broken pipe)

  - calling add_flow on an OVSCookieBridge derived from a bridge that is
    a native.ovs_bridge.OVSAgentBridge will fail with the same error,
    because add_flow calls ovs-ofctl without specifying "-O OpenFlow13"
    (this issue is currently hitting networking-bgpvpn: [2])

  It seems like a possible fix would be to not restrict the set of
  OpenFlow versions supported by the bridge to OpenFlow13, but to just
  *add* OpenFlow13 to the set of supported versions.
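
The difference can be reproduced by hand with the OVS CLI; a sketch (the
bridge name br-tun is assumed, and the commands need a live Open vSwitch):

```shell
# What the agent currently does: restrict the bridge to OpenFlow 1.3 only.
ovs-vsctl set bridge br-tun protocols=OpenFlow13

# Plain ovs-ofctl speaks OpenFlow 1.0 by default, so negotiation now fails;
# only the explicit-version form works.
ovs-ofctl dump-flows br-tun
ovs-ofctl -O OpenFlow13 dump-flows br-tun

# The suggested fix: advertise 1.3 in addition to, not instead of, 1.0.
ovs-vsctl set bridge br-tun protocols=OpenFlow10,OpenFlow13
```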

  [1] 
https://github.com/openstack/neutron/blob/master/neutron/plugins/ml2/drivers/openvswitch/agent/openflow/native/ovs_bridge.py#L78
  [2] 
https://github.com/openstack/networking-bagpipe/blob/master/networking_bagpipe/agent/bagpipe_bgp_agent.py#L512

To manage notifications about this bug go to:
https://bugs.launchpad.net/bgpvpn/+bug/1622644/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1628135] Re: Integrate Identity back end with LDAP in Administrator Guide

2016-10-07 Thread OpenStack Infra
Reviewed:  https://review.openstack.org/382634
Committed: 
https://git.openstack.org/cgit/openstack/openstack-manuals/commit/?id=5674db0f24f83f98724c60ec3433baa35d3c05e8
Submitter: Jenkins
Branch: master

commit 5674db0f24f83f98724c60ec3433baa35d3c05e8
Author: Eric Brown 
Date:   Wed Oct 5 12:29:51 2016 -0700

Update identity section of admin guide

A number of configuration options stated in the identity admin
guides are deprecated or since removed. This updates to the latest
config and fixes the referenced bug.

The keystone-integrate-assignment-backend-ldap page was also removed
since this is no longer supported by keystone.

Change-Id: I854b2d1d9e1ad8ea88a6e30a15905e332663c7b3
Closes-Bug: #1628135


** Changed in: openstack-manuals
   Status: In Progress => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Identity (keystone).
https://bugs.launchpad.net/bugs/1628135

Title:
  Integrate Identity back end with LDAP in Administrator Guide

Status in OpenStack Identity (keystone):
  Triaged
Status in openstack-manuals:
  Fix Released

Bug description:
  The guide states that both "keystone.identity.backends.ldap.Identity"
  and "user_attribute_ignore" can be used, but as you can see below this
  is deprecated (and in fact not working in current Newton).

  
  ==> keystone-apache-admin-error.log <==
  2016-09-27 10:35:13.891436 2016-09-27 10:35:13.890 24 WARNING stevedore.named 
[req-01e2b673-51b3-4364-a131-ad0bb8c78e01 - - - - -] Could not load 
keystone.identity.backends.ldap.Identity

  ==> keystone.log <==
  2016-09-27 10:35:13.914 24 WARNING oslo_config.cfg 
[req-01e2b673-51b3-4364-a131-ad0bb8c78e01 - - - - -] Option 
"user_attribute_ignore" from group "ldap" is deprecated for removal.  Its value 
may be silently ignored in the future.

  ==> keystone-apache-admin-error.log <==
  2016-09-27 10:35:13.914683 2016-09-27 10:35:13.914 24 WARNING oslo_config.cfg 
[req-01e2b673-51b3-4364-a131-ad0bb8c78e01 - - - - -] Option 
"user_attribute_ignore" from group "ldap" is deprecated for removal.  Its value 
may be silently ignored in the future.

  
  ---
  Release: 0.9 on 2016-09-27 12:00
  SHA: 974a8b3e88ffdda8b621a6befc124d4f9ca9bdc7
  Source: 
http://git.openstack.org/cgit/openstack/openstack-manuals/tree/doc/admin-guide/source/keystone-integrate-identity-backend-ldap.rst
  URL: 
http://docs.openstack.org/admin-guide/keystone-integrate-identity-backend-ldap.html

To manage notifications about this bug go to:
https://bugs.launchpad.net/keystone/+bug/1628135/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1631292] Re: networking-bagpipe stable/newton branch creation

2016-10-07 Thread Thomas Morin
** Description changed:

- We would like to fork the stable/newton of networking-bagpipe from
- commit 4b6673268664f3ca92e25ce7bd4fa6ad91ae5e40 .
+ We would like to fork the stable/newton branch of networking-bagpipe
+ from commit 4b6673268664f3ca92e25ce7bd4fa6ad91ae5e40 .
  
  Thanks!

** Also affects: networking-bagpipe
   Importance: Undecided
   Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1631292

Title:
  networking-bagpipe stable/newton branch creation

Status in BaGPipe:
  New
Status in neutron:
  New

Bug description:
  We would like to fork the stable/newton branch of networking-bagpipe
  from commit 4b6673268664f3ca92e25ce7bd4fa6ad91ae5e40 .

  Thanks!

To manage notifications about this bug go to:
https://bugs.launchpad.net/networking-bagpipe/+bug/1631292/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1631292] [NEW] networking-bagpipe stable/newton branch creation

2016-10-07 Thread Thomas Morin
Public bug reported:

We would like to fork the stable/newton branch of networking-bagpipe
from commit 4b6673268664f3ca92e25ce7bd4fa6ad91ae5e40 .

Thanks!

** Affects: networking-bagpipe
 Importance: Undecided
 Status: New

** Affects: neutron
 Importance: Undecided
 Status: New


** Tags: release-subproject

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1631292

Title:
  networking-bagpipe stable/newton branch creation

Status in BaGPipe:
  New
Status in neutron:
  New

Bug description:
  We would like to fork the stable/newton branch of networking-bagpipe
  from commit 4b6673268664f3ca92e25ce7bd4fa6ad91ae5e40 .

  Thanks!

To manage notifications about this bug go to:
https://bugs.launchpad.net/networking-bagpipe/+bug/1631292/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp