[Yahoo-eng-team] [Bug 1580437] Re: [RFE]: neutron purge- shared network without ports in non owner tenant

2016-09-06 Thread Reedip
** Project changed: neutron => python-neutronclient

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1580437

Title:
  [RFE]: neutron purge- shared network without ports in non owner tenant

Status in python-neutronclient:
  New

Bug description:
  When we execute "neutron purge tenantID" on a tenant who has shared
  its network, and that network is in use by another tenant, the network
  will not be deleted.

  When we execute "neutron purge tenantID" on a tenant who has shared
  its network, and that network is NOT in use by another tenant (seen by
  it but no ports attached to VMs nor routers), the network will  be
  deleted without any warning message to the user.

  We believe that there should be a message verifying the action:

  Something like: "This network may be used by tenants; are you sure you want to delete this network?"
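
  A minimal sketch of such a prompt, assuming a hypothetical helper inside
  the purge command; the function name and tenant-list parameter are
  illustrative, not the actual python-neutronclient API:

      def confirm_shared_network_delete(network_name, tenant_ids):
          prompt = ("Network %s may be used by tenants %s. "
                    "Are you sure you want to delete this network? [y/N] "
                    % (network_name, ", ".join(tenant_ids)))
          try:
              answer = raw_input(prompt)  # Python 2, as supported at the time
          except NameError:
              answer = input(prompt)      # Python 3
          return answer.strip().lower() in ('y', 'yes')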

  
  BR 
  Alex

To manage notifications about this bug go to:
https://bugs.launchpad.net/python-neutronclient/+bug/1580437/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1607381] Re: HA router in l3 dvr_snat/legacy agent has no ha_port

2016-09-06 Thread OpenStack Infra
Reviewed:  https://review.openstack.org/365653
Committed: 
https://git.openstack.org/cgit/openstack/neutron/commit/?id=29cec0345617627b64a73b9de35c46bccdc4ffa3
Submitter: Jenkins
Branch: master

commit 29cec0345617627b64a73b9de35c46bccdc4ffa3
Author: John Schwarz 
Date:   Mon Sep 5 16:34:44 2016 +0300

l3 ha: don't send routers without '_ha_interface'

Change I22ff5a5a74527366da8f82982232d4e70e455570 changed
get_ha_sync_data_for_host such that if an agent requests a router's
details, then it is always returned, even when it doesn't have the key
'_ha_interface'. Further changes to this change tried to put this check
back in (Ie38baf061d678fc5d768195b25241efbad74e42f), but this patch
failed to do so for the case where no bindings were returned (possible
when the router has been concurrently deleted). This patch puts this
check back in.

Closes-Bug: #1607381
Change-Id: I047e53ea9b3e20a21051f29d0a44624e2a31c83c
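
A minimal sketch of the restored guard described in the commit message,
assuming the sync data is a list of router dicts; the function name is
paraphrased from the commit message, not copied from the tree:

    def filter_ha_routers(routers):
        # Skip HA routers whose '_ha_interface' key is missing, e.g. when
        # the router or its bindings were deleted concurrently.
        return [r for r in routers
                if not r.get('ha') or '_ha_interface' in r]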


** Changed in: neutron
   Status: In Progress => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1607381

Title:
  HA router in l3 dvr_snat/legacy agent has no ha_port

Status in neutron:
  Fix Released

Bug description:
  This is a successor to
  https://bugs.launchpad.net/neutron/+bug/1533441.

  HA router can not be deleted in L3 agent after race between HA router
  creating and deleting

  Exceptions:
  1. "Unable to process HA router %s without HA port" (during HA router
     initialization)
  2. AttributeError: 'NoneType' object has no attribute 'config' (during
     the HA router deletion procedure)

  http://paste.openstack.org/show/523757/

  The absence of the ha_port may also cause an infinite loop trace, which
  is now tracked in a new LP bug, https://bugs.launchpad.net/neutron/+bug/1606844:
  http://paste.openstack.org/show/528407/

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1607381/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1618319] Re: Invalid service catalog service: network error

2016-09-06 Thread muralidharan
This was an issue with the configuration: two services had been created
for neutron, which is why I was getting this error. After deleting the
second, previously created service, I can see that it works fine.
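
A hedged sketch of how to spot the duplicate catalog entry that caused
this, using a keystoneclient v3 session (the auth values are
placeholders):

    from keystoneauth1.identity import v3
    from keystoneauth1 import session
    from keystoneclient.v3 import client

    auth = v3.Password(auth_url='http://controller:35357/v3',
                       username='admin', password='secret',
                       project_name='admin',
                       user_domain_id='default',
                       project_domain_id='default')
    ks = client.Client(session=session.Session(auth=auth))

    # Expect exactly one service of type 'network' in the catalog.
    network_services = [s for s in ks.services.list() if s.type == 'network']
    if len(network_services) > 1:
        print('Duplicate network services: %s'
              % [s.id for s in network_services])
        # The fix described above: delete the extra service, e.g.
        # ks.services.delete(network_services[1].id)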

** Changed in: horizon
   Status: New => Invalid

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1618319

Title:
  Invalid service catalog service: network error

Status in OpenStack Dashboard (Horizon):
  Invalid

Bug description:
  When I am trying to access the dashboard from horizon it is working
  fine for the keystone, glance etc.,

  But Incase of Neutron I am always getting error.

  Tried accessing the network section from both the admin and project
  tab. Both return the error as follows:

  [Tue Aug 30 05:45:23.039423 2016] [:error] [pid 3524:tid 140344693204736] 
Login successful for user "admin".
  [Tue Aug 30 05:45:23.060751 2016] [:error] [pid 3525:tid 140344693204736] 
Internal Server Error: /horizon/admin/networks/
  [Tue Aug 30 05:45:23.060783 2016] [:error] [pid 3525:tid 140344693204736] 
Traceback (most recent call last):
  [Tue Aug 30 05:45:23.060786 2016] [:error] [pid 3525:tid 140344693204736]   
File "/usr/lib/python2.7/dist-packages/django/core/handlers/base.py", line 132, 
in get_response
  [Tue Aug 30 05:45:23.060789 2016] [:error] [pid 3525:tid 140344693204736] 
response = wrapped_callback(request, *callback_args, **callback_kwargs)
  [Tue Aug 30 05:45:23.060791 2016] [:error] [pid 3525:tid 140344693204736]   
File 
"/usr/share/openstack-dashboard/openstack_dashboard/wsgi/../../horizon/decorators.py",
 line 36, in dec
  [Tue Aug 30 05:45:23.060793 2016] [:error] [pid 3525:tid 140344693204736] 
return view_func(request, *args, **kwargs)
  [Tue Aug 30 05:45:23.060795 2016] [:error] [pid 3525:tid 140344693204736]   
File 
"/usr/share/openstack-dashboard/openstack_dashboard/wsgi/../../horizon/decorators.py",
 line 84, in dec
  [Tue Aug 30 05:45:23.060797 2016] [:error] [pid 3525:tid 140344693204736] 
return view_func(request, *args, **kwargs)
  [Tue Aug 30 05:45:23.060799 2016] [:error] [pid 3525:tid 140344693204736]   
File 
"/usr/share/openstack-dashboard/openstack_dashboard/wsgi/../../horizon/decorators.py",
 line 52, in dec
  [Tue Aug 30 05:45:23.060801 2016] [:error] [pid 3525:tid 140344693204736] 
return view_func(request, *args, **kwargs)
  [Tue Aug 30 05:45:23.060803 2016] [:error] [pid 3525:tid 140344693204736]   
File 
"/usr/share/openstack-dashboard/openstack_dashboard/wsgi/../../horizon/decorators.py",
 line 36, in dec
  [Tue Aug 30 05:45:23.060805 2016] [:error] [pid 3525:tid 140344693204736] 
return view_func(request, *args, **kwargs)
  [Tue Aug 30 05:45:23.060807 2016] [:error] [pid 3525:tid 140344693204736]   
File 
"/usr/share/openstack-dashboard/openstack_dashboard/wsgi/../../horizon/decorators.py",
 line 84, in dec
  [Tue Aug 30 05:45:23.060809 2016] [:error] [pid 3525:tid 140344693204736] 
return view_func(request, *args, **kwargs)
  [Tue Aug 30 05:45:23.060821 2016] [:error] [pid 3525:tid 140344693204736]   
File "/usr/lib/python2.7/dist-packages/django/views/generic/base.py", line 71, 
in view
  [Tue Aug 30 05:45:23.060823 2016] [:error] [pid 3525:tid 140344693204736] 
return self.dispatch(request, *args, **kwargs)
  [Tue Aug 30 05:45:23.060825 2016] [:error] [pid 3525:tid 140344693204736]   
File "/usr/lib/python2.7/dist-packages/django/views/generic/base.py", line 89, 
in dispatch
  [Tue Aug 30 05:45:23.060828 2016] [:error] [pid 3525:tid 140344693204736] 
return handler(request, *args, **kwargs)
  [Tue Aug 30 05:45:23.060830 2016] [:error] [pid 3525:tid 140344693204736]   
File 
"/usr/share/openstack-dashboard/openstack_dashboard/wsgi/../../horizon/tables/views.py",
 line 159, in get
  [Tue Aug 30 05:45:23.060831 2016] [:error] [pid 3525:tid 140344693204736] 
handled = self.construct_tables()
  [Tue Aug 30 05:45:23.060833 2016] [:error] [pid 3525:tid 140344693204736]   
File 
"/usr/share/openstack-dashboard/openstack_dashboard/wsgi/../../horizon/tables/views.py",
 line 142, in construct_tables
  [Tue Aug 30 05:45:23.060836 2016] [:error] [pid 3525:tid 140344693204736] 
tables = self.get_tables().values()
  [Tue Aug 30 05:45:23.060838 2016] [:error] [pid 3525:tid 140344693204736]   
File 
"/usr/share/openstack-dashboard/openstack_dashboard/wsgi/../../horizon/tables/views.py",
 line 198, in get_tables
  [Tue Aug 30 05:45:23.060840 2016] [:error] [pid 3525:tid 140344693204736] 
self._tables[self.table_class._meta.name] = self.get_table()
  [Tue Aug 30 05:45:23.060843 2016] [:error] [pid 3525:tid 140344693204736]   
File 
"/usr/share/openstack-dashboard/openstack_dashboard/wsgi/../../horizon/tables/views.py",
 line 209, in get_table
  [Tue Aug 30 05:45:23.060846 2016] [:error] [pid 3525:tid 140344693204736] 
self.table = 

[Yahoo-eng-team] [Bug 1620910] [NEW] Router is created successfully with spaces given as name

2016-09-06 Thread SREELAKSHMI PENTA
Public bug reported:

Steps to reproduce:
1. Go to Network -> Routers -> Create Router
2. Enter spaces as router name
3. Submit 

Existing result:
Router is created successfully

Expected result:
A validation error should be raised stating that the router needs a name

** Affects: horizon
 Importance: Undecided
 Assignee: surekha (surekha23)
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1620910

Title:
  Router is created successfully with spaces given as name

Status in OpenStack Dashboard (Horizon):
  New

Bug description:
  Steps to reproduce:
  1. Go to Network -> Routers -> Create Router
  2. Enter spaces as router name
  3. Submit 

  Existing result:
  Router is created successfully

  Expected result:
  A validation error should be raised stating that the router needs a name
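
  A minimal sketch of the kind of check being requested, as a Django form
  validator; the form and field names are illustrative, not Horizon's
  actual code:

      from django import forms
      from django.utils.translation import ugettext_lazy as _

      class CreateRouterForm(forms.Form):
          name = forms.CharField(max_length=255, label=_("Router Name"))

          def clean_name(self):
              # Reject names that are empty once whitespace is stripped.
              name = self.cleaned_data['name']
              if not name.strip():
                  raise forms.ValidationError(
                      _("Router name must not be blank."))
              return name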

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1620910/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1112634] Re: allow RPC version evolves independently for each of features

2016-09-06 Thread Launchpad Bug Tracker
[Expired for neutron because there has been no activity for 60 days.]

** Changed in: neutron
   Status: Incomplete => Expired

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1112634

Title:
  allow RPC version evolves independently for each of features

Status in neutron:
  Expired

Bug description:
  A callback class that includes all of the features blocks RPC versions
  from evolving properly.

  For example, in the plugin:

  class LinuxBridgeRpcCallbacks(dhcp_rpc_base.DhcpRpcCallbackMixin,
                                l3_rpc_base.L3RpcCallbackMixin,
                                sg_db_rpc.SecurityGroupServerRpcCallbackMixin):

      RPC_API_VERSION = '1.1'
      # Device names start with "tap"
      # history
      #   1.1 Support Security Group RPC

  On the agent side:

  linux_bridge_agent:

  class LinuxBridgePluginApi(agent_rpc.PluginApi,
                             sg_rpc.SecurityGroupServerRpcApiMixin):
      pass

  DHCP agent (DhcpPluginApi):

      BASE_RPC_API_VERSION = '1.0'

  L3 agent:

  class L3PluginApi(proxy.RpcProxy):
      BASE_RPC_API_VERSION = '1.0'

  If I want to add a function to L3RpcCallbackMixin, I have to bump
  RPC_API_VERSION to 1.2, and L3PluginApi's version to 1.2 too.

  However, in the OVS plugin:

  class OVSRpcCallbacks(dhcp_rpc_base.DhcpRpcCallbackMixin,
                        l3_rpc_base.L3RpcCallbackMixin):

      # Set RPC API version to 1.0 by default.
      RPC_API_VERSION = '1.0'

  here I also have to bump RPC_API_VERSION to 1.2. All of these bumps to
  1.2 are needed only because LinuxBridgeRpcCallbacks' RPC_API_VERSION is
  now 1.1.

  At the same time, we cannot use one big rpcproxy either.
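
  A minimal sketch of the direction this report asks for: give each
  feature mixin its own version constant so that bumping one does not
  force the others. The names are illustrative, not an actual neutron API:

      class L3RpcCallbackMixin(object):
          L3_RPC_API_VERSION = '1.2'   # bumped for the new L3 method only

      class SecurityGroupRpcCallbackMixin(object):
          SG_RPC_API_VERSION = '1.1'

      class LinuxBridgeRpcCallbacks(L3RpcCallbackMixin,
                                    SecurityGroupRpcCallbackMixin):
          # No single class-wide RPC_API_VERSION to keep in lock-step.
          pass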

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1112634/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1445830] Re: unit test in IpsetManagerTestCaseHashArgs fails

2016-09-06 Thread Launchpad Bug Tracker
[Expired for neutron because there has been no activity for 60 days.]

** Changed in: neutron
   Status: Incomplete => Expired

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1445830

Title:
  unit test in IpsetManagerTestCaseHashArgs fails

Status in neutron:
  Expired

Bug description:
  Hi,

  When building Neutron Kilo RC1, there's a unique unit test failure
  (which is not bad, but it would be super nice to get that last one
  fixed for the final release). Below is the trace. Full build log
  available here:

  https://kilo-jessie.pkgs.mirantis.com/job/neutron/29/consoleFull

  If you wish to rebuild the package yourself in Jessie, here's the 
instructions:
  http://openstack.alioth.debian.org

  FAIL: 
neutron.tests.unit.agent.linux.test_ipset_manager.IpsetManagerTestCaseHashArgs.test_set_members_adding_more_than_5
  
neutron.tests.unit.agent.linux.test_ipset_manager.IpsetManagerTestCaseHashArgs.test_set_members_adding_more_than_5
  --
  _StringException: Traceback (most recent call last):
  _StringException: Empty attachments:
pythonlogging:''
pythonlogging:'neutron.api.extensions'
stderr
stdout

  Traceback (most recent call last):
File 
"/��PKGBUILDDIR��/neutron/tests/unit/agent/linux/test_ipset_manager.py", line 
135, in test_set_members_adding_more_than_5
  self.verify_mock_calls()
File 
"/��PKGBUILDDIR��/neutron/tests/unit/agent/linux/test_ipset_manager.py", line 
43, in verify_mock_calls
  self.execute.assert_has_calls(self.expected_calls, any_order=False)
File "/usr/lib/python2.7/dist-packages/mock.py", line 872, in 
assert_has_calls
  'Actual: %r' % (calls, self.mock_calls)
  AssertionError: Calls not found.
  Expected: [call(['ipset', 'create', '-exist', 'IPv4fake_sgid', 'hash:ip', 
'family', 'inet', 'hashsize', '2048', 'maxelem', '131072'], run_as_root=True, 
process_input=None), call(['ipset', 'restore', '-exist'], run_as_root=True, 
process_input='create IPv4fake_sgid-new hash:ip family inet hashsize 2048 
maxelem 131072\nadd IPv4fake_sgid-new 10.0.0.1'), call(['ipset', 'swap', 
'IPv4fake_sgid-new', 'IPv4fake_sgid'], run_as_root=True, process_input=None), 
call(['ipset', 'destroy', 'IPv4fake_sgid-new'], run_as_root=True, 
process_input=None), call(['ipset', 'restore', '-exist'], run_as_root=True, 
process_input='create IPv4fake_sgid-new hash:ip family inet hashsize 2048 
maxelem 131072\nadd IPv4fake_sgid-new 10.0.0.1\nadd IPv4fake_sgid-new 
10.0.0.2\nadd IPv4fake_sgid-new 10.0.0.3'), call(['ipset', 'swap', 
'IPv4fake_sgid-new', 'IPv4fake_sgid'], run_as_root=True, process_input=None), 
call(['ipset', 'destroy', 'IPv4fake_sgid-new'], run_as_root=True, 
process_input=None)]
  Actual: [call(['ipset', 'create', '-exist', 'IPv4fake_sgid', 'hash:ip', 
'family', 'inet', 'hashsize', '2048', 'maxelem', '131072'], run_as_root=True, 
process_input=None),
   call(['ipset', 'restore', '-exist'], run_as_root=True, process_input='create 
IPv4fake_sgid-new hash:ip family inet hashsize 2048 maxelem 131072\nadd 
IPv4fake_sgid-new 10.0.0.1'),
   call(['ipset', 'swap', 'IPv4fake_sgid-new', 'IPv4fake_sgid'], 
run_as_root=True, process_input=None),
   call(['ipset', 'destroy', 'IPv4fake_sgid-new'], run_as_root=True, 
process_input=None),
   call(['ipset', 'add', '-exist', 'IPv4fake_sgid', '10.0.0.3'], 
run_as_root=True, process_input=None),
   call(['ipset', 'add', '-exist', 'IPv4fake_sgid', '10.0.0.2'], 
run_as_root=True, process_input=None)]

  Traceback (most recent call last):
  _StringException: Empty attachments:
pythonlogging:''
pythonlogging:'neutron.api.extensions'
stderr
stdout

  Traceback (most recent call last):
File 
"/��PKGBUILDDIR��/neutron/tests/unit/agent/linux/test_ipset_manager.py", line 
135, in test_set_members_adding_more_than_5
  self.verify_mock_calls()
File 
"/��PKGBUILDDIR��/neutron/tests/unit/agent/linux/test_ipset_manager.py", line 
43, in verify_mock_calls
  self.execute.assert_has_calls(self.expected_calls, any_order=False)
File "/usr/lib/python2.7/dist-packages/mock.py", line 872, in 
assert_has_calls
  'Actual: %r' % (calls, self.mock_calls)
  AssertionError: Calls not found.
  Expected: [call(['ipset', 'create', '-exist', 'IPv4fake_sgid', 'hash:ip', 
'family', 'inet', 'hashsize', '2048', 'maxelem', '131072'], run_as_root=True, 
process_input=None), call(['ipset', 'restore', '-exist'], run_as_root=True, 
process_input='create IPv4fake_sgid-new hash:ip family inet hashsize 2048 
maxelem 131072\nadd IPv4fake_sgid-new 10.0.0.1'), call(['ipset', 'swap', 
'IPv4fake_sgid-new', 'IPv4fake_sgid'], run_as_root=True, process_input=None), 
call(['ipset', 'destroy', 'IPv4fake_sgid-new'], run_as_root=True, 
process_input=None), call(['ipset', 'restore', '-exist'], run_as_root=True, 
process_input='create IPv4fake_sgid-new hash:ip 

[Yahoo-eng-team] [Bug 1444014] Re: StaleDataError on heat stack delete

2016-09-06 Thread Launchpad Bug Tracker
[Expired for neutron because there has been no activity for 60 days.]

** Changed in: neutron
   Status: Incomplete => Expired

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1444014

Title:
  StaleDataError on heat stack delete

Status in neutron:
  Expired

Bug description:
  When deleting a heat stack with a VM on a subnet connected to an
  external net via a DVR router, sometimes the following error occurs:

  2015-04-10 11:01:43.178 11666 ERROR neutron.api.v2.resource 
[req-333b9ecb-6c49-4146-bb2f-0b0cc827120a ] delete failed
  2015-04-10 11:01:43.178 11666 TRACE neutron.api.v2.resource Traceback (most 
recent call last):
  2015-04-10 11:01:43.178 11666 TRACE neutron.api.v2.resource   File 
"/usr/local/lib/python2.7/dist-packages/neutron/api/v2/resource.py", line 83, 
in resource
  2015-04-10 11:01:43.178 11666 TRACE neutron.api.v2.resource result = 
method(request=request, **args)
  2015-04-10 11:01:43.178 11666 TRACE neutron.api.v2.resource   File 
"/usr/local/lib/python2.7/dist-packages/neutron/api/v2/base.py", line 490, in 
delete
  2015-04-10 11:01:43.178 11666 TRACE neutron.api.v2.resource 
obj_deleter(request.context, id, **kwargs)
  2015-04-10 11:01:43.178 11666 TRACE neutron.api.v2.resource   File 
"/usr/local/lib/python2.7/dist-packages/neutron/plugins/ml2/plugin.py", line 
1254, in delete_port
  2015-04-10 11:01:43.178 11666 TRACE neutron.api.v2.resource context, id)
  2015-04-10 11:01:43.178 11666 TRACE neutron.api.v2.resource   File 
"/usr/local/lib/python2.7/dist-packages/neutron/db/l3_dvrscheduler_db.py", line 
202, in dvr_deletens_if_no_port
  2015-04-10 11:01:43.178 11666 TRACE neutron.api.v2.resource port_host)
  2015-04-10 11:01:43.178 11666 TRACE neutron.api.v2.resource   File 
"/usr/local/lib/python2.7/dist-packages/neutron/db/agents_db.py", line 193, in 
_get_agent_by_type_and_host
  2015-04-10 11:01:43.178 11666 TRACE neutron.api.v2.resource Agent.host == 
host).one()
  2015-04-10 11:01:43.178 11666 TRACE neutron.api.v2.resource   File 
"/usr/lib/python2.7/dist-packages/sqlalchemy/orm/query.py", line 2369, in one
  2015-04-10 11:01:43.178 11666 TRACE neutron.api.v2.resource ret = 
list(self)
  2015-04-10 11:01:43.178 11666 TRACE neutron.api.v2.resource   File 
"/usr/lib/python2.7/dist-packages/sqlalchemy/orm/query.py", line 2411, in 
__iter__
  2015-04-10 11:01:43.178 11666 TRACE neutron.api.v2.resource 
self.session._autoflush()
  2015-04-10 11:01:43.178 11666 TRACE neutron.api.v2.resource   File 
"/usr/lib/python2.7/dist-packages/sqlalchemy/orm/session.py", line 1198, in 
_autoflush
  2015-04-10 11:01:43.178 11666 TRACE neutron.api.v2.resource self.flush()
  2015-04-10 11:01:43.178 11666 TRACE neutron.api.v2.resource   File 
"/usr/lib/python2.7/dist-packages/sqlalchemy/orm/session.py", line 1919, in 
flush
  2015-04-10 11:01:43.178 11666 TRACE neutron.api.v2.resource 
self._flush(objects)
  2015-04-10 11:01:43.178 11666 TRACE neutron.api.v2.resource   File 
"/usr/lib/python2.7/dist-packages/sqlalchemy/orm/session.py", line 2037, in 
_flush
  2015-04-10 11:01:43.178 11666 TRACE neutron.api.v2.resource 
transaction.rollback(_capture_exception=True)
  2015-04-10 11:01:43.178 11666 TRACE neutron.api.v2.resource   File 
"/usr/lib/python2.7/dist-packages/sqlalchemy/util/langhelpers.py", line 60, in 
__exit__
  2015-04-10 11:01:43.178 11666 TRACE neutron.api.v2.resource 
compat.reraise(exc_type, exc_value, exc_tb)
  2015-04-10 11:01:43.178 11666 TRACE neutron.api.v2.resource   File 
"/usr/lib/python2.7/dist-packages/sqlalchemy/orm/session.py", line 2001, in 
_flush
  2015-04-10 11:01:43.178 11666 TRACE neutron.api.v2.resource 
flush_context.execute()
  2015-04-10 11:01:43.178 11666 TRACE neutron.api.v2.resource   File 
"/usr/lib/python2.7/dist-packages/sqlalchemy/orm/unitofwork.py", line 372, in 
execute
  2015-04-10 11:01:43.178 11666 TRACE neutron.api.v2.resource 
rec.execute(self)
  2015-04-10 11:01:43.178 11666 TRACE neutron.api.v2.resource   File 
"/usr/lib/python2.7/dist-packages/sqlalchemy/orm/unitofwork.py", line 526, in 
execute
  2015-04-10 11:01:43.178 11666 TRACE neutron.api.v2.resource uow
  2015-04-10 11:01:43.178 11666 TRACE neutron.api.v2.resource   File 
"/usr/lib/python2.7/dist-packages/sqlalchemy/orm/persistence.py", line 60, in 
save_obj
  2015-04-10 11:01:43.178 11666 TRACE neutron.api.v2.resource mapper, 
table, update)
  2015-04-10 11:01:43.178 11666 TRACE neutron.api.v2.resource   File 
"/usr/lib/python2.7/dist-packages/sqlalchemy/orm/persistence.py", line 536, in 
_emit_update_statements
  2015-04-10 11:01:43.178 11666 TRACE neutron.api.v2.resource 
(table.description, len(update), rows))
  2015-04-10 11:01:43.178 11666 TRACE neutron.api.v2.resource StaleDataError: 
UPDATE statement on table 'ml2_dvr_port_bindings' expected to update 1 row(s); 
0 were matched.

  This prevents the VM from being deleted.

[Yahoo-eng-team] [Bug 1373266] Re: Fix spaces in log/exception messages and add period in help messages

2016-09-06 Thread Launchpad Bug Tracker
[Expired for neutron because there has been no activity for 60 days.]

** Changed in: neutron
   Status: Incomplete => Expired

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1373266

Title:
  Fix spaces in log/exception messages and add period in help messages

Status in neutron:
  Expired

Bug description:
  As per the standards mentioned in
  http://docs.openstack.org/developer/oslo.config/styleguide.html:

  1. Fix spacing issues in log/exception messages:
     add the space at the end of a line (message) rather than at the
     start of the next line.

  2. Add a period at the end of each sentence in help messages.

  A short example of both points follows.
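
  The example below assumes the oslo_config import path of the time; the
  option name and values are illustrative:

      from oslo_config import cfg

      # Point 1: the space ends the first line instead of starting the next.
      LOG_MSG = ('Agent %(id)s on host %(host)s is considered down '
                 'after %(delay)s seconds.')

      # Point 2: every sentence in the help text ends with a period.
      opt = cfg.IntOpt('agent_down_time',
                       default=75,
                       help='Seconds to regard the agent as down. Should '
                            'be at least twice the report interval.')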

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1373266/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1398988] Re: dhcp-agent / network bindings out of sync, stopping dhcp-agent

2016-09-06 Thread Launchpad Bug Tracker
[Expired for neutron because there has been no activity for 60 days.]

** Changed in: neutron
   Status: Incomplete => Expired

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1398988

Title:
  dhcp-agent / network bindings out of sync, stopping dhcp-agent

Status in neutron:
  Expired

Bug description:
  dhcp-agent on a neutron host started dying with

  ---
  2014-12-02 20:52:20.739 17097 TRACE neutron.agent.dhcp_agent Traceback (most 
recent call last):
  2014-12-02 20:52:20.739 17097 TRACE neutron.agent.dhcp_agent 
  2014-12-02 20:52:20.739 17097 TRACE neutron.agent.dhcp_agent   File 
"/usr/lib/python2.7/site-packages/neutron/openstack/common/rpc/amqp.py", line 
462, in _process_data
  2014-12-02 20:52:20.739 17097 TRACE neutron.agent.dhcp_agent **args)
  2014-12-02 20:52:20.739 17097 TRACE neutron.agent.dhcp_agent 
  2014-12-02 20:52:20.739 17097 TRACE neutron.agent.dhcp_agent   File 
"/usr/lib/python2.7/site-packages/neutron/common/rpc.py", line 45, in dispatch
  2014-12-02 20:52:20.739 17097 TRACE neutron.agent.dhcp_agent 
neutron_ctxt, version, method, namespace, **kwargs)
  2014-12-02 20:52:20.739 17097 TRACE neutron.agent.dhcp_agent 
  2014-12-02 20:52:20.739 17097 TRACE neutron.agent.dhcp_agent   File 
"/usr/lib/python2.7/site-packages/neutron/openstack/common/rpc/dispatcher.py", 
line 172, in dispatch
  2014-12-02 20:52:20.739 17097 TRACE neutron.agent.dhcp_agent result = 
getattr(proxyobj, method)(ctxt, **kwargs)
  2014-12-02 20:52:20.739 17097 TRACE neutron.agent.dhcp_agent 
  2014-12-02 20:52:20.739 17097 TRACE neutron.agent.dhcp_agent   File 
"/usr/lib/python2.7/site-packages/neutron/db/dhcp_rpc_base.py", line 92, in 
get_active_networks_info
  2014-12-02 20:52:20.739 17097 TRACE neutron.agent.dhcp_agent networks = 
self._get_active_networks(context, **kwargs)
  2014-12-02 20:52:20.739 17097 TRACE neutron.agent.dhcp_agent 
  2014-12-02 20:52:20.739 17097 TRACE neutron.agent.dhcp_agent   File 
"/usr/lib/python2.7/site-packages/neutron/db/dhcp_rpc_base.py", line 42, in 
_get_active_networks
  2014-12-02 20:52:20.739 17097 TRACE neutron.agent.dhcp_agent 
plugin.auto_schedule_networks(context, host)
  2014-12-02 20:52:20.739 17097 TRACE neutron.agent.dhcp_agent 
  2014-12-02 20:52:20.739 17097 TRACE neutron.agent.dhcp_agent   File 
"/usr/lib/python2.7/site-packages/neutron/db/agentschedulers_db.py", line 222, 
in auto_schedule_networks
  2014-12-02 20:52:20.739 17097 TRACE neutron.agent.dhcp_agent 
self.network_scheduler.auto_schedule_networks(self, context, host)
  2014-12-02 20:52:20.739 17097 TRACE neutron.agent.dhcp_agent 
  2014-12-02 20:52:20.739 17097 TRACE neutron.agent.dhcp_agent   File 
"/usr/lib/python2.7/site-packages/neutron/scheduler/dhcp_agent_scheduler.py", 
line 122, in auto_schedule_networks
  2014-12-02 20:52:20.739 17097 TRACE neutron.agent.dhcp_agent context, 
[net_id], active=True)
  2014-12-02 20:52:20.739 17097 TRACE neutron.agent.dhcp_agent 
  2014-12-02 20:52:20.739 17097 TRACE neutron.agent.dhcp_agent   File 
"/usr/lib/python2.7/site-packages/neutron/db/agentschedulers_db.py", line 126, 
in get_dhcp_agents_hosting_networks
  2014-12-02 20:52:20.739 17097 TRACE neutron.agent.dhcp_agent 
binding.dhcp_agent)]
  2014-12-02 20:52:20.739 17097 TRACE neutron.agent.dhcp_agent 
  2014-12-02 20:52:20.739 17097 TRACE neutron.agent.dhcp_agent   File 
"/usr/lib/python2.7/site-packages/neutron/db/agentschedulers_db.py", line 83, 
in is_eligible_agent
  2014-12-02 20:52:20.739 17097 TRACE neutron.agent.dhcp_agent 
agent['heartbeat_timestamp'])
  2014-12-02 20:52:20.739 17097 TRACE neutron.agent.dhcp_agent 
  2014-12-02 20:52:20.739 17097 TRACE neutron.agent.dhcp_agent TypeError: 
'NoneType' object has no attribute '__getitem__'
  2014-12-02 20:52:20.739 17097 TRACE neutron.agent.dhcp_agent 
  2014-12-02 20:52:20.739 17097 TRACE neutron.agent.dhcp_agent 
  ---

  Further investigation on the neutron server found that the
  networkdhcpagentbindings table seems to have gotten out of sync.

  We can see just one dhcp-agent:

  ---
  MariaDB [neutron]> select id from agents where agent_type="DHCP agent";
  +--+
  | id   |
  +--+
  | 6923675d-5616-4ffe-b2c4-4d130f67973f |
  +--+
  1 row in set (0.00 sec)
  ---

  But in the network bindings, at least three are listed:

  
  MariaDB [neutron]> select DISTINCT(dhcp_agent_id) from 
networkdhcpagentbindings;
  +--+
  | dhcp_agent_id|
  +--+
  | 6923675d-5616-4ffe-b2c4-4d130f67973f |
  | b23f9f97-da04-4f61-bcfb-f8514e43cefd |
  | d3e3ac5b-9962-428a-a9f8-6b2a1aba48d8 |
  +--+
  3 rows in set (0.00 sec)
  ---

  

[Yahoo-eng-team] [Bug 1451690] Re: Max Retry Times works incorrectly in monitor of Load Balance

2016-09-06 Thread Launchpad Bug Tracker
[Expired for neutron because there has been no activity for 60 days.]

** Changed in: neutron
   Status: Incomplete => Expired

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1451690

Title:
  Max Retry Times works incorrectly in monitor of Load Balance

Status in neutron:
  Expired

Bug description:
  1. Under the test tenant, create network net10 with subnet subnet10
     (network address 192.168.10.0/24); keep the other settings at their
     defaults.
  2. Create router R1, attach its inner interface to subnet10, and set an
     outer network for R1.
     Create VM1-1 (port 80 is denied by iptables) on subnet10, with the
     default security group and a centos image.
     Create VM1-2 (port 80 is denied by iptables) on subnet10, with the
     default security group and a centos image.
  3. Create a resource pool named "FZ1" on subnet10, protocol http, load
     balance mode ROUND_ROBIN.
     Add a VIP for "FZ1", assign IP 192.168.10.16, protocol port 80, and
     keep the rest at their defaults.
  4. Add members VM1-1 (192.168.10.5) and VM1-2 (192.168.10.6) to "FZ1",
     weight 1, protocol port 80.
  5. Add a "TCP" monitor (delay 3, timeout 21, max retry times 8) and
     associate it with FZ1.
  6. Capture traffic on VM1-1 and VM1-2; the observed number of retries
     exceeds 8.

  You can check the attachment.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1451690/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1356926] Re: ipv6 subnet reserves 2 addresses in allocation pool

2016-09-06 Thread Launchpad Bug Tracker
[Expired for neutron because there has been no activity for 60 days.]

** Changed in: neutron
   Status: Incomplete => Expired

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1356926

Title:
  ipv6 subnet reserves 2 addresses in allocation pool

Status in neutron:
  Expired

Bug description:
  Consider this table:

  125::/125  | {"start": "125::1", "end": "125::6"}  |
  126::/126  | {"start": "126::1", "end": "126::2"}  |
  127::/127  |                                       |
  128::/128  |                                       |

  All of those subnets were created with:

  neutron subnet-create --ip_version 6 --disable-dhcp --no-gateway NETWORK CIDR

  You can see that ::0 and the largest address in the CIDR are reserved.
  This is similar to IPv4, where we have a network address and a
  broadcast address, whereas according to
  http://www.iana.org/assignments/ipv6-interface-ids/ipv6-interface-ids.xhtml
  there should be no such reservation for IPv6. The situation for the
  /127 and /128 prefixes is even worse: no addresses at all.
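
  A short illustration of the reporter's arithmetic using netaddr (this
  is just a sketch over the cited CIDRs, not neutron code):

      import netaddr

      for cidr in ('125::/125', '126::/126', '127::/127', '128::/128'):
          net = netaddr.IPNetwork(cidr)
          # Reserving ::0 and the last address leaves size - 2 usable
          # addresses, which is zero for /127 and /128.
          print(cidr, max(net.size - 2, 0))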

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1356926/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1451630] Re: The "PING" monitor in load balance uses TCP protocol, not ICMP

2016-09-06 Thread Launchpad Bug Tracker
[Expired for neutron because there has been no activity for 60 days.]

** Changed in: neutron
   Status: Incomplete => Expired

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1451630

Title:
  The "PING" monitor in load balance use TCP protocol not ICMP

Status in neutron:
  Expired

Bug description:
  1. Under the test tenant, create network net1 with subnet subnet1
     (network address 192.168.1.0/24); keep the other settings at their
     defaults.
  2. Create router R1, attach its inner interface to subnet1, and set an
     outer network for R1.
     Create VM1-1 (port 80 is denied by iptables) on subnet1, with the
     default security group and a centos image.
     Create VM1-2 (port 80 is denied by iptables) on subnet1, with the
     default security group and a centos image.
  3. Create a resource pool named "FZ1" on subnet1, protocol http, load
     balance mode ROUND_ROBIN.
     Add a VIP for "FZ1", assign IP 192.168.1.16, protocol port 80, and
     keep the rest at their defaults.
  4. Add members VM1-1 and VM1-2 to "FZ1", weight 1, protocol port 80.
  5. Add a "PING" monitor (delay 1, timeout 1, max retry times 1) and
     associate it with FZ1.
  -> The status of VM1-1 and VM1-2 is inactive; it should be active.
     Capturing on the VM1-1 and VM1-2 interfaces, we find that the VIP
     address monitors using TCP port 80, not ICMP.
  6. Stop the iptables service in VM1-1 and VM1-2.
  -> The status of VM1-1 and VM1-2 becomes active.

  In summary, the "PING" monitor uses the TCP protocol.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1451630/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1405099] Re: InvalidQuotaValue should use error code 403 instead of 409 (Conflict)

2016-09-06 Thread Launchpad Bug Tracker
[Expired for neutron because there has been no activity for 60 days.]

** Changed in: neutron
   Status: Incomplete => Expired

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1405099

Title:
  InvalidQuotaValue should use error code 403 instead of 409 (Conflict)

Status in neutron:
  Expired

Bug description:
  The neutron InvalidQuotaValue exception extends Conflict
  (HTTPConflict, error code 409), but that doesn't really make sense for
  this exception. It should rather be 403 (Forbidden).

  The API docs list the possible response codes for quota extension
  operations as 401 or 403:

  http://docs.openstack.org/api/openstack-network/2.0/content/Update_Quotas.html
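
  A minimal sketch of the proposed change; whether NotAuthorized is the
  right 403 base in neutron's exception hierarchy is an assumption, and
  the message text is illustrative:

      from neutron.common import exceptions as n_exc

      # Currently the class extends Conflict, which maps to HTTP 409.
      # The proposal is to extend a 403 base instead, e.g.:
      class InvalidQuotaValue(n_exc.NotAuthorized):
          message = "Invalid value for the requested quota resource."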

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1405099/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1347880] Re: add_interface_router with an empty dict returns 500 error

2016-09-06 Thread Launchpad Bug Tracker
[Expired for neutron because there has been no activity for 60 days.]

** Changed in: neutron
   Status: Incomplete => Expired

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1347880

Title:
  add_interface_router with an empty dict returns 500 error

Status in neutron:
  Expired

Bug description:
  If add_interface_router is called with a router id and a dictionary
  that does not contain the magic keys 'port_id' or 'subnet_id', there
  is no useful error message.
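
  A minimal sketch of the missing validation, assuming neutron's
  BadRequest exception of that era; the helper name is illustrative:

      from neutron.common import exceptions as n_exc

      def validate_interface_info(interface_info):
          # Reject a body that names neither magic key with a 400
          # instead of letting the request fail with a 500.
          if not interface_info or not ('port_id' in interface_info or
                                        'subnet_id' in interface_info):
              raise n_exc.BadRequest(
                  resource='router',
                  msg='Either port_id or subnet_id must be specified')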

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1347880/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1450874] Re: Delay in network access after instance resize/migration using linuxbridge and vlan

2016-09-06 Thread Launchpad Bug Tracker
[Expired for neutron because there has been no activity for 60 days.]

** Changed in: neutron
   Status: Incomplete => Expired

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1450874

Title:
  Delay in network access after instance resize/migration using
  linuxbridge and vlan

Status in neutron:
  Expired

Bug description:
  Performing an instance resize migrates the instance to another host.
  When the new instance gets built up, the new VIF gets plugged; however,
  connectivity to the IP is delayed. arping from the neutron router gets
  no response for about a minute, as do attempts to access via a
  floating IP.

  If a resize is reverted and the instance goes back to the original
  host, connectivity is restored almost instantly.

  I've included some neutron config, let me know if more is desired.

  This is on Juno.

  Neutron.conf (secrets munged):
  [DEFAULT]
  debug = False
  verbose = True

  # Logging #
  log_dir = /var/log/neutron

  agent_down_time = 20

  api_workers = 3

  
  auth_strategy = keystone
  core_plugin = neutron.plugins.ml2.plugin.Ml2Plugin
  service_plugins = neutron.services.l3_router.l3_router_plugin.L3RouterPlugin
  allow_overlapping_ips = False

  rabbit_host = 10.233.19.1
  rabbit_port = 5672
  rabbit_userid = openstack
  rabbit_password = MUNGE
  rpc_backend = neutron.openstack.common.rpc.impl_kombu

  bind_host = 0.0.0.0
  bind_port = 9696

  api_paste_config = api-paste.ini

  control_exchange = neutron

  notification_driver = neutron.openstack.common.notifier.no_op_notifier

  notification_topics = notifications

  lock_path = $state_path/lock

  # == neutron nova interactions ==
  notify_nova_on_port_data_changes = True
  notify_nova_on_port_status_changes = True
  nova_url = https://bbg-staging-01.openstack.blueboxgrid.com:8777/v2
  nova_region_name = RegionOne
  nova_admin_username = neutron
  nova_admin_tenant_id = MUNGE
  nova_admin_password = MUNGE
  nova_admin_auth_url = 
https://bbg-staging-01.openstack.blueboxgrid.com:5001/v2.0
  nova_ca_certificates_file = /etc/ssl/certs/ca-certificates.crt

  [QUOTAS]

  [DEFAULT_SERVICETYPE]

  [SECURITYGROUP]

  [AGENT]
  report_interval = 4

  [keystone_authtoken]
  identity_uri = https://bbg-staging-01.openstack.blueboxgrid.com:35358
  auth_uri = https://bbg-staging-01.openstack.blueboxgrid.com:5001/v2.0
  admin_tenant_name = service
  admin_user = neutron
  admin_password = MUNGE
  signing_dir = /var/cache/neutron/api
  cafile = /etc/ssl/certs/ca-certificates.crt

  [DATABASE]
  sqlalchemy_pool_size = 60

  l3_agent.ini:
  [DEFAULT]
  debug = False

  state_path = /var/lib/neutron

  interface_driver = neutron.agent.linux.interface.BridgeInterfaceDriver

  auth_url = https://bbg-staging-01.openstack.blueboxgrid.com:35358/v2.0
  admin_tenant_name = service
  admin_user = neutron
  admin_password = MUNGE
  metadata_ip = bbg-staging-01.openstack.blueboxgrid.com
  use_namespaces = True
  external_network_bridge =

  [AGENT]
  root_helper = sudo /usr/local/bin/neutron-rootwrap /etc/neutron/rootwrap.conf

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1450874/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1455303] Re: centos7 + firewalld leads to tempest ipv6 failures

2016-09-06 Thread Launchpad Bug Tracker
[Expired for neutron because there has been no activity for 60 days.]

** Changed in: neutron
   Status: Incomplete => Expired

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1455303

Title:
  centos7 + firewalld leads to tempest ipv6 failures

Status in neutron:
  Expired

Bug description:
  Right now, centos7+neutron fails in tempest due to not setting
  netfilter to run on bridges.  I have a review out for that [1]

  However, correctly enabling these settings leads to two further
  tempest failures

   
tempest.scenario.test_network_v6.TestGettingAddress.test_dhcp6_stateless_from_os
 [79.318876s] ... FAILED
   tempest.scenario.test_network_v6.TestGettingAddress.test_slaac_from_os 
[70.714588s] ... FAILED

  The failure is a lack of ipv6 communication

tempest_lib.exceptions.SSHExecCommandFailed: Command 'ping6 -c1 -w1
  -s56 2003::f816:3eff:fe6d:bd04', exit status: 1, Error:

  I have two runs just executing these tests:

* fail with net.bridge.bridge-nf-call-ip6tables=1 ->
   
http://logs.openstack.org/89/179689/16/check/check-tempest-dsvm-centos7/8d6835c/
* pass with net.bridge.bridge-nf-call-ip6tables=0 ->
   
http://logs.openstack.org/89/179689/17/check/check-tempest-dsvm-centos7/dc97d29/

  However, if I disable firewalld, this starts working
  (http://logs.openstack.org/89/179689/21/check/check-tempest-dsvm-
  centos7/530c356/)

  I note that RDO, redhat's own deployment method, disables firewalld,
  because it uses [2].  So I don't really expect this is high on
  anyone's priority list, but that's where the logs are if we want to
  dig into this

  [1] https://review.openstack.org/180867
  [2] 
https://github.com/redhat-openstack/openstack-puppet-modules/blob/master/firewall/manifests/linux/redhat.pp#L27

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1455303/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1373633] Re: Nuage syncmanager does not sync external networks with VSD, if gateway is not set

2016-09-06 Thread Launchpad Bug Tracker
[Expired for neutron because there has been no activity for 60 days.]

** Changed in: neutron
   Status: Incomplete => Expired

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1373633

Title:
  Nuage syncmanager does not sync external networks with VSD, if gateway
  is not set

Status in neutron:
  Expired

Bug description:
  The Nuage syncmanager is able to sync an external network
  as a sharednetwork in VSD if a gateway is configured for that
  external network.

  But when a gateway is not configured, the syncmanager does
  not pass enough information to the backend to perform the sync.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1373633/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1424722] Re: neutron should instantiate an oslo.messaging transport after fork

2016-09-06 Thread Launchpad Bug Tracker
[Expired for neutron because there has been no activity for 60 days.]

** Changed in: neutron
   Status: Incomplete => Expired

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1424722

Title:
  neutron should instantiate an oslo.messaging transport after fork

Status in neutron:
  Expired

Bug description:
  As per
  http://docs.openstack.org/developer/oslo.messaging/transport.html,

  "oslo.messaging can’t ensure that forking a process that shares the
  same transport object is safe for the library consumer, because it
  relies on different 3rd party libraries that don’t ensure that. In
  certain cases, with some drivers, it does work"

  In neutron, we initialize the transport object before forking workers.
  We do it by calling neutron.common.rpc:init, which is called from
  neutron.common.config:init, which runs BEFORE we call
  neutron.service:serve_rpc, which actually forks the workers.

  Note that in neutron's case it's not just a matter of moving the
  initialization a bit lower, due to the way plugins are instantiated.
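
  A minimal sketch contrasting the two orderings; the helpers stand in
  for neutron.common.rpc:init and the worker fork in neutron.service,
  and run_worker is a placeholder:

      import os

      from oslo_config import cfg
      import oslo_messaging as messaging

      def run_worker(transport):
          pass  # placeholder for the RPC worker loop

      def serve_rpc_unsafe(workers):
          transport = messaging.get_transport(cfg.CONF)  # BEFORE the fork
          for _ in range(workers):
              if os.fork() == 0:
                  run_worker(transport)  # children share one transport

      def serve_rpc_safe(workers):
          for _ in range(workers):
              if os.fork() == 0:
                  # Created AFTER the fork: one transport per child, as
                  # the oslo.messaging docs recommend.
                  run_worker(messaging.get_transport(cfg.CONF))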

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1424722/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1437140] Re: dhcp agent should reduce 'reload_allocations' times

2016-09-06 Thread Launchpad Bug Tracker
[Expired for neutron because there has been no activity for 60 days.]

** Changed in: neutron
   Status: Incomplete => Expired

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1437140

Title:
  dhcp agent should reduce 'reload_allocations' times

Status in neutron:
  Expired

Bug description:
  Currently, when the dhcp agent receives a 'port_update_end',
  'port_create_end' or 'port_delete_end' message, it calls the driver's
  'reload_allocations' method every time. I don't think it needs to
  reload allocations every time; for example, when bulk-creating two
  ports in one network, it only needs to reload once (see the sketch
  below).
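
  A hedged sketch of the batching idea: coalesce port notifications per
  network and reload allocations once per quiet period. This is
  illustrative only; the real agent would wire this through its own
  event loop:

      import threading

      class ReloadBatcher(object):
          def __init__(self, driver, delay=0.5):
              self._driver = driver
              self._delay = delay
              self._timers = {}

          def port_event(self, network_id):
              # Restart the quiet period for this network.
              timer = self._timers.pop(network_id, None)
              if timer is not None:
                  timer.cancel()
              timer = threading.Timer(self._delay, self._fire,
                                      (network_id,))
              self._timers[network_id] = timer
              timer.start()

          def _fire(self, network_id):
              self._timers.pop(network_id, None)
              self._driver.reload_allocations(network_id)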

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1437140/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1202865] Re: l3-agent trying to update router after it is deleted in server side

2016-09-06 Thread Launchpad Bug Tracker
[Expired for neutron because there has been no activity for 60 days.]

** Changed in: neutron
   Status: Incomplete => Expired

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1202865

Title:
  l3-agent trying to update router after it is deleted in server side

Status in neutron:
  Expired

Bug description:
  The l3-agent tries to update a router after it has been deleted on the
  server side.

  I got this error in gating:
  http://paste.openstack.org/show/40858/

  This is not a harmful error; however, this situation should be
  prevented.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1202865/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1402908] Re: Create the second vip use one pool id should not return 500 error

2016-09-06 Thread Launchpad Bug Tracker
[Expired for neutron because there has been no activity for 60 days.]

** Changed in: neutron
   Status: Incomplete => Expired

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1402908

Title:
  Create the second vip use one pool id should not return 500 error

Status in neutron:
  Expired

Bug description:
  SYMPTOM:
  1. Create the first VIP using one pool id. (succeeds)
  2. Create a second VIP using the same pool id. (fails)

  The second call returns "Internal Server Error (HTTP 500)".
  I think it should not return a 500 error; it should probably be a 409
  error, with detailed help information.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1402908/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1427973] Re: Add icmpv6 and remove None from sg_supported_protocols

2016-09-06 Thread Launchpad Bug Tracker
[Expired for neutron because there has been no activity for 60 days.]

** Changed in: neutron
   Status: Incomplete => Expired

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1427973

Title:
  Add icmpv6 and remove None from sg_supported_protocols

Status in neutron:
  Expired

Bug description:
  sg_supported_protocols is a list containing the human-readable
  protocols supported by security group rules. We should add icmpv6 to
  it, and since None is not a kind of protocol, we should remove None to
  keep the list clear. A sketch of the change follows.
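
  The actual list lives in neutron's security group code; the values
  here just follow the bug text:

      # before
      sg_supported_protocols = [None, 'tcp', 'udp', 'icmp']
      # after
      sg_supported_protocols = ['tcp', 'udp', 'icmp', 'icmpv6']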

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1427973/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1451388] Re: OVS neutron agent missing ports on Hyper-V

2016-09-06 Thread Launchpad Bug Tracker
[Expired for neutron because there has been no activity for 60 days.]

** Changed in: neutron
   Status: Incomplete => Expired

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1451388

Title:
  OVS neutron agent missing ports on Hyper-V

Status in neutron:
  Expired

Bug description:
  In order for networking to work on Hyper-V, two ports must be added in
  br-tun, "external.1" and "internal", along with their associated
  flows. If the OVS neutron agent restarts, these two ports and flows
  get deleted when setting up the tunnel bridge and need to be added
  again.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1451388/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1405091] Re: Create the same member with the same address, protocol-port and pool id should not return 500 error

2016-09-06 Thread Launchpad Bug Tracker
[Expired for neutron because there has been no activity for 60 days.]

** Changed in: neutron
   Status: Incomplete => Expired

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1405091

Title:
  Create the same member with the same address, protocol-port and pool
  id should not return 500 error

Status in neutron:
  Expired

Bug description:
  SYMPTOM:
  1. Create the first member using an address, a protocol-port and one
     pool id. (succeeds)
  2. Create a second member using the same address, protocol-port and
     pool id. (fails)

  The second call returns "Internal Server Error (HTTP 500)".
  I think it should not return a 500 error; it should probably be a 409
  error.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1405091/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1413041] Re: for many of endpoint classes, which methods are for RPC is not clear

2016-09-06 Thread Launchpad Bug Tracker
[Expired for neutron because there has been no activity for 60 days.]

** Changed in: neutron
   Status: Incomplete => Expired

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1413041

Title:
  for many of endpoint classes, which methods are for RPC is not clear

Status in neutron:
  Expired

Bug description:
  As discussed in https://review.openstack.org/#/c/130676/,
  for many of the RPC endpoint classes there is no clear separation
  between internal methods and RPC methods.
  Heavy use of mixins has made the situation worse.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1413041/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1253135] Re: Auto-detect if iproute2 allows clean namespace deletion

2016-09-06 Thread Launchpad Bug Tracker
[Expired for neutron because there has been no activity for 60 days.]

** Changed in: neutron
   Status: Incomplete => Expired

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1253135

Title:
  Auto-detect if iproute2 allows clean namespace deletion

Status in neutron:
  Expired

Bug description:
  An idea came up in https://review.openstack.org/#/c/56114 to auto-
  detect whether iproute2 allows clean deletion of namespaces.  It
  wasn't clear how to do this easily at the time.

  I'm filing this bug to potentially follow up on that idea.
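
  One possible detection approach, sketched as a runtime probe; the
  commands are standard iproute2, but whether a successful delete counts
  as "clean" for the review's purposes is an assumption:

      import subprocess
      import uuid

      def netns_delete_works():
          # Create a throwaway namespace and try to delete it again.
          name = 'nsprobe-' + uuid.uuid4().hex[:8]
          subprocess.check_call(['ip', 'netns', 'add', name])
          try:
              subprocess.check_call(['ip', 'netns', 'delete', name])
              return True
          except subprocess.CalledProcessError:
              return False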

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1253135/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1440221] Re: need ipv6 tests for lbaasv2

2016-09-06 Thread Launchpad Bug Tracker
[Expired for neutron because there has been no activity for 60 days.]

** Changed in: neutron
   Status: Incomplete => Expired

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1440221

Title:
  need ipv6 tests for lbaasv2

Status in neutron:
  Expired

Bug description:
  All of our tests are ipv4, but we should support v6 at this point.
  Let's test it.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1440221/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1456512] Re: vpn and l3 agent have a conflict in icehouse.

2016-09-06 Thread Launchpad Bug Tracker
[Expired for neutron because there has been no activity for 60 days.]

** Changed in: neutron
   Status: Incomplete => Expired

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1456512

Title:
  vpn and l3 agent have a conflict in icehouse.

Status in neutron:
  Expired

Bug description:
  Test steps:
  1. Create subnets named A and B.
  2. Create routers named A and B.
  3. Add subnet A to router A and set a gateway for router A; then do
     the same with B.
  4. Create VPN A: its VPN subnet is subnet A, its peer gateway is
     router B's gateway, and its peer subnet is subnet B.
  5. Create VPN B: its VPN subnet is subnet B, its peer gateway is
     router A's gateway, and its peer subnet is subnet A.

  Then test the VPN: subnets A and B can communicate.

  But after I restart the l3 agent or create a firewall (not a rule
  problem) in the tenant, subnets A and B can no longer communicate.

  I found an issue in the iptables nat tables of qrouter A and B:

  The VPN uses one chain to prevent SNAT, but after I restart the l3
  agent or create a firewall, the chain order changes,

  like this:

  Chain POSTROUTING (policy ACCEPT 19 packets, 1447 bytes)
  pkts bytes target prot opt in out source destination
  22 1699 neutron-l3-agent-POSTROUTING all – * * 0.0.0.0/0 0.0.0.0/0
  28 2167 neutron-postrouting-bottom all – * * 0.0.0.0/0 0.0.0.0/0
  26 1999 neutron-vpn-agen-POSTROUTING all – * * 0.0.0.0/0 0.0.0.0/0

  Chain neutron-l3-agent-POSTROUTING (1 references)
  pkts bytes target prot opt in out source destination
  0 0 ACCEPT all – !qg-bd458156-6e !qg-bd458156-6e 0.0.0.0/0 0.0.0.0/0 ! 
ctstate DNAT

  Chain neutron-postrouting-bottom (1 references)
  pkts bytes target prot opt in out source destination
  22 1699 neutron-l3-agent-snat all – * * 0.0.0.0/0 0.0.0.0/0
  25 1915 neutron-vpn-agen-snat all – * * 0.0.0.0/0 0.0.0.0/0

  Chain neutron-l3-agent-snat (1 references)
  pkts bytes target prot opt in out source destination
  22 1699 neutron-l3-agent-float-snat all – * * 0.0.0.0/0 0.0.0.0/0
  2 168 SNAT all – * * 111.111.111.0/24 0.0.0.0/0 to:12.12.12.54

  Chain neutron-vpn-agen-POSTROUTING (1 references)
  pkts bytes target prot opt in out source destination
  1 84 ACCEPT all – * * 111.111.111.0/24 123.123.123.0/24 policy match dir out pol ipsec
  0 0 ACCEPT all – !qg-bd458156-6e !qg-bd458156-6e 0.0.0.0/0 0.0.0.0/0 ! ctstate DNAT

  So the packet is SNATed first, and the VPN fails.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1456512/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1215018] Re: All the Namespace urls for the current extensions return 404

2016-09-06 Thread Launchpad Bug Tracker
[Expired for neutron because there has been no activity for 60 days.]

** Changed in: neutron
   Status: Incomplete => Expired

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1215018

Title:
  All the Namespace urls for the current extensions return 404

Status in neutron:
  Expired

Bug description:
  the API Doc,

  http://docs.openstack.org/api/openstack-
  network/2.0/content/retrieve_extensions.html

  
  List namespace as a field in the response, for listing all extensions. I 
would assume as purposes of documentation for users to have insight into what 
the extension is providing or modifying. Without as a user having to dig 
through the code.

  
   -- It is rather pointless to include a Namespace field if the URL don't 
point to anything, Namepsace urls should be updated with valid URL pointing to 
what I assume should be documentation (or removed if this is not intended).
   
   -- Tests should probably be added to make sure all current possible 
extensions point to a namespace url that returns a 200 with valid content.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1215018/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1415221] Re: dhcp lease default is unfathomably large

2016-09-06 Thread Launchpad Bug Tracker
[Expired for neutron because there has been no activity for 60 days.]

** Changed in: neutron
   Status: Incomplete => Expired

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1415221

Title:
  dhcp lease default is unfathomably large

Status in neutron:
  Expired

Bug description:
  The default lease time Neutron sets right now for the DHCP agent is
  86400. This means that changing the IP of a VM's port could make it
  unreachable for up to 12 hours (assuming it sends a DHCP request at
  half-lease intervals) because the iptables rules are immediately
  updated to only match the new address even though the client is using
  the old address.

  Restarting the VM in this case sucks. Logging into the VM via the
  console is okay as long as password based logins are still enabled,
  but that's still not scriptable.

  IMO this lease time is too large for networks that could have very
  dynamic life-cycles with regards to addressing.
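
  For illustration only (not part of the original report), the arithmetic
  behind the 12-hour figure, assuming clients renew at half the lease time:

    # Hypothetical helper: worst-case window during which a client keeps
    # using its old address after a port IP change, assuming renewal at
    # half-lease intervals.
    def worst_case_stale_seconds(lease_seconds):
        return lease_seconds / 2.0

    print(worst_case_stale_seconds(86400))  # 43200.0 s, i.e. 12 hours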

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1415221/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1346861] Re: l3 cannot re-create device in deleted namespace

2016-09-06 Thread Launchpad Bug Tracker
[Expired for neutron because there has been no activity for 60 days.]

** Changed in: neutron
   Status: Incomplete => Expired

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1346861

Title:
  l3 cannot re-create device in deleted namespace

Status in neutron:
  Expired

Bug description:
  If the namespace of an OVS-managed device (a device created by add-port
  followed by set type=internal) is in use by some process and is then
  deleted, the L3 agent will fail to re-create the device.

  Steps to repro:

  - Stop l3-agent.
  - Choose a router namespace with at least one ovs-managed device in it. For 
example, "qrouter-df5a3693-ec4d-4023-9e73-8dce9c4ac184" has a device 
"qg-df5a3693-ec"
  - Ensure the namespace is used by at least one process. For demo purpose, 
start another shell using "ip netns exec 
qrouter-df5a3693-ec4d-4023-9e73-8dce9c4ac184 bash". In reality, 
ns-metadata-proxy or keepalived may live in the namespace
  - Delete the namespace by "ip netns del 
qrouter-df5a3693-ec4d-4023-9e73-8dce9c4ac184". The command won't fail and the 
devices in the deleted namespace are still alive, observable by "ip link" in 
previously opened shell. However, there is no easy method to enter the 
namespace from outside again.
  - Start l3 agent.
  - Verify "qg-df5a3693-ec" cannot be recreated and managed by L3. The 
backtrace looks like (this is our branch, may differ with upstream):

ERROR neutron.agent.l3_agent Failed synchronizing routers
TRACE neutron.agent.l3_agent Traceback (most recent call last):
TRACE neutron.agent.l3_agent   File 
"/opt/stack/neutron/neutron/agent/l3_agent.py", line 1429, in _sync_routers_task
TRACE neutron.agent.l3_agent self._process_routers(routers, 
all_routers=True)
TRACE neutron.agent.l3_agent   File 
"/opt/stack/neutron/neutron/agent/l3_agent.py", line 1354, in _process_routers
TRACE neutron.agent.l3_agent self._router_added(r['id'], r)
TRACE neutron.agent.l3_agent   File 
"/opt/stack/neutron/neutron/agent/l3_agent.py", line 672, in _router_added
TRACE neutron.agent.l3_agent self.process_ha_router_added(ri)
TRACE neutron.agent.l3_agent   File 
"/opt/stack/neutron/neutron/agent/l3_agent.py", line 923, in 
process_ha_router_added
TRACE neutron.agent.l3_agent vip_cidrs=[gw_ip_cidr])
TRACE neutron.agent.l3_agent   File 
"/opt/stack/neutron/neutron/agent/l3_agent.py", line 897, in ha_network_added
TRACE neutron.agent.l3_agent prefix=HA_DEV_PREFIX)
TRACE neutron.agent.l3_agent   File 
"/opt/stack/neutron/neutron/agent/linux/interface.py", line 194, in plug
TRACE neutron.agent.l3_agent ns_dev.link.set_address(mac_address)
TRACE neutron.agent.l3_agent   File 
"/opt/stack/neutron/neutron/agent/linux/ip_lib.py", line 230, in set_address
TRACE neutron.agent.l3_agent self._as_root('set', self.name, 'address', 
mac_address)
TRACE neutron.agent.l3_agent   File 
"/opt/stack/neutron/neutron/agent/linux/ip_lib.py", line 217, in _as_root
TRACE neutron.agent.l3_agent kwargs.get('use_root_namespace', False))
TRACE neutron.agent.l3_agent   File 
"/opt/stack/neutron/neutron/agent/linux/ip_lib.py", line 70, in _as_root
TRACE neutron.agent.l3_agent namespace)
TRACE neutron.agent.l3_agent   File 
"/opt/stack/neutron/neutron/agent/linux/ip_lib.py", line 81, in _execute
TRACE neutron.agent.l3_agent root_helper=root_helper)
TRACE neutron.agent.l3_agent   File 
"/opt/stack/neutron/neutron/agent/linux/utils.py", line 90, in execute
TRACE neutron.agent.l3_agent raise RuntimeError(m)
TRACE neutron.agent.l3_agent RuntimeError: 
TRACE neutron.agent.l3_agent Command: ['sudo', 
'/usr/local/bin/neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'link', 
'set', 'ha-5bd08318-aa', 'address', 'fa:16:3e:f3:2b:6b']
TRACE neutron.agent.l3_agent Exit code: 1
TRACE neutron.agent.l3_agent Stdout: ''
TRACE neutron.agent.l3_agent Stderr: 'Cannot find device "ha-5bd08318-aa"\n'
TRACE neutron.agent.l3_agent 

  The root cause is that ovs-vsctl "can perform any number of commands
  in a single run, implemented as a single atomic transaction against
  the database." and neutron currently uses the following to create an
  OVS-managed device:

    ovs-vsctl -- --if-exists del-port qr-2f4c613d-b7 \
              -- add-port br-int qr-2f4c613d-b7 \
              -- set Interface qr-2f4c613d-b7 type=internal \
              -- set Interface qr-2f4c613d-b7 external-ids:iface-id=2f4c613d-b7f2-4d63-89c8-af2d48948d19 \
              -- set Interface qr-2f4c613d-b7 external-ids:iface-status=active \
              -- set Interface qr-2f4c613d-b7 external-ids:attached-mac=fa:16:3e:3c:4d:18

  OVS can delete devices it manages even when the device is in a deleted
  (lost) namespace. But if del-port, add-port and set type=internal are
  put together in one ovs-vsctl command, OVS does nothing to the device
  and it is left as is. A sketch of the fix follows.
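
  A minimal sketch of the obvious workaround, assuming the transaction is
  simply split so del-port runs on its own (illustrative only, not the
  actual neutron patch; the helper name is made up):

    import subprocess

    def replug_internal_port(bridge, port, iface_id, mac):
        # Run del-port as its own ovs-vsctl transaction so OVS really
        # removes a port whose namespace is gone...
        subprocess.check_call(
            ['ovs-vsctl', '--', '--if-exists', 'del-port', bridge, port])
        # ...then re-create and configure it in a second transaction.
        subprocess.check_call(
            ['ovs-vsctl',
             '--', 'add-port', bridge, port,
             '--', 'set', 'Interface', port, 'type=internal',
             '--', 'set', 'Interface', port,
             'external-ids:iface-id=%s' % iface_id,
             '--', 'set', 'Interface', port,
             'external-ids:attached-mac=%s' % mac])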

  

[Yahoo-eng-team] [Bug 1456722] Re: StrongSwan and dynamic peer: Resolv of host failed

2016-09-06 Thread Launchpad Bug Tracker
[Expired for neutron because there has been no activity for 60 days.]

** Changed in: neutron
   Status: Incomplete => Expired

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1456722

Title:
  StrongSwan and dynamic peer: Resolv of host failed

Status in neutron:
  Expired

Bug description:
  When adding an IPSEC Site to Site connection with peer fqdn, resolving
  the peer fqdn fails for strongswan. As neutron relies on ip net
  namespaces, the resolv.conf from the neutron node is not used by
  StrongSwan.

  Usually applications that work in ip netns try to use the resolv.conf
  in the net namespace's etc dir and try /etc/ when they cannot find the
  specified file, but it seems strongswan does not follow this
  procedure.

  I added resolv.conf to the template directory of strongswan and
  changed strongswan_ipsec.py:

  - added to strongswan_opts array:

  cfg.StrOpt(
      'resolv_conf_template',
      default=os.path.join(
          TEMPLATE_PATH,
          'template/strongswan/resolv.conf.template'),
      help=_('Template file for resolv configuration.')),

  - added to ensure_configs method:

  self.ensure_config_file(
      'resolv.conf',
      cfg.CONF.strongswan.resolv_conf_template,
      self.vpnservice)

  Sorry - I don't know yet how to commit fixes and I am not even sure if
  that's the correct way :-) But resolv.conf is now added to every net ns
  /etc dir and name resolution is working within strongswan.

  I attached the updated strongswan_ipsec.py

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1456722/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1220316] Re: LBaaS: Deal with update operation implicitly changing providers

2016-09-06 Thread Launchpad Bug Tracker
[Expired for neutron because there has been no activity for 60 days.]

** Changed in: neutron
   Status: Incomplete => Expired

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1220316

Title:
  LBaaS: Deal with update operation implicitly changing providers

Status in neutron:
  Expired

Bug description:
  If the parent pool is changed for a VIP or for a member, it is possible
  that the new pool has a different provider.
  That case needs special handling.

  Found during review https://review.openstack.org/#/c/40381/15

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1220316/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1413056] Re: OVS agent supports arp responder for VLAN

2016-09-06 Thread Launchpad Bug Tracker
[Expired for neutron because there has been no activity for 60 days.]

** Changed in: neutron
   Status: Incomplete => Expired

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1413056

Title:
  OVS agent supports arp responder for VLAN

Status in neutron:
  Expired

Bug description:
  This commit [1] introduces a new agent configuration option,
  "l2pop_network_types". In ofagent, this option is used to enable the
  ARP responder for non-tunnel network types like VLAN, using the l2pop
  information. I think we can also bring this feature to the OVS agent.

  [1] https://review.openstack.org/#/c/112947

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1413056/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1414650] Re: l2pop : fdb_update message sent when modifying the ip of a port which is not ACTIVE

2016-09-06 Thread Launchpad Bug Tracker
[Expired for neutron because there has been no activity for 60 days.]

** Changed in: neutron
   Status: Incomplete => Expired

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1414650

Title:
  l2pop : fdb_update message sent when modifying the ip of a port which
  is not ACTIVE

Status in neutron:
  Expired

Bug description:
  Currently, when one modifies the IP of a port, an fdb message is sent
  even if the port is not ACTIVE.

  This can generate racy behavior, since agents can receive a
  fdb_update(chg_ip) message before they have received a fdb_add message
  for this port. Indeed, the fdb_add message is only sent when the port
  changes its state to ACTIVE.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1414650/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1435971] Re: NoFilterFound for neutron-keepalived-state-change

2016-09-06 Thread Launchpad Bug Tracker
[Expired for neutron because there has been no activity for 60 days.]

** Changed in: neutron
   Status: Incomplete => Expired

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1435971

Title:
  NoFilterFound for neutron-keepalived-state-change

Status in neutron:
  Expired

Bug description:
  When running neutron HA with devstack, neutron-keepalived-state-change
  fails to spawn due to a NoFilterMatched error:
  Traceback (most recent call last):
File "/opt/stack/neutron/neutron/agent/l3/agent.py", line 757, in 
_process_router_update
  self._process_router_if_compatible(router)
File "/opt/stack/neutron/neutron/agent/l3/agent.py", line 705, in 
_process_router_if_compatible
  self._process_added_router(router)
File "/opt/stack/neutron/neutron/agent/l3/agent.py", line 711, in 
_process_added_router
  self._router_added(router['id'], router)
File "/opt/stack/neutron/neutron/agent/l3/agent.py", line 310, in 
_router_added
  self.process_ha_router_added(ri)
File "/opt/stack/neutron/neutron/agent/l3/ha.py", line 169, in 
process_ha_router_added
  ri.spawn_state_change_monitor(self.process_monitor)
File "/opt/stack/neutron/neutron/agent/l3/ha_router.py", line 310, in 
spawn_state_change_monitor
  pm.enable()
File "/opt/stack/neutron/neutron/agent/linux/external_process.py", line 89, 
in enable
  ip_wrapper.netns.execute(cmd, addl_env=self.cmd_addl_env)
File "/opt/stack/neutron/neutron/agent/linux/ip_lib.py", line 576, in 
execute
  extra_ok_codes=extra_ok_codes, **kwargs)
File "/opt/stack/neutron/neutron/agent/linux/utils.py", line 109, in execute
  execute_rootwrap_daemon(cmd, process_input, addl_env))
File "/opt/stack/neutron/neutron/agent/linux/utils.py", line 100, in 
execute_rootwrap_daemon
  return client.execute(cmd, process_input)
File "/usr/local/lib/python2.7/dist-packages/oslo_rootwrap/client.py", line 
135, in execute
  res = proxy.run_one_command(cmd, stdin)
File "", line 2, in run_one_command
File "/usr/lib/python2.7/multiprocessing/managers.py", line 774, in 
_callmethod
  raise convert_to_error(kind, result)
  NoFilterMatched
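
  The usual cause of NoFilterMatched is a missing rootwrap filter for the
  spawned command. A sketch of the kind of entry that would allow it,
  assuming standard oslo.rootwrap CommandFilter syntax (the file and entry
  names are assumptions, not the merged fix):

    [Filters]
    # Hypothetical entry for the state-change monitor command.
    keepalived_state_change: CommandFilter, neutron-keepalived-state-change, root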

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1435971/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1262126] Re: tempest.api.network.admin.test_agent_management.AgentManagementTestXML.test_list_agent[gate, smoke] failed on pg

2016-09-06 Thread Launchpad Bug Tracker
[Expired for neutron because there has been no activity for 60 days.]

** Changed in: neutron
   Status: Incomplete => Expired

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1262126

Title:
  
tempest.api.network.admin.test_agent_management.AgentManagementTestXML.test_list_agent[gate,smoke]
  failed on pg

Status in neutron:
  Expired

Bug description:
  http://logs.openstack.org/03/60203/2/check/check-tempest-dsvm-neutron-
  pg/f4433f2/

  FAIL: 
tempest.api.network.admin.test_agent_management.AgentManagementTestXML.test_list_agent[gate,smoke]
  2013-12-18 06:25:32.632 | 
tempest.api.network.admin.test_agent_management.AgentManagementTestXML.test_list_agent[gate,smoke]
  2013-12-18 06:25:32.633 | 
--
  2013-12-18 06:25:32.633 | _StringException: Empty attachments:
  2013-12-18 06:25:32.633 |   stderr
  2013-12-18 06:25:32.634 |   stdout
  2013-12-18 06:25:32.634 | 
  2013-12-18 06:25:32.634 | pythonlogging:'': {{{
  2013-12-18 06:25:32.635 | 2013-12-18 06:12:50,544 Request: GET 
http://127.0.0.1:9696//v2.0/agents
  2013-12-18 06:25:32.635 | 2013-12-18 06:12:50,544 Request Headers: 
{'Content-Type': 'application/xml', 'Accept': 'application/xml', 
'X-Auth-Token': ''}
  2013-12-18 06:25:32.635 | 2013-12-18 06:12:50,566 Response Status: 200
  2013-12-18 06:25:32.635 | 2013-12-18 06:12:50,567 Response Headers: 
{'content-length': '4146', 'content-location': 
u'http://127.0.0.1:9696//v2.0/agents', 'date': 'Wed, 18 Dec 2013 06:12:50 GMT', 
'content-type': 'application/xml; charset=UTF-8', 'connection': 'close'}
  2013-12-18 06:25:32.636 | 2013-12-18 06:12:50,567 Response Body: [XML
  agent list, garbled in the archive; it contains entries for
  neutron-lbaas-agent, neutron-l3-agent, neutron-dhcp-agent and the Open
  vSwitch agent]

  [The failing assertion reports the Open vSwitch agent entry (host
  'devstack-precise-hpcloud-az2-883606', 'tunneling_ip': '10.7.58.187',
  'l2_population': 'False', 'devices': '3') as not in the expected agent
  list, which holds the neutron-lbaas-agent, neutron-l3-agent and
  neutron-dhcp-agent entries; the remainder of the output is truncated in
  the archive.]

[Yahoo-eng-team] [Bug 1194437] Re: Verify that lbaas entity gets a proper status

2016-09-06 Thread Launchpad Bug Tracker
[Expired for neutron because there has been no activity for 60 days.]

** Changed in: neutron
   Status: Incomplete => Expired

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1194437

Title:
  Verify that lbaas entity gets a proper status

Status in neutron:
  Expired

Bug description:
  We want to verify that the lbaas entity status field comes from a
  closed set of strings.

  This bug is related to
  https://bugs.launchpad.net/quantum/+bug/1155012. Since it looks like
  we are not going to use enum, we need another way of protection.

  Suggested solution:

  import sqlalchemy as sa
  from sqlalchemy.orm import validates

  # The status constants are assumed to be the usual neutron status strings.
  LBAAS_STATUS_SET = ('ACTIVE', 'PENDING_CREATE', 'PENDING_UPDATE',
                      'PENDING_DELETE', 'INACTIVE', 'ERROR')

  class Vip(model_base.BASEV2, models_v2.HasId, models_v2.HasTenant):
      """Represents a v2 quantum loadbalancer vip."""

      status = sa.Column(sa.String(16), nullable=False)

      @validates('status')
      def validate_status(self, key, value):
          if value not in LBAAS_STATUS_SET:
              raise ValueError('invalid status: %s' % value)
          return value

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1194437/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1455005] Re: DVR Cannot open network 'fip' namespace error when updating router's gateway

2016-09-06 Thread Launchpad Bug Tracker
[Expired for neutron because there has been no activity for 60 days.]

** Changed in: neutron
   Status: Incomplete => Expired

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1455005

Title:
  DVR Cannot open network 'fip' namespace error when updating router's
  gateway

Status in neutron:
  Expired

Bug description:
  When updating a distributed router's gateway while one is already set,
  there is an error in the L3 agent's log on the node where 'dvr_snat' is
  running.

  Cannot open network namespace "fip-dc7937bc-2627-422b-
  8c71-6779aa675a81": No such file or directory

  2015-05-14 11:39:15.862 21386 DEBUG neutron.agent.linux.utils [-] Running 
command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 
'netns', 'exec', 'fip-dc7937bc-2627-422b-8c71-6779aa675a81', 'ip', '-o', 
'link', 'show', 'fpr-43e4f718-e'] create_process 
/usr/lib/python2.7/site-packages/neutron/agent/linux/utils.py:84
  2015-05-14 11:39:15.942 21386 DEBUG neutron.agent.linux.utils [-] 
  Command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 
'netns', 'exec', 'fip-dc7937bc-2627-422b-8c71-6779aa675a81', 'ip', '-o', 
'link', 'show', 'fpr-43e4f718-e']
  Exit code: 1
  Stdin: 
  Stdout: 
  Stderr: Cannot open network namespace 
"fip-dc7937bc-2627-422b-8c71-6779aa675a81": No such file or directory
   execute /usr/lib/python2.7/site-packages/neutron/agent/linux/utils.py:134
  2015-05-14 11:39:15.943 21386 DEBUG neutron.agent.l3.router_info [-] No 
Interface for floating IPs router: 43e4f718-e0fc-435a-8144-445aa54eeecc 
process_floating_ip_addresses 
/usr/lib/python2.7/site-packages/neutron/agent/l3/router_info.py:229

  It seems this is not limited to this action; it also happens on
  association of a floating IP.

  Version
  ==
  python-neutron-2015.1.0-1.el7ost.noarch

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1455005/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1373666] Re: Cisco N1KV: Missing tenant id in REST call to controller

2016-09-06 Thread Launchpad Bug Tracker
[Expired for neutron because there has been no activity for 60 days.]

** Changed in: neutron
   Status: Incomplete => Expired

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1373666

Title:
  Cisco N1KV: Missing tenant id in REST call to controller

Status in neutron:
  Expired

Bug description:
  Bug: the tenant id is missing in the create-port REST call to the VSM
  (controller). The fix is to add the missing tenant id parameter to the
  REST call.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1373666/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1370317] Re: l2population_rpc_base.py depends on ovs_neutron_agent

2016-09-06 Thread Launchpad Bug Tracker
[Expired for neutron because there has been no activity for 60 days.]

** Changed in: neutron
   Status: Incomplete => Expired

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1370317

Title:
  l2population_rpc_base.py depends on ovs_neutron_agent

Status in neutron:
  Expired

Bug description:
  The l2pop unit tests (neutron/tests/unit/agent/l2population_rpc_base.py)
  depend on ovs_neutron_agent.
  It is better to make them independent of the OVS agent, because the l2pop
  module is used by other agents such as ofagent.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1370317/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1365636] Re: Port deletion issues in embrane plugin

2016-09-06 Thread Launchpad Bug Tracker
[Expired for neutron because there has been no activity for 60 days.]

** Changed in: neutron
   Status: Incomplete => Expired

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1365636

Title:
  Port deletion issues in embrane plugin

Status in neutron:
  Expired

Bug description:
  After some recent changes in the l3 mixins, the embrane plugin is
  leaking ports because port deletions are not being handled correctly.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1365636/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1360485] Re: Making the servertimeout configurable for nuage plugin via nuage_neutron_plugin.ini config value

2016-09-06 Thread Launchpad Bug Tracker
[Expired for neutron because there has been no activity for 60 days.]

** Changed in: neutron
   Status: Incomplete => Expired

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1360485

Title:
  Making the servertimeout configurable for nuage plugin via
  nuage_neutron_plugin.ini config value

Status in neutron:
  Expired

Bug description:
  Make the server timeout configurable from neutron via a
  nuage_neutron_plugin.ini config value.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1360485/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1351078] Re: Deleting default net-partition on Nuage VSD and restarting neutron does not clean up old net-partition from openstack

2016-09-06 Thread Launchpad Bug Tracker
[Expired for neutron because there has been no activity for 60 days.]

** Changed in: neutron
   Status: Incomplete => Expired

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1351078

Title:
  Deleting default net-partition on Nuage VSD and restarting neutron
  does not clean up old net-partition from openstack

Status in neutron:
  Expired

Bug description:
  Steps to recreate:
  1. Delete the default net-partition on Nuage VSD
  2. Restart neutron (this should recreate the default net-partition and update 
openstack with the new ID)

  -> The new default net-partition is created on the VSD; however, on the
  OpenStack side we still see both the old and the new net-partition, and
  we can no longer delete the old net-partition from OpenStack.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1351078/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1360145] Re: ovs-agent: mod-flow shouldn't be used to add flows

2016-09-06 Thread Launchpad Bug Tracker
[Expired for neutron because there has been no activity for 60 days.]

** Changed in: neutron
   Status: Incomplete => Expired

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1360145

Title:
  ovs-agent: mod-flow shouldn't be used to add flows

Status in neutron:
  Expired

Bug description:
  Although ovs-ofctl mod-flows can be used to add flows, this is not its
  intended use and we should avoid relying on it.

  In the OVS agent, when setting up a tunnel network, the agent uses
  mod-flow to add flows; this should be fixed.

    if network_type in constants.TUNNEL_NETWORK_TYPES:
        if self.enable_tunneling:
            # outbound broadcast/multicast
            ofports = ','.join(self.tun_br_ofports[network_type].values())
            if ofports:
                self.tun_br.mod_flow(table=constants.FLOOD_TO_TUN,
                                     dl_vlan=lvid,
                                     actions="strip_vlan,"
                                     "set_tunnel:%s,output:%s" %
                                     (segmentation_id, ofports))
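
  For comparison, a sketch of the same rule installed with add_flow
  instead of mod_flow, assuming the same bridge API as above:

    if ofports:
        self.tun_br.add_flow(table=constants.FLOOD_TO_TUN,
                             dl_vlan=lvid,
                             actions="strip_vlan,set_tunnel:%s,output:%s" %
                                     (segmentation_id, ofports))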

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1360145/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1427548] Re: SeaMicro plugin decomposition

2016-09-06 Thread Launchpad Bug Tracker
[Expired for neutron because there has been no activity for 60 days.]

** Changed in: neutron
   Status: Incomplete => Expired

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1427548

Title:
  SeaMicro plugin decomposition

Status in neutron:
  Expired

Bug description:
  This is to track the work on SeaMicro plugin decomposition.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1427548/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1410688] Re: Router interface should not be updated to have more ips

2016-09-06 Thread Launchpad Bug Tracker
[Expired for neutron because there has been no activity for 60 days.]

** Changed in: neutron
   Status: Incomplete => Expired

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1410688

Title:
  Router interface should not be updated to have more ips

Status in neutron:
  Expired

Bug description:
  When adding an interface to a router with a port, Neutron checks, and
  if that port has more than one fixed IP, Neutron rejects the request.
  However, we can still update a router port to have more than one fixed
  IP.

  openstack@Openstack-Vega:~$ neutron router-port-list router1
  +--------------------------------------+----------+-------------------+----------------------------------------------------------------------------------+
  | id                                   | name     | mac_address       | fixed_ips                                                                        |
  +--------------------------------------+----------+-------------------+----------------------------------------------------------------------------------+
  | 358c58b8-9b74-4425-b38f-e17a47742488 | testport | fa:16:3e:cb:75:39 | {"subnet_id": "ffd2d8ad-7a27-4e59-b78b-508af54d3cb4", "ip_address": "10.0.0.6"} |
  +--------------------------------------+----------+-------------------+----------------------------------------------------------------------------------+

  openstack@Openstack-Vega:~$ neutron port-update testport --fixed-ips type=dict list=true ip_address=10.0.0.6 ip_address=10.0.0.7
  Updated port: testport

  openstack@Openstack-Vega:~$ neutron router-port-list router1
  +--------------------------------------+----------+-------------------+----------------------------------------------------------------------------------+
  | id                                   | name     | mac_address       | fixed_ips                                                                        |
  +--------------------------------------+----------+-------------------+----------------------------------------------------------------------------------+
  | 358c58b8-9b74-4425-b38f-e17a47742488 | testport | fa:16:3e:cb:75:39 | {"subnet_id": "ffd2d8ad-7a27-4e59-b78b-508af54d3cb4", "ip_address": "10.0.0.6"} |
  |                                      |          |                   | {"subnet_id": "ffd2d8ad-7a27-4e59-b78b-508af54d3cb4", "ip_address": "10.0.0.7"} |
  +--------------------------------------+----------+-------------------+----------------------------------------------------------------------------------+
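
  A sketch of the kind of guard port-update could apply; the names are
  hypothetical and this is not the actual neutron code:

    # Owners whose ports must keep exactly one fixed IP (assumption).
    ROUTER_INTERFACE_OWNERS = ('network:router_interface',)

    def validate_router_port_update(port, new_fixed_ips):
        # Mirror the check done at add-interface time: reject updates
        # that leave a router-interface port with more than one fixed IP.
        if (port['device_owner'] in ROUTER_INTERFACE_OWNERS
                and len(new_fixed_ips) > 1):
            raise ValueError('router interface ports must have exactly '
                             'one fixed IP')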

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1410688/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1374257] Re: LBaaS API accepts invalid parameters

2016-09-06 Thread Launchpad Bug Tracker
[Expired for neutron because there has been no activity for 60 days.]

** Changed in: neutron
   Status: Incomplete => Expired

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1374257

Title:
  LBaaS API accepts invalid parameters

Status in neutron:
  Expired

Bug description:
  The LBaaS API doesn't check the validity of input parameters. Creating
  a pool with an invalid subnet_id, and updating a pool with invalid
  health_monitors, can both succeed. The API should return a BadRequest
  response instead. A sketch of the missing check follows.
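
  A sketch of the missing validation, assuming the plugin can look the
  subnet up via the core plugin (the method name is hypothetical):

    from neutron.common import exceptions as n_exc

    def _validate_pool_subnet(self, context, subnet_id):
        # Look the subnet up before creating the pool so a bogus id
        # yields a 400 instead of a half-created pool.
        try:
            self._core_plugin.get_subnet(context, subnet_id)
        except n_exc.SubnetNotFound:
            raise n_exc.BadRequest(
                resource='pools',
                msg='subnet %s does not exist' % subnet_id)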

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1374257/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1377839] Re: Instances won't obtain IPv6 address and default gateway when using stateless DHCPv6 provided by OpenStack

2016-09-06 Thread Launchpad Bug Tracker
[Expired for neutron because there has been no activity for 60 days.]

** Changed in: neutron
   Status: Incomplete => Expired

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1377839

Title:
  Instances won't obtain IPv6 address and default gateway when using
  stateless DHCPv6 provided by OpenStack

Status in neutron:
  Expired

Bug description:
  Description of problem:
  ===
  I created an IPv6 subnet with:
  1. ipv6_ra_mode: dhcpv6-stateless
  2. ipv6_address_mode: dhcpv6-stateless

  Version-Release number of selected component (if applicable):
  =
  openstack-neutron-2014.2-0.7.b3

  How reproducible:
  =
  100%

  Steps to Reproduce:
  ===
  1. create a neutron network
  2. create an IPv6 subnet:
  # subnet-create  2001:db1:0::2/64 --name internal_ipv6_a_subnet 
--ipv6-ra-mode dhcpv6-stateless --ipv6-address-mode dhcpv6-stateless 
--ip-version 6
  3. boot an instance with that network

  Actual results:
  ===
  1. The instance did not obtain an IPv6 address.
  2. The default gateway is not set.

  Expected results:
  =
  The instance should have an IPv6 address and a default gateway configured.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1377839/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1390920] Re: PortNotFound exception makes the L3 agent full sync fail

2016-09-06 Thread Launchpad Bug Tracker
[Expired for neutron because there has been no activity for 60 days.]

** Changed in: neutron
   Status: Incomplete => Expired

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1390920

Title:
  PortNotFound exception makes the L3 agent full sync fail

Status in neutron:
  Expired

Bug description:
  When the L3 agent is restarting, the get_routers function is called to
  fetch all the routers' information; this is called a full sync.

  When the RPC request is executed on the Neutron server side, the
  following chain of functions is called:
  l3_rpc.py:sync_routers-->l3_dvr_db.py:get_sync_data-->l3_dvr_db.py:get_vm_port_hostid-->self._core_plugin.get_port

  So, if the port behind the floating IP has just been deleted,
  self._core_plugin.get_port will raise an exception.

  The exception is not expected, so the L3 agent is interrupted and has
  to sync all the routers again in the next cycle. A sketch of a more
  tolerant lookup follows.
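
  A sketch of a more tolerant lookup, assuming neutron's PortNotFound
  exception; illustrative only, not the merged fix:

    from neutron.common import exceptions as n_exc

    def get_vm_port_hostid(self, context, port_id):
        # Tolerate ports deleted concurrently instead of aborting the
        # whole sync; returning None lets the caller skip this port.
        try:
            port = self._core_plugin.get_port(context, port_id)
        except n_exc.PortNotFound:
            return None
        return port.get('binding:host_id')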

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1390920/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1392581] Re: ovs_lib should have a function to find br_name from datapath-id

2016-09-06 Thread Launchpad Bug Tracker
[Expired for neutron because there has been no activity for 60 days.]

** Changed in: neutron
   Status: Incomplete => Expired

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1392581

Title:
  ovs_lib should have a function to find br_name from datapath-id

Status in neutron:
  Expired

Bug description:
  It would be better to have a function that finds br_name for a given
  datapath-id.

  Background:
  I want this functionality for
  https://blueprints.launchpad.net/neutron/+spec/ofagent-bridge-setup .
  I could just implement it in ofagent; however, ovs_lib is a better
  place, as it could be useful for other users of OVS. A sketch follows.
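
  A rough sketch of what such a helper could look like; the function name
  and the utils.execute signature are assumptions:

    from neutron.agent.linux import utils

    def get_bridge_name_by_datapath_id(datapath_id, root_helper=None):
        # Walk all OVS bridges and return the one whose datapath_id
        # matches; None if no bridge matches.
        bridges = utils.execute(['ovs-vsctl', 'list-br'],
                                root_helper=root_helper).split()
        for bridge in bridges:
            dpid = utils.execute(
                ['ovs-vsctl', 'get', 'Bridge', bridge, 'datapath_id'],
                root_helper=root_helper).strip().strip('"')
            if dpid == datapath_id:
                return bridge
        return None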

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1392581/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1396387] Re: cisco n1kv: segment allocation table retrieves same segment id on concurrent requests

2016-09-06 Thread Launchpad Bug Tracker
[Expired for neutron because there has been no activity for 60 days.]

** Changed in: neutron
   Status: Incomplete => Expired

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1396387

Title:
  cisco n1kv: segment allocation table retrieves same segment id on
  concurrent requests

Status in neutron:
  Expired

Bug description:
  When concurrent network-creation requests come from multiple tenants,
  neutron allows two network segments to be created with the same BD
  (VLAN id and VXLAN id).

  Expected behaviour: the same BD should not be reused. A sketch of a
  locking fix follows the output below.

  
  [root@macc81f66b8f486 latest_os_sanity(openstack_admin)]# neutron net-show 
164ff80d-4104-4f8c-a610-9ae32bfcab85
  +---+--+
  | Field | Value|
  +---+--+
  | admin_state_up| True |
  | id| 164ff80d-4104-4f8c-a610-9ae32bfcab85 |
  | n1kv:member_segments  |  |
  | n1kv:profile_id   | 1b5212d6-ac5b-4a07-9efe-0f97d0a8cae6 |
  | name  | net-t3-p1-s1 |
  | provider:network_type | vlan |
  | provider:physical_network | p1   |
  | provider:segmentation_id  | 1412 |
  | router:external   | False|
  | shared| False|
  | status| ACTIVE   |
  | subnets   | 4ceded0e-54df-43fd-a229-adc8089ed6e1 |
  | tenant_id | 41c08065e4c247c5b43baff041b20c71 |
  +---+--+
  [root@macc81f66b8f486 latest_os_sanity(openstack_admin)]# neutron net-show 
128d1345-f152-4c18-9005-0fd119c263db
  +---+--+
  | Field | Value|
  +---+--+
  | admin_state_up| True |
  | id| 128d1345-f152-4c18-9005-0fd119c263db |
  | n1kv:member_segments  |  |
  | n1kv:profile_id   | 1b5212d6-ac5b-4a07-9efe-0f97d0a8cae6 |
  | name  | net-t2-p1-s1 |
  | provider:network_type | vlan |
  | provider:physical_network | p1   |
  | provider:segmentation_id  | 1412 |
  | router:external   | False|
  | shared| False|
  | status| ACTIVE   |
  | subnets   | 72ee84b5-fb4e-43c0-85a4-dccc002b7ff9 |
  | tenant_id | d31bcc1cc614426e888f02506c7d6197 |
  +---+--+
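
  A sketch of a locking fix, assuming a SQLAlchemy allocation table along
  the lines the N1KV plugin uses (model and column names are assumptions):

    # Serialize allocation with a row lock so two concurrent tenants
    # cannot both be handed segment 1412.
    def allocate_segment(session, allocation_model):
        with session.begin(subtransactions=True):
            alloc = (session.query(allocation_model)
                     .filter_by(allocated=False)
                     .with_for_update()
                     .first())
            if alloc is None:
                raise RuntimeError('no free segment')
            alloc.allocated = True
            return alloc.segmentation_id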

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1396387/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1399114] Re: when deleting the lb vip, the tap device is not deleted

2016-09-06 Thread Launchpad Bug Tracker
[Expired for neutron because there has been no activity for 60 days.]

** Changed in: neutron
   Status: Incomplete => Expired

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1399114

Title:
  when deleting the lb vip, the tap device is not deleted

Status in neutron:
  Expired

Bug description:
  Hi all,
  When I delete an LB VIP that is in ERROR status, the tap device in the
  LBaaS namespace is not deleted. So when I add a new VIP using the same
  IP address, it cannot be accessed because of the IP conflict.

  My neutron version is Icehouse.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1399114/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1404848] Re: tempest.scenario.test_snapshot_pattern.TestSnapshotPattern.test_snapshot_pattern failing on Jenkins

2016-09-06 Thread Launchpad Bug Tracker
[Expired for neutron because there has been no activity for 60 days.]

** Changed in: neutron
   Status: Incomplete => Expired

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1404848

Title:
  
tempest.scenario.test_snapshot_pattern.TestSnapshotPattern.test_snapshot_pattern
  failing on Jenkins

Status in neutron:
  Expired
Status in OpenStack Compute (nova):
  Invalid

Bug description:
  
tempest.scenario.test_snapshot_pattern.TestSnapshotPattern.test_snapshot_pattern
  failing on Jenkins.

  Here's the traceback:

  2014-12-22 10:19:41.732 | 
  2014-12-22 10:19:41.732 | Traceback (most recent call last):
  2014-12-22 10:19:41.732 |   File "tempest/test.py", line 112, in wrapper
  2014-12-22 10:19:41.732 | return f(self, *func_args, **func_kwargs)
  2014-12-22 10:19:41.732 |   File 
"tempest/scenario/test_snapshot_pattern.py", line 69, in test_snapshot_pattern
  2014-12-22 10:19:41.732 | server = 
self._boot_image(CONF.compute.image_ref)
  2014-12-22 10:19:41.732 |   File 
"tempest/scenario/test_snapshot_pattern.py", line 45, in _boot_image
  2014-12-22 10:19:41.733 | return self.create_server(image=image_id, 
create_kwargs=create_kwargs)
  2014-12-22 10:19:41.733 |   File "tempest/scenario/manager.py", line 209, 
in create_server
  2014-12-22 10:19:41.733 | status='ACTIVE')
  2014-12-22 10:19:41.733 |   File 
"tempest/services/compute/json/servers_client.py", line 183, in 
wait_for_server_status
  2014-12-22 10:19:41.733 | ready_wait=ready_wait)
  2014-12-22 10:19:41.733 |   File "tempest/common/waiters.py", line 66, in 
wait_for_server_status
  2014-12-22 10:19:41.733 | resp, body = client.get_server(server_id)
  2014-12-22 10:19:41.733 |   File 
"tempest/services/compute/json/servers_client.py", line 142, in get_server
  2014-12-22 10:19:41.733 | resp, body = self.get("servers/%s" % 
str(server_id))
  2014-12-22 10:19:41.733 |   File "tempest/common/rest_client.py", line 
239, in get
  2014-12-22 10:19:41.733 | return self.request('GET', url, 
extra_headers, headers)
  2014-12-22 10:19:41.734 |   File "tempest/common/rest_client.py", line 
450, in request
  2014-12-22 10:19:41.734 | resp, resp_body)
  2014-12-22 10:19:41.734 |   File "tempest/common/rest_client.py", line 
547, in _error_checker
  2014-12-22 10:19:41.734 | raise exceptions.ServerFault(message)
  2014-12-22 10:19:41.734 | ServerFault: Got server fault
  2014-12-22 10:19:41.734 | Details: The server has either erred or is 
incapable of performing the requested operation.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1404848/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1398433] Re: Cisco CSR1kv router plugin occasionally fails to execute REST calls due to DB timeouts

2016-09-06 Thread Launchpad Bug Tracker
[Expired for neutron because there has been no activity for 60 days.]

** Changed in: neutron
   Status: Incomplete => Expired

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1398433

Title:
  Cisco CSR1kv router plugin occasionally fails to execute REST calls
  due to DB timeouts

Status in neutron:
  Expired

Bug description:
  Neutron router API REST calls handled by the Cisco CSR1kv router service
  plugin sometimes fail due to DB lock timeouts.
  This is due to unsuitable code workflows in some parts of the
  implementation that can lead to timeouts while waiting for DB locks.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1398433/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1399760] Re: neutron-service is throwing an excessive amount of warnings

2016-09-06 Thread Launchpad Bug Tracker
[Expired for neutron because there has been no activity for 60 days.]

** Changed in: neutron
   Status: Incomplete => Expired

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1399760

Title:
  neutron-service is throwing an excessive amount of warnings

Status in neutron:
  Expired

Bug description:
  http://logs.openstack.org/58/136158/9/check/check-tempest-dsvm-
  neutron-full/abea49f/logs/screen-q-svc.txt.gz?level=TRACE

  is full of repetitive warnings, and it is unclear whether anything is
  actually going wrong.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1399760/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1400941] Re: [LBaaS V2] Report information on session_persistence on GET calls for /pools/ not working

2016-09-06 Thread Launchpad Bug Tracker
[Expired for neutron because there has been no activity for 60 days.]

** Changed in: neutron
   Status: Incomplete => Expired

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1400941

Title:
  [LBaaS V2] Report information on session_persistence on GET calls for
  /pools/ not working

Status in neutron:
  Expired

Bug description:
  When I make a GET call on ../pools/ , I get back information on all
  the attributes, as shown below. However, the information on
  session_persistence is missing.

  When I originally created the pool, I did not set session_persistence.
  However, the parameter should still be returned with an empty or null
  value, as happens for other attributes (description, members, etc.).

  DEBUG: neutronclient.client
  REQ: curl -i http://10.0.1.9:9696/v2.0/lbaas/pools/b4b5a5ae-f098-4173-97ec-bd5682b4df6a.json
  -X GET -H "X-Auth-Token: bb744f0891594ffcb3b7267b76ffaf44"
  -H "Content-Type: application/json" -H "Accept: application/json"
  -H "User-Agent: python-neutronclient"

  DEBUG: neutronclient.client RESP:200 CaseInsensitiveDict({'date': 'Tue, 09
  Dec 2014 21:35:32 GMT', 'content-length': '279', 'content-type':
  'application/json; charset=UTF-8', 'x-openstack-request-id':
  'req-7d792980-5df5-4380-889f-05c418698d93'}) {"pool": {"lb_algorithm":
  "ROUND_ROBIN", "status": "DEFERRED", "protocol": "HTTP", "description":
  "", "admin_state_up": true, "tenant_id": "b952fe0f90a24ddba97f5872fa0f42e8",
  "healthmonitor_id": null, "members": [], "id":
  "b4b5a5ae-f098-4173-97ec-bd5682b4df6a", "name": "pool1"}}

  +--+--+
  | Field| Value|
  +--+--+
  | admin_state_up   | True |
  | description  |  |
  | healthmonitor_id |  |
  | id   | b4b5a5ae-f098-4173-97ec-bd5682b4df6a |
  | lb_algorithm | ROUND_ROBIN  |
  | members  |  |
  | name | pool1|
  | protocol | HTTP |
  | status   | DEFERRED |
  | tenant_id| b952fe0f90a24ddba97f5872fa0f42e8 |
  +--+--+

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1400941/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1424737] Re: repo split in Kilo broke advanced services API documentation

2016-09-06 Thread Launchpad Bug Tracker
[Expired for neutron because there has been no activity for 60 days.]

** Changed in: neutron
   Status: Incomplete => Expired

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1424737

Title:
  repo split in Kilo broke advanced services API documentation

Status in neutron:
  Expired

Bug description:
  See:
  http://docs.openstack.org/developer/neutron/devref/advanced_services.html

  The pages are empty. This is because the documentation is stored in the
  wrong repository. It probably belongs in the corresponding advanced
  services repos.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1424737/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1416315] Re: delete ip rule in dvr agent failed

2016-09-06 Thread Launchpad Bug Tracker
[Expired for neutron because there has been no activity for 60 days.]

** Changed in: neutron
   Status: Incomplete => Expired

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1416315

Title:
  delete ip rule in dvr agent failed

Status in neutron:
  Expired

Bug description:
  
  When I create a DVR router, I find this error in the L3 agent log:

  2015-01-30 09:04:41.979 26426 DEBUG neutron.agent.linux.utils [-] Running
  command: ['sudo', '/usr/local/bin/neutron-rootwrap',
  '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec',
  'qrouter-bd7efd9a-88bb-4d19-bfe9-d5d24ec16eda', 'ip', 'rule', 'del',
  'priority', 'None'] create_process
  /opt/stack/neutron/neutron/agent/linux/utils.py:46
  2015-01-30 09:04:42.238 26426 ERROR neutron.agent.linux.utils [-]
  Command: ['sudo', '/usr/local/bin/neutron-rootwrap',
  '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec',
  'qrouter-bd7efd9a-88bb-4d19-bfe9-d5d24ec16eda', 'ip', 'rule', 'del',
  'priority', 'None']
  Exit code: 255
  Stdout: ''
  Stderr: 'Error: argument "None" is wrong: preference value is invalid\n\n'
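
  A sketch of the kind of guard that avoids the error, assuming the agent
  holds an IPWrapper for the namespace (names are illustrative):

    def delete_ip_rule(ip_wrapper, priority):
        # Only issue "ip rule del" when a real priority is known, instead
        # of interpolating None into the command line.
        if priority is None:
            return
        ip_wrapper.netns.execute(['ip', 'rule', 'del',
                                  'priority', str(priority)])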

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1416315/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1414232] Re: l3-agent restart fails to remove qrouter namespace

2016-09-06 Thread Launchpad Bug Tracker
[Expired for neutron because there has been no activity for 60 days.]

** Changed in: neutron
   Status: Incomplete => Expired

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1414232

Title:
  l3-agent restart fails to remove qrouter namespace

Status in neutron:
  Expired

Bug description:
  When a router is removed while an l3-agent is stopped and the agent is
  then started again, the qrouter namespace will fail to be destroyed
  because the driver returns a 'Device or resource busy' error. The
  reason for the error is that the metadata proxy is still running in
  the namespace.

  The metadata proxy code has recently been refactored and is no longer
  called in the _destroy_router_namespace() method. In the use case of
  this bug, there is no ri/router object since it has been removed; only
  the namespace remains. The new before_router_removed() method requires
  a router object.

  Changes will be required in both the l3-agent code and the metadata
  proxy service code to resolve this bug.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1414232/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1427054] Re: no way to know what IP spoofing rule is applied

2016-09-06 Thread Launchpad Bug Tracker
[Expired for neutron because there has been no activity for 60 days.]

** Changed in: neutron
   Status: Incomplete => Expired

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1427054

Title:
  no way to know what IP spoofing rule is applied

Status in neutron:
  Expired

Bug description:
  [From the discussion in neutronclient bug 1182629]

  The discussion is that there is no way to confirm or update the ip spoofing 
rules (which are established by neutron implicitly).
  The bug itself was reported about two years ago, and I am not sure we need to 
fix it now.
  I think it is still worth discussing when we consider the next step of the API.

  The following are quoted from neutronclient bug 1182629.
  -

  Robert Collins (lifeless) wrote on 2013-05-23: 
  Sure, I appreciate what the rules do - but the security-group-rule-list is 
showing no details, and the rules that are there are not described usefully. 
The port lists for DHCP in and out for instance, should be shown, but aren't. 
The IP addresses are wildcard for the most part - but not on the ip spoofing 
rule. So I don't understand why they shouldn't be shown in a useful manner.

  Aaron Rosen (arosen) wrote on 2013-05-23: 
  [snip related to the first point]
  The second thing is that in order to use security groups you need ip spoofing 
prevention enabled. The reason for this is that if ip spoofing prevention were 
not enabled, an instance could change its source ip in order to get around a 
security group rule. IMO displaying the ip spoofing rules does us no good.

  Robert Collins (lifeless) wrote on 2013-05-25: 
  [snip related to the first point]
  Secondly, ip spoofing is definitely important - but we can modify the DHCP 
rule like so:
-A quantum-openvswi-oaa210549-d -m mac --mac-source FA:16:3E:7F:4F:76 -s 
0.0.0.0/32 -p udp -m udp --sport 68 --dport 67 -j RETURN
  To be more tight: 0.0.0.0/32 is the address for DHCP requests; only that and 
the assigned address may be used.

  Akihiro Motoki (amotoki) wrote on 2013-06-05: 
  [snip related to the first point]
  Regarding the second point, specifying the source MAC actually changes 
nothing, since a rule preventing source mac spoofing is evaluated before the 
DHCP request allow rule; but it is better to add the source mac since the rules 
become more robust (e.g., we can consider a case where there is no rule for 
source mac spoofing).

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1427054/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1424378] Re: OVS to ml2 migration doesn't handle ports with no value for binding:host_id

2016-09-06 Thread Launchpad Bug Tracker
[Expired for neutron because there has been no activity for 60 days.]

** Changed in: neutron
   Status: Incomplete => Expired

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1424378

Title:
  OVS to ml2 migration doesn't handle ports with no value for
  binding:host_id

Status in neutron:
  Expired

Bug description:
  While working on the upgrade process from Havana to Icehouse we
  noticed that the ml2 migration doesn't handle ports with no value for
  binding:host_id.

  I first came across this issue when I had issues with DHCP ports which
  I was able to work around by deleting them and recreating them by
  restarting the dhcp agent.

  The bigger issue came when we moved on to production where we had
  instances that were created back in Folsom and undergone the
  Folsom->Grizzly->Havana->Icehouse upgrades.

  The network for these instances wasn't restored after the upgrade +
  there were a lot of nova-compute exceptions when trying to reboot
  those instances.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1424378/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1432856] Re: Security groups aren’t network topology aware

2016-09-06 Thread Launchpad Bug Tracker
[Expired for neutron because there has been no activity for 60 days.]

** Changed in: neutron
   Status: Incomplete => Expired

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1432856

Title:
  Security groups aren’t network topology aware

Status in neutron:
  Expired

Bug description:
  Security group rules for a host include all hosts that are members of
  the security group, even though some may be inaccessible because they
  aren’t attached to the same router. This introduces two problems.
  First, it will create unneeded iptables rules on nodes and additional
  work on neutron-server and agent-side. Second, in the case of
  overlapping networks, the rules that result from a host on a
  completely separate network may end up allowing traffic from an
  untrusted host on the same network.

  e.g. Security group SG1 has rules to allow traffic from other members
  of the same group. Members of SG1 include 10.0.0.2 and 10.0.0.3, which
  are on two separate networks with overlapping IPs. The iptables rules
  on 10.0.0.2 will then permit traffic from 10.0.0.3 even though
  10.0.0.3 could be an untrusted node on its own network.

  Workaround: use separate security groups for each network. This will
  significantly decrease the calculation load on neutron-server and
  will also decrease the number of iptables rules on nodes.
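
  As a concrete illustration of the workaround, a minimal sketch using
  python-neutronclient to create one security group per network (credentials,
  URL, and network names are placeholders):

from neutronclient.v2_0 import client

neutron = client.Client(username='admin', password='secret',
                        tenant_name='demo',
                        auth_url='http://controller:5000/v2.0')

# One security group per network, so rules based on group membership
# never leak across networks with overlapping IP ranges.
for net in ('net1', 'net2'):
    neutron.create_security_group(
        {'security_group': {'name': 'sg-%s' % net,
                            'description': 'members of %s only' % net}})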

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1432856/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1432167] Re: ping gateway failed sometimes

2016-09-06 Thread Launchpad Bug Tracker
[Expired for neutron because there has been no activity for 60 days.]

** Changed in: neutron
   Status: Incomplete => Expired

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1432167

Title:
  ping gateway failed sometimes

Status in neutron:
  Expired

Bug description:
   When a user has not used OpenStack for a long time, or after the
  physical host is restarted, it is possible that networking within the
  virtual router stops working: pinging the virtual router's gateway
  fails, and virtual machines lose communication with the outside
  network.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1432167/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1432522] Re: weakref ReferenceError not handled in callback manager

2016-09-06 Thread Launchpad Bug Tracker
[Expired for neutron because there has been no activity for 60 days.]

** Changed in: neutron
   Status: Incomplete => Expired

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1432522

Title:
  weakref ReferenceError not handled in callback manager

Status in neutron:
  Expired

Bug description:
  How to reproduce:
  1. register a callable 
  2. delete it
  3. notify

  Example output:

  2015-03-16 10:39:07.600 ERROR neutron.callbacks.manager 
[req-0e06ab7e-12f4-4807-bbbe-a05d183a54f5 None None] Error during notification 
for 
dragonflow.neutron.services.l3.l3_controller_plugin.ControllerL3ServicePlugin.dvr_vmarp_table_update
 port, after_update
  2015-03-16 10:39:07.600 TRACE neutron.callbacks.manager Traceback (most 
recent call last):
  2015-03-16 10:39:07.600 TRACE neutron.callbacks.manager   File 
"/opt/stack/neutron/neutron/callbacks/manager.py", line 143, in _notify_loop
  2015-03-16 10:39:07.600 TRACE neutron.callbacks.manager 
callback(resource, event, trigger, **kwargs)
  2015-03-16 10:39:07.600 TRACE neutron.callbacks.manager ReferenceError: 
weakly-referenced object no longer exists
  2015-03-16 10:39:07.600 TRACE neutron.callbacks.manager
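
For context, a minimal standalone reproduction of this failure mode (a
simplified sketch, not neutron's actual callback manager): a weakref proxy
whose referent has been collected raises ReferenceError on use, so the notify
loop has to catch it.

import weakref

class Subscriber(object):
    def callback(self, resource, event, trigger, **kwargs):
        print('notified for %s %s' % (resource, event))

def notify(cb):
    # A hardened notify loop catches ReferenceError (and could also
    # deregister the dead callback here).
    try:
        cb.callback('port', 'after_update', None)
    except ReferenceError:
        print('callback target is gone; skipping')

sub = Subscriber()
proxy = weakref.proxy(sub)
notify(proxy)   # works while the subscriber is alive
del sub         # the referent is collected (CPython refcounting)
notify(proxy)   # without the try/except this raises ReferenceError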

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1432522/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1605060] Re: group panel add user table no attribute email

2016-09-06 Thread OpenStack Infra
Reviewed:  https://review.openstack.org/345170
Committed: 
https://git.openstack.org/cgit/openstack/horizon/commit/?id=7bf5cedd0cb543ce6fdd263b1ccd44acf763c2f6
Submitter: Jenkins
Branch:master

commit 7bf5cedd0cb543ce6fdd263b1ccd44acf763c2f6
Author: zhurong 
Date:   Thu Jul 21 13:39:44 2016 +0800

Fix attribute email doesn't exist error in group panel

Fix the group panel's user table giving the "attribute email doesn't exist" 
error.

Change-Id: Ie3c5b2223b6bf44486d6b6feb84e919f1f7d6686
Closes-Bug: #1605060


** Changed in: horizon
   Status: In Progress => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1605060

Title:
  group panel add user table no attribute email

Status in OpenStack Dashboard (Horizon):
  Fix Released

Bug description:
  In the group panel's user table, a user that has no email attribute gives 
the error:
  The attribute email doesn't exist on <User {'self': 
u'http://controller:35357/v3/users/402261c9ada24e48a798e61892c94697'}, 
name=nova, project_id=028de08706f6477b8724b399f8ad07f6>.
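
  The merged fix guards the attribute access; a minimal sketch of the
  pattern (hypothetical helper name, not the literal horizon change):

  def user_email(user):
      # Tolerate keystone v3 users that lack the email attribute.
      return getattr(user, 'email', None) or '-'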

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1605060/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1617221] Re: Add the appropriate keyname in the deprecated decorator

2016-09-06 Thread venkatamahesh
** Changed in: keystone
   Status: In Progress => Won't Fix

** Changed in: keystone
 Assignee: venkatamahesh (venkatamaheshkotha) => (unassigned)

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Identity (keystone).
https://bugs.launchpad.net/bugs/1617221

Title:
  Add the appropriate keyname in the deprecated decorator

Status in OpenStack Identity (keystone):
  Won't Fix

Bug description:
  From the examples in this url
  
https://github.com/openstack/oslo.log/blob/master/oslo_log/versionutils.py#L96-L118.
  We need to give "as_of"

To manage notifications about this bug go to:
https://bugs.launchpad.net/keystone/+bug/1617221/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1620864] [NEW] control size of neutron logs

2016-09-06 Thread Armando Migliaccio
Public bug reported:

From a recent analysis [1] on a master change, it has been noted that
the size of neutron logs is amongst the biggest in an OpenStack
deployment. This bug report tracks the effort to trim down some of the
unnecessary traces that bloat the logs, as this may also affect
operability.

[1] http://paste.openstack.org/show/567259/

** Affects: neutron
 Importance: High
 Assignee: Armando Migliaccio (armando-migliaccio)
 Status: In Progress


** Tags: logging

** Changed in: neutron
   Status: New => Confirmed

** Changed in: neutron
 Assignee: (unassigned) => Armando Migliaccio (armando-migliaccio)

** Changed in: neutron
   Importance: Undecided => High

** Tags added: logging

** Changed in: neutron
Milestone: None => newton-rc1

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1620864

Title:
  control size of neutron logs

Status in neutron:
  In Progress

Bug description:
  From a recent analysis [1] on a master change, it has been noted that
  the size of neutron logs is amongst the biggest in an OpenStack
  deployment. This bug report tracks the effort to trim down some of
  the unnecessary traces that bloat the logs, as this may also affect
  operability.

  [1] http://paste.openstack.org/show/567259/

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1620864/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1620211] Re: neutron.conf.sample contains [agent] section

2016-09-06 Thread Armando Migliaccio
Until we remove neutron-debug, this section must stay to allow the use
of neutron-debug on server nodes.

** Changed in: neutron
   Status: In Progress => Won't Fix

** Changed in: neutron
 Assignee: Thomas Bechtold (toabctl) => (unassigned)

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1620211

Title:
  neutron.conf.sample contains [agent] section

Status in neutron:
  Won't Fix

Bug description:
  The neutron.conf.sample file (generated with tox -egenconfig) contains
  the [agent] section but when starting neutron-server, no options from
  the [agent] section are used. So afaics this section can be removed.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1620211/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1280100] Re: StringIO.StringIO is incompatible for python 3

2016-09-06 Thread Ken'ichi Ohmichi
Tempest side is already fixed with the other commit.

** Changed in: tempest
   Status: In Progress => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1280100

Title:
  StringIO.StringIO is incompatible for python 3

Status in Cinder:
  Fix Released
Status in Glance:
  Fix Released
Status in OpenStack Dashboard (Horizon):
  Fix Released
Status in Ironic:
  Fix Released
Status in neutron:
  Fix Released
Status in OpenStack Compute (nova):
  Invalid
Status in oslo-incubator:
  Fix Released
Status in python-neutronclient:
  Invalid
Status in python-openstackclient:
  Invalid
Status in python-tuskarclient:
  Fix Released
Status in tempest:
  Fix Released
Status in OpenStack DBaaS (Trove):
  Fix Released
Status in tuskar:
  Invalid
Status in tuskar-ui:
  Invalid

Bug description:
  import StringIO
  StringIO.StringIO()

  should be:
  import six
  six.StringIO() or six.BytesIO()

  StringIO works for unicode
  BytesIO works for bytes

  For Python 3 compatibility.
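
  A short usage sketch of the portable replacements (text data goes to
  StringIO, byte data to BytesIO, on both Python 2 and 3):

  import six

  buf_text = six.StringIO()     # unicode text buffer
  buf_text.write(u'hello')
  print(buf_text.getvalue())

  buf_bytes = six.BytesIO()     # raw bytes buffer
  buf_bytes.write(b'hello')
  print(buf_bytes.getvalue())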

To manage notifications about this bug go to:
https://bugs.launchpad.net/cinder/+bug/1280100/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1620845] [NEW] cloud-init-local.service should add Before=NetworkManager

2016-09-06 Thread Bert JW Regeer
Public bug reported:

When using systemd on CentOS 7/RHEL 7 with cloud-init configured to use
ConfigDrive on OpenStack, cloud-init-local needs to have a line added to
its unit file that makes sure it gets to finish before NetworkManager
is started:

Before=NetworkManager.service

NetworkManager wants network.target; currently that means it runs
at the same time as cloud-init-local, so it will start DHCP and
write /etc/resolv.conf while cloud-init-local is parsing the ConfigDrive
and writing the appropriate ifcfg-* files.

This causes a race condition over who gets to modify /etc/resolv.conf.
NetworkManager does not take kindly to that race.

When NetworkManager notices the new ifcfg-* files it will turn off DHCP
and mark the interfaces "unmanaged", however it doesn't notice the
change made to /etc/resolv.conf.

Related to the changes made here:

https://code.launchpad.net/~bregeer-ctl/cloud-init/+git/cloud-
init/+merge/305058

If NetworkManager is started after cloud-init-local, then NetworkManager
will correctly see that /etc/resolv.conf is not to be touched by it.
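
A minimal sketch of how that ordering could be applied without patching the
packaged unit, via a systemd drop-in (the drop-in path is illustrative;
Before= is the directive requested above):

# /etc/systemd/system/cloud-init-local.service.d/10-before-nm.conf
[Unit]
Before=NetworkManager.service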

** Affects: cloud-init
 Importance: Undecided
 Status: New


** Tags: centos rhel

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to cloud-init.
https://bugs.launchpad.net/bugs/1620845

Title:
  cloud-init-local.service should add Before=NetworkManager

Status in cloud-init:
  New

Bug description:
  When using systemd on CentOS 7/RHEL 7 with cloud-init configured to
  use ConfigDrive on OpenStack, cloud-init-local needs to have a line
  added to its unit file that makes sure it gets to finish before
  NetworkManager is started:

  Before=NetworkManager.service

  NetworkManager wants network.target; currently that means it
  runs at the same time as cloud-init-local, so it will start
  DHCP and write /etc/resolv.conf while cloud-init-local is parsing the
  ConfigDrive and writing the appropriate ifcfg-* files.

  This causes a race condition over who gets to modify /etc/resolv.conf.
  NetworkManager does not take kindly to that race.

  When NetworkManager notices the new ifcfg-* files it will turn off
  DHCP and mark the interfaces "unmanaged", however it doesn't notice
  the change made to /etc/resolv.conf.

  Related to the changes made here:

  https://code.launchpad.net/~bregeer-ctl/cloud-init/+git/cloud-
  init/+merge/305058

  If NetworkManager is started after cloud-init-local, then
  NetworkManager will correctly see that  /etc/resolv.conf is not to be
  touched by it.

To manage notifications about this bug go to:
https://bugs.launchpad.net/cloud-init/+bug/1620845/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1620430] Re: Test failures in "horizon" suite not properly detected by tox

2016-09-06 Thread OpenStack Infra
Reviewed:  https://review.openstack.org/365812
Committed: 
https://git.openstack.org/cgit/openstack/horizon/commit/?id=63ddedf69d1d2d90e6da747c9e77ca7255f99429
Submitter: Jenkins
Branch:master

commit 63ddedf69d1d2d90e6da747c9e77ca7255f99429
Author: Richard Jones 
Date:   Tue Sep 6 11:20:44 2016 +1000

Fix error detection in horizon test suite

Also, simplify tox environment by reducing repeated definition of the same
test command across all of the "test suite" environments.

Change-Id: Icbe7558b973dcd1ef50c85cdefc02e165b5bdc7c
Closes-Bug: 1620430


** Changed in: horizon
   Status: In Progress => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1620430

Title:
  Test failures in "horizon" suite not properly detected by tox

Status in OpenStack Dashboard (Horizon):
  Fix Released

Bug description:
  The tox test runner command does not properly detect errors in the
  horizon test suite during a full test run. This is repeatable by
  modifying any horizon test code to fail, and you will see that the
  outcome of the full "tox -e py27" test run is success.

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1620430/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1620841] [NEW] uuid_list validator regression

2016-09-06 Thread Henry Gessau
Public bug reported:

Change https://review.openstack.org/358088 broke validate_uuid_list()

** Affects: neutron
 Importance: Undecided
 Assignee: Henry Gessau (gessau)
 Status: In Progress


** Tags: lib

** Changed in: neutron
 Assignee: (unassigned) => Henry Gessau (gessau)

** Tags added: lib

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1620841

Title:
  uuid_list validator regression

Status in neutron:
  In Progress

Bug description:
  Change https://review.openstack.org/358088 broke validate_uuid_list()

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1620841/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1620835] [NEW] Add timestamp fields for neutron ext resources

2016-09-06 Thread OpenStack Infra
Public bug reported:

https://review.openstack.org/312873
Dear bug triager. This bug was created since a commit was marked with DOCIMPACT.
Your project "openstack/neutron" is set up so that we directly report the 
documentation bugs against it. If this needs changing, the docimpact-group 
option needs to be added for the project. You can ask the OpenStack infra team 
(#openstack-infra on freenode) for help if you need to.

commit 17b88cd4539cd5fa096115b76fd4a21036395360
Author: ZhaoBo 
Date:   Thu May 5 17:16:23 2016 +0800

Add timestamp fields for neutron ext resources

Propose a new extension named "timestamp_ext" to add timestamp to
neutron ext resources like 
router/floatingip/security_group/security_group_rule.

APIImpact
DocImpact: Neutron ext resources now contain 'timestamp' fields like
   'created_at' and 'updated_at'
Implements: blueprint add-neutron-extension-resource-timestamp

Change-Id: I78b00516e31ce83376d37f57299b2229b6fb8fcf

** Affects: neutron
 Importance: Undecided
 Status: New


** Tags: doc neutron

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1620835

Title:
  Add timestamp fields for neutron ext resources

Status in neutron:
  New

Bug description:
  https://review.openstack.org/312873
  Dear bug triager. This bug was created since a commit was marked with 
DOCIMPACT.
  Your project "openstack/neutron" is set up so that we directly report the 
documentation bugs against it. If this needs changing, the docimpact-group 
option needs to be added for the project. You can ask the OpenStack infra team 
(#openstack-infra on freenode) for help if you need to.

  commit 17b88cd4539cd5fa096115b76fd4a21036395360
  Author: ZhaoBo 
  Date:   Thu May 5 17:16:23 2016 +0800

  Add timestamp fields for neutron ext resources
  
  Propose a new extension named "timestamp_ext" to add timestamp to
  neutron ext resources like 
router/floatingip/security_group/security_group_rule.
  
  APIImpact
  DocImpact: Neutron ext resources now contain 'timestamp' fields like
 'created_at' and 'updated_at'
  Implements: blueprint add-neutron-extension-resource-timestamp
  
  Change-Id: I78b00516e31ce83376d37f57299b2229b6fb8fcf

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1620835/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1620833] [NEW] Cannot add member to a project while using db.simple.api

2016-09-06 Thread Timothy Symanczyk
Public bug reported:

1) Fresh devstack install, all origin master
2) Configure glance-api.conf with "data_api = glance.db.simple.api"
3) Configure glance-api.conf with "workers = 1" to avoid bug 1619508
4) Create new image
5) Attempt to add the devstack-default project "alt_demo" as a member to the 
same image
6) Observe 500 client response

timothy_symanczyk@devstack:~/becomes/DEVSTACK$ openstack project list
WARNING: openstackclient.common.utils is deprecated and will be removed after 
Jun 2017. Please use osc_lib.utils
+----------------------------------+--------------------+
| ID                               | Name               |
+----------------------------------+--------------------+
| 1f4fd1f8372b4eec9c06eae601db2fce | demo               |
| 694a9818fe9d4767a4072bd05d0ea14d | invisible_to_admin |
| 81ae90e13f194490885ca952f66ce703 | service            |
| 9b6f0b5e9bbb46af979b11b1c1bf1f26 | admin              |
| ae4b292ddafb43e0bda12417b43c2d0e | alt_demo           |
+----------------------------------+--------------------+
timothy_symanczyk@devstack:~/becomes/DEVSTACK$ openstack image create some_image
WARNING: openstackclient.common.utils is deprecated and will be removed after 
Jun 2017. Please use osc_lib.utils
+------------------+------------------------------------------------------+
| Field            | Value                                                |
+------------------+------------------------------------------------------+
| checksum         | None                                                 |
| container_format | bare                                                 |
| created_at       | 2016-09-06T21:17:57Z                                 |
| disk_format      | raw                                                  |
| file             | /v2/images/85154d8c-6c85-4540-bcde-2ee7ff2ca4c0/file |
| id               | 85154d8c-6c85-4540-bcde-2ee7ff2ca4c0                 |
| min_disk         | 0                                                    |
| min_ram          | 0                                                    |
| name             | some_image                                           |
| owner            | 9b6f0b5e9bbb46af979b11b1c1bf1f26                     |
| protected        | False                                                |
| schema           | /v2/schemas/image                                    |
| size             | None                                                 |
| status           | queued                                               |
| tags             |                                                      |
| updated_at       | 2016-09-06T21:17:57Z                                 |
| virtual_size     | None                                                 |
| visibility       | private                                              |
+------------------+------------------------------------------------------+
timothy_symanczyk@devstack:~/becomes/DEVSTACK$ openstack --debug image add 
project some_image alt_demo
WARNING: openstackclient.common.utils is deprecated and will be removed after 
Jun 2017. Please use osc_lib.utils
START with options: [u'--debug', u'image', u'add', u'project', u'some_image', 
u'alt_demo']
options: Namespace(access_key='', access_secret='***', access_token='***', 
access_token_endpoint='', access_token_type='', aodh_endpoint='', auth_type='', 
auth_url='http://devstack:5000/v3', authorization_code='', cacert=None, 
cert='', client_id='', client_secret='***', cloud='', consumer_key='', 
consumer_secret='***', debug=True, default_domain='default', 
default_domain_id='', default_domain_name='', deferred_help=False, 
discovery_endpoint='', domain_id='', domain_name='', endpoint='', 
identity_provider='', identity_provider_url='', insecure=None, 
inspector_api_version='1', inspector_url=None, interface='', key='', 
log_file=None, old_profile=None, openid_scope='', os_alarming_api_version='2', 
os_application_catalog_api_version='1', os_baremetal_api_version='1.6', 
os_beta_command=False, os_clustering_api_version='1', 
os_compute_api_version='', os_data_processing_api_version='1.1', 
os_data_processing_url='', os_dns_api_version='2', os_identity_api_version='3', 
os_image_api_version='',
  os_key_manager_api_version='1', os_network_api_version='', 
os_object_api_version='', os_orchestration_api_version='1', 
os_policy_api_version='1', os_project_id=None, os_project_name=None, 
os_queues_api_version='1.1', os_search_api_version='1', 
os_translator_api_version='1', os_volume_api_version='', 
os_workflow_api_version='2', passcode='', password='***', profile=None, 
project_domain_id='', project_domain_name='default', project_id='', 
project_name='admin', protocol='', redirect_uri='', region_name='', roles='', 
timing=False, token='***', trust_id='', url='', user_domain_id='', 
user_domain_name='default', user_id='', username='admin', verbose_level=3, 
verify=None)
Auth plugin password selected
auth_config_hook(): {'auth_type': 'password', 

[Yahoo-eng-team] [Bug 1620824] [NEW] Neutron DVR(SNAT) steals FIP traffic

2016-09-06 Thread David-wahlstrom
Public bug reported:

Setup:

We have 40+ compute nodes, all running neutron-l3-agent in DVR mode.  We
also have 1 node running neutron-l3-agent in DVR_SNAT mode.  L2
population is happening with VXFLD
(https://github.com/CumulusNetworks/vxfld).

Steps to reproduce:

After following the setup above, we noticed that traffic going to/from a
floating IP was randomly going out the SNAT namespace (and thus getting
connection resets).  Further investigation showed this was
related/correlated to traffic load, meaning, the more traffic, the more
likely the return path would go out the SNAT namespace instead of back
out the FIP namespace.  After some searching, we found that conntrack
was marking in-transit connections as "new" connections (losing their
state, essentially) and thus the SNAT namespace would see this as new
traffic and setup a new return path.

** Affects: neutron
 Importance: Undecided
 Assignee: David-wahlstrom (david-wahlstrom)
 Status: In Progress

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1620824

Title:
  Neutron DVR(SNAT) steals FIP traffic

Status in neutron:
  In Progress

Bug description:
  Setup:

  We have 40+ compute nodes, all running neutron-l3-agent in DVR mode.
  We also have 1 node running neutron-l3-agent in DVR_SNAT mode.  L2
  population is happening with VXFLD
  (https://github.com/CumulusNetworks/vxfld).

  Steps to reproduce:

  After following the setup above, we noticed that traffic going to/from
  a floating IP was randomly going out the SNAT namespace (and thus
  getting connection resets).  Further investigation showed this was
  related/correlated to traffic load, meaning, the more traffic, the
  more likely the return path would go out the SNAT namespace instead of
  back out the FIP namespace.  After some searching, we found that
  conntrack was marking in-transit connections as "new" connections
  (losing their state, essentially) and thus the SNAT namespace would
  see this as new traffic and setup a new return path.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1620824/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1620722] Re: @property methods in Managers are cached

2016-09-06 Thread OpenStack Infra
Reviewed:  https://review.openstack.org/364562
Committed: 
https://git.openstack.org/cgit/openstack/keystone/commit/?id=83e3c00809b38d26fd43a29fe2581c19731ae57e
Submitter: Jenkins
Branch:master

commit 83e3c00809b38d26fd43a29fe2581c19731ae57e
Author: David Stanek 
Date:   Thu Sep 1 21:28:06 2016 +

Only cache callables in the base manager

The base manager had an issue where if a property was accessed through the
__getattr__ it would be cached.

Closes-Bug: 1620722
Change-Id: Iad7ca87a30fd5fa9f8bc88a0c7f74acca2ae1a56


** Changed in: keystone
   Status: In Progress => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Identity (keystone).
https://bugs.launchpad.net/bugs/1620722

Title:
  @property methods in Managers are cached

Status in OpenStack Identity (keystone):
  Fix Released

Bug description:
  When working on the credential encryption spec, I found that one of
  the @property methods in the implementation was having its value
  cached. A typical @property method should be run every time it is
  accessed. This was not the case in the credential encryption
  implementation because we override the __getattr__ method in our base
  Manager class [0].

  We should modify that method so that @property methods can be used
  when inheriting from the common Manager.

  [0]
  
https://github.com/openstack/keystone/blob/b47f10290ed83415149f3d2ab6b0dc64646e578a/keystone/common/manager.py#L185-L189
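
  For illustration, a stripped-down sketch of the delegation pattern and
  of the merged fix of caching only callables (a simplification, not
  keystone's literal code):

  class Manager(object):
      def __init__(self, driver):
          self.driver = driver

      def __getattr__(self, name):
          # Delegate missing attributes to the driver. Caching
          # unconditionally would freeze @property values on first
          # access; caching only callables lets properties be
          # re-evaluated on every lookup.
          attr = getattr(self.driver, name)
          if callable(attr):
              setattr(self, name, attr)
          return attr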

To manage notifications about this bug go to:
https://bugs.launchpad.net/keystone/+bug/1620722/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1620807] [NEW] resolv.conf on CentOS 7/RHEL 7 is NetworkManager managed

2016-09-06 Thread Bert JW Regeer
Public bug reported:

Related to https://bugs.launchpad.net/cloud-init/+bug/1620796 I have
found that even when I have fixed cloud-init to properly write
resolv.conf it gets overwritten as soon as network is started because
NetworkManager takes over control:

/etc/resolv.conf

# Generated by NetworkManager
search novalocal


# No nameservers found; try putting DNS servers into your
# ifcfg files in /etc/sysconfig/network-scripts like so:
#
# DNS1=xxx.xxx.xxx.xxx
# DNS2=xxx.xxx.xxx.xxx
# DOMAIN=lab.foo.com bar.foo.com

Instead, any DNS entries should be added directly to the generated
ifcfg-eth* files, letting NM figure out what DNS resolvers it wants
to add to /etc/resolv.conf.

** Affects: cloud-init
 Importance: Undecided
 Status: New


** Tags: centos rhel

** Tags added: rhel

** Tags added: centos

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to cloud-init.
https://bugs.launchpad.net/bugs/1620807

Title:
  resolv.conf on CentOS 7/RHEL 7 is NetworkManager managed

Status in cloud-init:
  New

Bug description:
  Related to https://bugs.launchpad.net/cloud-init/+bug/1620796 I have
  found that even when I have fixed cloud-init to properly write
  resolv.conf it gets overwritten as soon as network is started because
  NetworkManager takes over control:

  /etc/resolv.conf

  # Generated by NetworkManager
  search novalocal

  
  # No nameservers found; try putting DNS servers into your
  # ifcfg files in /etc/sysconfig/network-scripts like so:
  #
  # DNS1=xxx.xxx.xxx.xxx
  # DNS2=xxx.xxx.xxx.xxx
  # DOMAIN=lab.foo.com bar.foo.com

  Instead, any DNS entries should be added directly to the generated
  ifcfg-eth* files, letting NM figure out what DNS resolvers it wants
  to add to /etc/resolv.conf.

To manage notifications about this bug go to:
https://bugs.launchpad.net/cloud-init/+bug/1620807/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1620796] [NEW] Don't fail if attempting to add more resolvers

2016-09-06 Thread Bert JW Regeer
Public bug reported:

I have multiple network interfaces in OpenStack. Each defines 2
resolvers, so together they exceed the maximum of 3, and on RHEL that
leads to the following traceback:

Sep 06 13:26:03 testing02.novalocal cloud-init[362]: [CLOUDINIT] 
util.py[DEBUG]: Read 140 bytes from /etc/resolv.conf
Sep 06 13:26:03 testing02.novalocal cloud-init[362]: [CLOUDINIT] 
util.py[WARNING]: failed stage init-local
Sep 06 13:26:03 testing02.novalocal cloud-init[362]: 2016-09-06 13:26:03,786 - 
util.py[WARNING]: failed stage init-local
Sep 06 13:26:03 testing02.novalocal cloud-init[362]: [CLOUDINIT] 
util.py[DEBUG]: failed stage init-local
Sep 06 13:26:03 testing02.novalocal cloud-init[362]: failed run of stage 
init-local
Sep 06 13:26:03 testing02.novalocal cloud-init[362]: 

Sep 06 13:26:03 testing02.novalocal cloud-init[362]: Traceback (most recent 
call last):
Sep 06 13:26:03 testing02.novalocal cloud-init[362]: File 
"/usr/lib/python2.7/site-packages/cloud_init-0.7.7-py2.7.egg/cloudinit/cmd/main.py",
 line 530, in status_wrapper
Sep 06 13:26:03 testing02.novalocal cloud-init[362]: ret = functor(name, args)
Sep 06 13:26:03 testing02.novalocal cloud-init[362]: File 
"/usr/lib/python2.7/site-packages/cloud_init-0.7.7-py2.7.egg/cloudinit/cmd/main.py",
 line 277, in main_init
Sep 06 13:26:03 testing02.novalocal cloud-init[362]: 
init.apply_network_config(bring_up=bool(mode != sources.DSMODE_LOCAL))
Sep 06 13:26:03 testing02.novalocal cloud-init[362]: File 
"/usr/lib/python2.7/site-packages/cloud_init-0.7.7-py2.7.egg/cloudinit/stages.py",
 line 652, in apply_network_config
Sep 06 13:26:03 testing02.novalocal cloud-init[362]: return 
self.distro.apply_network_config(netcfg, bring_up=bring_up)
Sep 06 13:26:03 testing02.novalocal cloud-init[362]: File 
"/usr/lib/python2.7/site-packages/cloud_init-0.7.7-py2.7.egg/cloudinit/distros/__init__.py",
 line 162, in apply_network_config
Sep 06 13:26:03 testing02.novalocal cloud-init[362]: dev_names = 
self._write_network_config(netconfig)
Sep 06 13:26:03 testing02.novalocal cloud-init[362]: File 
"/usr/lib/python2.7/site-packages/cloud_init-0.7.7-py2.7.egg/cloudinit/distros/rhel.py",
 line 71, in _write_network_config
Sep 06 13:26:03 testing02.novalocal cloud-init[362]: 
self._net_renderer.render_network_state("/", ns)
Sep 06 13:26:03 testing02.novalocal cloud-init[362]: File 
"/usr/lib/python2.7/site-packages/cloud_init-0.7.7-py2.7.egg/cloudinit/net/sysconfig.py",
 line 395, in render_network_state
Sep 06 13:26:03 testing02.novalocal cloud-init[362]: existing_dns_path=dns_path)
Sep 06 13:26:03 testing02.novalocal cloud-init[362]: File 
"/usr/lib/python2.7/site-packages/cloud_init-0.7.7-py2.7.egg/cloudinit/net/sysconfig.py",
 line 338, in _render_dns
Sep 06 13:26:03 testing02.novalocal cloud-init[362]: 
content.add_nameserver(nameserver)
Sep 06 13:26:03 testing02.novalocal cloud-init[362]: File 
"/usr/lib/python2.7/site-packages/cloud_init-0.7.7-py2.7.egg/cloudinit/distros/parsers/resolv_conf.py",
 line 96, in add_nameserver
Sep 06 13:26:03 testing02.novalocal cloud-init[362]: "'3' maximum name 
servers") % (ns))
Sep 06 13:26:03 testing02.novalocal cloud-init[362]: ValueError: Adding 
u'172.17.48.4' would go beyond the '3' maximum name servers
Sep 06 13:26:03 testing02.novalocal cloud-init[362]: 

Sep 06 13:26:03 testing02.novalocal cloud-init[362]: [CLOUDINIT] 
util.py[DEBUG]: Reading from /proc/uptime (quiet=False)
Sep 06 13:26:03 testing02.novalocal cloud-init[362]: [CLOUDINIT] 
util.py[DEBUG]: Read 10 bytes from /proc/uptime
Sep 06 13:26:03 testing02.novalocal cloud-init[362]: [CLOUDINIT] 
util.py[DEBUG]: cloud-init mode 'init' took 0.509 seconds (0.51)
Sep 06 13:26:03 testing02.novalocal cloud-init[362]: [CLOUDINIT] 
handlers.py[DEBUG]: finish: init-local: SUCCESS: searching for local datasources
Sep 06 13:26:03 testing02.novalocal systemd[1]: cloud-init-local.service: main 
process exited, code=exited, status=1/FAILURE
Sep 06 13:26:03 testing02.novalocal systemd[1]: Failed to start Initial 
cloud-init job (pre-networking).
Sep 06 13:26:03 testing02.novalocal systemd[1]: Unit cloud-init-local.service 
entered failed state.
Sep 06 13:26:03 testing02.novalocal systemd[1]: cloud-init-local.service failed.

On Ubuntu the name servers are added to
/etc/network/interfaces.d/50-cloud-init.cfg and resolvconf deals with
setting the appropriate number.

I believe that instead of raising a ValueError it should just accept
the first three it receives. There is currently no change I can make in
my cloud to limit the number of resolvers sent down, because not all
VMs are on both networks, and having a minimum of two DNS servers is
still best practice.
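
A minimal sketch of the tolerant behaviour being requested (an illustrative
stand-in, not an actual patch to cloudinit/distros/parsers/resolv_conf.py;
the first three addresses below are placeholders):

MAX_NAMESERVERS = 3  # the limit glibc honours in resolv.conf

def add_nameserver(nameservers, ns, maxns=MAX_NAMESERVERS):
    # Keep only the first maxns entries instead of raising ValueError
    # and failing the whole init-local stage.
    if ns not in nameservers and len(nameservers) < maxns:
        nameservers.append(ns)
    return nameservers

servers = []
for ns in ('172.17.48.2', '172.17.48.3', '10.0.0.2', '172.17.48.4'):
    add_nameserver(servers, ns)
print(servers)   # only the first three are kept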

** Affects: cloud-init
 Importance: Undecided
 Status: New


** Tags: centos rhel

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to 

[Yahoo-eng-team] [Bug 1587683] Re: Remove admin role name 'admin' hardcode

2016-09-06 Thread OpenStack Infra
Reviewed:  https://review.openstack.org/364810
Committed: 
https://git.openstack.org/cgit/openstack/horizon/commit/?id=deb4cc1268d172ec4ed5b1c29f14156b7187902d
Submitter: Jenkins
Branch:master

commit deb4cc1268d172ec4ed5b1c29f14156b7187902d
Author: Paul Karikh 
Date:   Fri Sep 2 12:54:37 2016 +0300

Add releasenotes for bug #1161144

Change-Id: Iea2e293069c57bef03d0ab12e504597bcc7e3654
Closes-Bug: #1587683


** Changed in: horizon
   Status: In Progress => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1587683

Title:
  Remove admin role name 'admin' hardcode

Status in OpenStack Dashboard (Horizon):
  Fix Released

Bug description:
  https://review.openstack.org/123741
  Dear bug triager. This bug was created since a commit was marked with 
DOCIMPACT.
  Your project "openstack/horizon" is set up so that we directly report the 
documentation bugs against it. If this needs changing, the docimpact-group 
option needs to be added for the project. You can ask the OpenStack infra team 
(#openstack-infra on freenode) for help if you need to.

  commit ce5fb26bf5f431f0cdaa6860a732338db868a8fb
  Author: Paul Karikh 
  Date:   Tue Sep 30 14:53:21 2014 +0400

  Remove admin role name 'admin' hardcode
  
  Because the role name 'admin' was hardcoded, it was impossible to
  use the administrative panel with a custom administrative role name.
  This fix replaces the hardcoded administrative role name
  with an RBAC policy check.
  
  DocImpact
  Related commit: https://review.openstack.org/#/c/123745/
  Change-Id: I05c8fc750c56f6f6bb49a435662e821eb0d6ba30
  Closes-Bug: #1161144

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1587683/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1593890] Re: Volume type tab shows an error if volume type encryption is disabled

2016-09-06 Thread OpenStack Infra
Reviewed:  https://review.openstack.org/331380
Committed: 
https://git.openstack.org/cgit/openstack/horizon/commit/?id=e8298d90a450f45f01559314042584f320c5bb9e
Submitter: Jenkins
Branch:master

commit e8298d90a450f45f01559314042584f320c5bb9e
Author: Ying Zuo 
Date:   Fri Jun 17 16:50:49 2016 -0700

Check if volume type encryption is enabled before retrieving the data

Change-Id: I4338acf0f22e6639e29a3d2d4e589bdef8a0c10c
Closes-bug: #1593890


** Changed in: horizon
   Status: In Progress => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1593890

Title:
  Volume type tab shows an error if volume type encryption is disabled

Status in OpenStack Dashboard (Horizon):
  Fix Released

Bug description:
  Steps to reproduce:

  1. Disable volume type encryption in cinder_policy.json as follows:
  "volume_extension:volume_type_encryption": "!",
  "volume_extension:volume_encryption_metadata": "!",

  2. Go to the admin -> volumes panel.
  3. Click on the Volume Types tab.
  4. Verify that volume type encryption is disabled by checking that the 
"Create Encryption" action is not available for the volume type.
  5. Note that there's an error saying volume type encryption information 
could not be retrieved.

  The volume types table has an encryption column, and it assumes that
  encryption is always enabled, which is not the case.
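
  A sketch of the guard the fix applies, assuming horizon's policy.check
  helper and its cinder API wrapper (treat the exact names as
  assumptions):

  from openstack_dashboard import api, policy

  def get_encryption(request, volume_type_id):
      # Only query encryption metadata when policy allows it; otherwise
      # the encryption column simply stays empty instead of raising an
      # "unable to retrieve" error.
      if policy.check(
              (("volume", "volume_extension:volume_type_encryption"),),
              request):
          return api.cinder.volume_encryption_type_get(request,
                                                       volume_type_id)
      return None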

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1593890/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1578897] Re: OVS: Add support for IPv6 addresses as tunnel endpoints

2016-09-06 Thread Armando Migliaccio
The proposed fix (https://review.openstack.org/#/c/352027/) merged.

** Changed in: neutron
   Status: Confirmed => Invalid

** Changed in: openstack-manuals
   Status: Confirmed => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1578897

Title:
  OVS: Add support for IPv6 addresses as tunnel endpoints

Status in neutron:
  Invalid
Status in openstack-manuals:
  Fix Released

Bug description:
  https://review.openstack.org/257335
  Dear bug triager. This bug was created since a commit was marked with 
DOCIMPACT.
  Your project "openstack/neutron" is set up so that we directly report the 
documentation bugs against it. If this needs changing, the docimpact-group 
option needs to be added for the project. You can ask the OpenStack infra team 
(#openstack-infra on freenode) for help if you need to.

  commit 773394a1887bec6ab4c2ff0308f0e830e9a9089f
  Author: Frode Nordahl 
  Date:   Mon Dec 14 13:51:48 2015 +0100

  OVS: Add support for IPv6 addresses as tunnel endpoints
  
  Remove IPv4 restriction for local_ip configuration statement.
  
  Check for IP version mismatch of local_ip and remote_ip before creating
  tunnel.
  
  Create hash of remote IPv6 address for OVS interface/port name with the 
least possibility of collisions.
  
  Fix existing tests that fail because of the added check for IP version
  and subsequently valid IP addresses in _setup_tunnel_port.
  
  DocImpact
  
  Change-Id: I9ec137ef8c688b678a0c61f07e9a01382acbeb13
  Closes-Bug: #1525895
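
  For illustration, a minimal sketch of the version-mismatch guard the
  commit describes, using netaddr (not the merged neutron code):

  import netaddr

  def check_tunnel_endpoints(local_ip, remote_ip):
      # Refuse to build a tunnel between endpoints of different IP
      # versions (e.g. an IPv4 local_ip and an IPv6 remote_ip).
      local = netaddr.IPAddress(local_ip)
      remote = netaddr.IPAddress(remote_ip)
      if local.version != remote.version:
          raise ValueError(
              "IP version mismatch: local_ip %s is IPv%d but remote_ip "
              "%s is IPv%d" % (local_ip, local.version,
                               remote_ip, remote.version))

  check_tunnel_endpoints('192.0.2.1', '192.0.2.2')      # ok
  # check_tunnel_endpoints('192.0.2.1', '2001:db8::1')  # raises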

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1578897/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1602403] Re: OVS: Add support for IPv6 addresses as tunnel endpoints

2016-09-06 Thread Armando Migliaccio
Nothing to add to in-tree devref documentation.

** Changed in: neutron
   Status: Confirmed => Invalid

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1602403

Title:
  OVS: Add support for IPv6 addresses as tunnel endpoints

Status in neutron:
  Invalid
Status in openstack-manuals:
  Fix Released

Bug description:
  https://review.openstack.org/318318
  Dear bug triager. This bug was created since a commit was marked with 
DOCIMPACT.
  Your project "openstack/neutron" is set up so that we directly report the 
documentation bugs against it. If this needs changing, the docimpact-group 
option needs to be added for the project. You can ask the OpenStack infra team 
(#openstack-infra on freenode) for help if you need to.

  commit c3892e4df14c151d4f5ce3bf01f26b7807651dd0
  Author: Frode Nordahl 
  Date:   Mon Dec 14 13:51:48 2015 +0100

  OVS: Add support for IPv6 addresses as tunnel endpoints
  
  Remove IPv4 restriction for local_ip configuration statement.
  
  Check for IP version mismatch of local_ip and remote_ip before creating
  tunnel.
  
  Create hash of remote IPv6 address for OVS interface/port name with the 
least possibility of collisions.
  
  Fix existing tests that fail because of the added check for IP version
  and subsequently valid IP addresses in _setup_tunnel_port.
  
  DocImpact
  
  Conflicts:
neutron/tests/common/agents/ovs_agent.py
  
  Change-Id: I9ec137ef8c688b678a0c61f07e9a01382acbeb13
  Closes-Bug: #1525895

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1602403/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1620764] [NEW] migration test fails on table addition

2016-09-06 Thread Alexander Makarov
Public bug reported:

If an expand repo migration adds a table, the corresponding unit test fails
attempting to access the created table with the error "Table does not exist"
[0]

[0] http://logs.openstack.org/88/208488/51/check/gate-keystone-python27
-db-ubuntu-xenial/81311f3/console.html#_2016-09-06_14_27_49_936937

** Affects: keystone
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Identity (keystone).
https://bugs.launchpad.net/bugs/1620764

Title:
  migration test fails on table addition

Status in OpenStack Identity (keystone):
  New

Bug description:
  If an expand repo migration adds a table, the corresponding unit test
  fails attempting to access the created table with the error "Table does
  not exist" [0]

  [0] http://logs.openstack.org/88/208488/51/check/gate-keystone-
  python27-db-ubuntu-
  xenial/81311f3/console.html#_2016-09-06_14_27_49_936937

To manage notifications about this bug go to:
https://bugs.launchpad.net/keystone/+bug/1620764/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1620761] [NEW] test_create_second_image_when_first_image_is_being_saved intermittently times out in teardown in cells v1 job

2016-09-06 Thread Matt Riedemann
Public bug reported:

I've been noticing this failure more often lately:
2016-09-02 17:06:30.570025 | 
tempest.api.compute.images.test_images_oneserver_negative.ImagesOneServerNegativeTestJSON.test_create_second_image_when_first_image_is_being_saved[id-0460efcf-ee88-4f94-acef-1bf658695456,negative]
2016-09-02 17:06:30.570109 | 

2016-09-02 17:06:30.570116 | 
2016-09-02 17:06:30.570128 | Captured traceback:
2016-09-02 17:06:30.570140 | ~~~
2016-09-02 17:06:30.570158 | Traceback (most recent call last):
2016-09-02 17:06:30.570194 |   File 
"tempest/api/compute/images/test_images_oneserver_negative.py", line 38, in 
tearDown
2016-09-02 17:06:30.570211 | self.server_check_teardown()
2016-09-02 17:06:30.570241 |   File "tempest/api/compute/base.py", line 
164, in server_check_teardown
2016-09-02 17:06:30.570267 | cls.server_id, 'ACTIVE')
2016-09-02 17:06:30.570295 |   File "tempest/common/waiters.py", line 95, 
in wait_for_server_status
2016-09-02 17:06:30.570315 | raise exceptions.TimeoutException(message)
2016-09-02 17:06:30.570337 | tempest.exceptions.TimeoutException: Request 
timed out
2016-09-02 17:06:30.570429 | Details: 
(ImagesOneServerNegativeTestJSON:tearDown) Server 
051f6d7d-15b3-459c-a372-902c5da15b40 failed to reach ACTIVE status and task 
state "None" within the required time (196 s). Current status: ACTIVE. Current 
task state: image_snapshot.

There are no clear failures from the nova logs from what I see. I'm also
not sure if we regressed something that is making this failure more
often in the cells v1 job, but cells v1 is inherently racy so I wouldn't
be surprised.

http://logstash.openstack.org/#dashboard/file/logstash.json?query=message%3A%5C%22Details%3A%20(ImagesOneServerNegativeTestJSON%3AtearDown)%20Server%5C%22%20AND%20message%3A%5C%22failed%20to%20reach%20ACTIVE%20status%20and%20task%20state%20%5C%5C%5C%22None%5C%5C%5C%22%20within%20the%20required%20time%5C%22%20AND%20message%3A%5C%22Current%20status%3A%20ACTIVE.%20Current%20task%20state%3A%20image_snapshot.%5C%22%20AND%20build_name%3A%5C
%22gate-tempest-dsvm-cells%5C%22=7d

** Affects: nova
 Importance: Undecided
 Status: Confirmed


** Tags: cells

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1620761

Title:
  test_create_second_image_when_first_image_is_being_saved
  intermittently times out in teardown in cells v1 job

Status in OpenStack Compute (nova):
  Confirmed

Bug description:
  I've been noticing this failure more often lately:
  2016-09-02 17:06:30.570025 | 
tempest.api.compute.images.test_images_oneserver_negative.ImagesOneServerNegativeTestJSON.test_create_second_image_when_first_image_is_being_saved[id-0460efcf-ee88-4f94-acef-1bf658695456,negative]
  2016-09-02 17:06:30.570109 | 

  2016-09-02 17:06:30.570116 | 
  2016-09-02 17:06:30.570128 | Captured traceback:
  2016-09-02 17:06:30.570140 | ~~~
  2016-09-02 17:06:30.570158 | Traceback (most recent call last):
  2016-09-02 17:06:30.570194 |   File 
"tempest/api/compute/images/test_images_oneserver_negative.py", line 38, in 
tearDown
  2016-09-02 17:06:30.570211 | self.server_check_teardown()
  2016-09-02 17:06:30.570241 |   File "tempest/api/compute/base.py", line 
164, in server_check_teardown
  2016-09-02 17:06:30.570267 | cls.server_id, 'ACTIVE')
  2016-09-02 17:06:30.570295 |   File "tempest/common/waiters.py", line 95, 
in wait_for_server_status
  2016-09-02 17:06:30.570315 | raise 
exceptions.TimeoutException(message)
  2016-09-02 17:06:30.570337 | tempest.exceptions.TimeoutException: Request 
timed out
  2016-09-02 17:06:30.570429 | Details: 
(ImagesOneServerNegativeTestJSON:tearDown) Server 
051f6d7d-15b3-459c-a372-902c5da15b40 failed to reach ACTIVE status and task 
state "None" within the required time (196 s). Current status: ACTIVE. Current 
task state: image_snapshot.

  There are no clear failures from the nova logs from what I see. I'm
  also not sure if we regressed something that is making this failure
  more often in the cells v1 job, but cells v1 is inherently racy so I
  wouldn't be surprised.

  

[Yahoo-eng-team] [Bug 1620748] [NEW] In placement when an attempt is made to write to missing inventory the error message is ugly

2016-09-06 Thread Chris Dent
Public bug reported:

The error message from the exception is:

Inventory for 'set([0, 2])' on resource provider 'set(['a7774b97
-838c-4b36-9cda-cfe6cbba0f0f'])' invalid

This is because the data given to the exception has not been stringified
from sets nor turned from resource class ids to resource class strings.
Change needed near here:

https://github.com/openstack/nova/blob/985c7ca4dc15176dc9cccf0ebcabaa18ea98ca2a/nova/objects/resource_provider.py#L715
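
A sketch of the stringification the message needs before interpolation (a
hypothetical helper; the id-to-name mapping for resource classes is assumed):

def format_invalid_inventory(rc_ids, rp_uuids, rc_names_by_id):
    # Render sets of resource class ids and provider uuids as readable
    # strings, e.g. 'DISK_GB, VCPU' instead of set([0, 2]).
    classes = ', '.join(sorted(rc_names_by_id[i] for i in rc_ids))
    providers = ', '.join(sorted(rp_uuids))
    return ("Inventory for '%s' on resource provider '%s' invalid"
            % (classes, providers))

print(format_invalid_inventory(
    {0, 2}, {'a7774b97-838c-4b36-9cda-cfe6cbba0f0f'},
    {0: 'VCPU', 1: 'MEMORY_MB', 2: 'DISK_GB'}))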

** Affects: nova
 Importance: Undecided
 Assignee: Chris Dent (cdent)
 Status: New


** Tags: api placement scheduler

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1620748

Title:
  In placement when an attempt is made to write to missing inventory the
  error message is ugly

Status in OpenStack Compute (nova):
  New

Bug description:
  The error message from the exception is:

  Inventory for 'set([0, 2])' on resource provider 'set(['a7774b97
  -838c-4b36-9cda-cfe6cbba0f0f'])' invalid

  This is because the data given to the exception has not been
  stringified from sets nor turned from resource class ids to resource
  class strings. Change needed near here:

  
https://github.com/openstack/nova/blob/985c7ca4dc15176dc9cccf0ebcabaa18ea98ca2a/nova/objects/resource_provider.py#L715

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1620748/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1620746] [NEW] Dead code and model remain for availability ranges

2016-09-06 Thread Carl Baldwin
Public bug reported:

Availability range models and code are effectively obsolete [1] and should've 
been removed
in a previous patch [2] but some of it was left behind.

[1] https://review.openstack.org/#/c/292207
[2] https://review.openstack.org/#/c/303638

** Affects: neutron
 Importance: Low
 Assignee: Carl Baldwin (carl-baldwin)
 Status: In Progress

** Changed in: neutron
Milestone: None => newton-rc1

** Changed in: neutron
 Assignee: (unassigned) => Carl Baldwin (carl-baldwin)

** Changed in: neutron
   Importance: Undecided => Low

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1620746

Title:
  Dead code and model remain for availability ranges

Status in neutron:
  In Progress

Bug description:
  Availability range models and code are effectively obsolete [1] and should've 
been removed
  in a previous patch [2] but some of it was left behind.

  [1] https://review.openstack.org/#/c/292207
  [2] https://review.openstack.org/#/c/303638

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1620746/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1620736] [NEW] Incorrect SQL in placement API causes spurious InvalidAllocationCapacityExceeded error

2016-09-06 Thread Sean Dague
Public bug reported:

Upstream master

The SQL that joins the allocations to the inventory for
_check_capacity_exceeded was incorrect. It was doing a left outer join
with the allocations which meant we got resource accounting in an NxN
matrix with all inventory.


2016-09-06 12:12:10.978 DEBUG nova.objects.resource_provider 
[req-299c61e7-fc99-4cdc-b633-dc20d5886367 placement service] Allocation Record: 
(1, u'62b97cd4-5dc8-45d9-89ad-988273895635', 324, 0, 4, 0, 16.0, Decimal('0')) 
from (pid=32242) _check_capacity_exceeded 
/opt/stack/nova/nova/objects/resource_provider.py:706
2016-09-06 12:12:10.979 DEBUG nova.objects.resource_provider 
[req-299c61e7-fc99-4cdc-b633-dc20d5886367 placement service] Allocation Record: 
(1, u'62b97cd4-5dc8-45d9-89ad-988273895635', 324, 0, 4, 0, 16.0, Decimal('11')) 
from (pid=32242) _check_capacity_exceeded 
/opt/stack/nova/nova/objects/resource_provider.py:706
2016-09-06 12:12:10.979 DEBUG nova.objects.resource_provider 
[req-299c61e7-fc99-4cdc-b633-dc20d5886367 placement service] Allocation Record: 
(1, u'62b97cd4-5dc8-45d9-89ad-988273895635', 324, 0, 4, 0, 16.0, 
Decimal('704')) from (pid=32242) _check_capacity_exceeded 
/opt/stack/nova/nova/objects/resource_provider.py:706
2016-09-06 12:12:10.980 DEBUG nova.objects.resource_provider 
[req-299c61e7-fc99-4cdc-b633-dc20d5886367 placement service] Allocation Record: 
(1, u'62b97cd4-5dc8-45d9-89ad-988273895635', 324, 1, 15947, 512, 1.5, 
Decimal('0')) from (pid=32242) _check_capacity_exceeded 
/opt/stack/nova/nova/objects/resource_provider.py:706
2016-09-06 12:12:10.980 DEBUG nova.objects.resource_provider 
[req-299c61e7-fc99-4cdc-b633-dc20d5886367 placement service] Allocation Record: 
(1, u'62b97cd4-5dc8-45d9-89ad-988273895635', 324, 1, 15947, 512, 1.5, 
Decimal('11')) from (pid=32242) _check_capacity_exceeded 
/opt/stack/nova/nova/objects/resource_provider.py:706
2016-09-06 12:12:10.981 DEBUG nova.objects.resource_provider 
[req-299c61e7-fc99-4cdc-b633-dc20d5886367 placement service] Allocation Record: 
(1, u'62b97cd4-5dc8-45d9-89ad-988273895635', 324, 1, 15947, 512, 1.5, 
Decimal('704')) from (pid=32242) _check_capacity_exceeded 
/opt/stack/nova/nova/objects/resource_provider.py:706
2016-09-06 12:12:10.981 DEBUG nova.objects.resource_provider 
[req-299c61e7-fc99-4cdc-b633-dc20d5886367 placement service] Allocation Record: 
(1, u'62b97cd4-5dc8-45d9-89ad-988273895635', 324, 2, 218, 0, 1.0, Decimal('0')) 
from (pid=32242) _check_capacity_exceeded 
/opt/stack/nova/nova/objects/resource_provider.py:706
2016-09-06 12:12:10.982 DEBUG nova.objects.resource_provider 
[req-299c61e7-fc99-4cdc-b633-dc20d5886367 placement service] Allocation Record: 
(1, u'62b97cd4-5dc8-45d9-89ad-988273895635', 324, 2, 218, 0, 1.0, 
Decimal('11')) from (pid=32242) _check_capacity_exceeded 
/opt/stack/nova/nova/objects/resource_provider.py:706
2016-09-06 12:12:10.983 DEBUG nova.objects.resource_provider 
[req-299c61e7-fc99-4cdc-b633-dc20d5886367 placement service] Allocation Record: 
(1, u'62b97cd4-5dc8-45d9-89ad-988273895635', 324, 2, 218, 0, 1.0, 
Decimal('704')) from (pid=32242) _check_capacity_exceeded 
/opt/stack/nova/nova/objects/resource_provider.py:706
2016-09-06 12:12:10.983 WARNING nova.objects.resource_provider 
[req-299c61e7-fc99-4cdc-b633-dc20d5886367 placement service] Attempting to 
allocate 1 for VCPU. Currently using 704, amount available 64.0

The Decimal allocation for memory ('704') is reported here against CPU
and Disk resources in addition to Memory. Depending on the order rows
are returned, these get squashed in a dict later, and we end up with the
wrong usage record for CPU.
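
A rough SQLAlchemy sketch of one way to avoid that NxN pairing (my
illustration with stand-in tables, not necessarily the merged fix): sum
usage per resource class first, then LEFT JOIN that subquery back onto
inventory on resource_class_id, so each inventory row only ever sees
its own class's usage. Provider filtering is omitted for brevity:

    import sqlalchemy as sa

    metadata = sa.MetaData()

    # Minimal stand-ins for the placement tables involved.
    inventories = sa.Table(
        'inventories', metadata,
        sa.Column('resource_provider_id', sa.Integer),
        sa.Column('resource_class_id', sa.Integer),
        sa.Column('total', sa.Integer))

    allocations = sa.Table(
        'allocations', metadata,
        sa.Column('resource_provider_id', sa.Integer),
        sa.Column('resource_class_id', sa.Integer),
        sa.Column('used', sa.Integer))

    # Aggregate usage per resource class before joining.
    usage = sa.select([
            allocations.c.resource_class_id,
            sa.func.sum(allocations.c.used).label('used'),
        ]).group_by(allocations.c.resource_class_id).alias('usage')

    # LEFT JOIN on resource_class_id: each inventory row gets at most
    # one usage row, instead of pairing with every allocation row.
    inv_with_usage = sa.select([
            inventories.c.resource_class_id,
            inventories.c.total,
            usage.c.used,
        ]).select_from(sa.outerjoin(
            inventories, usage,
            inventories.c.resource_class_id == usage.c.resource_class_id))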

** Affects: nova
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1620736

Title:
  Incorrect SQL in placement API causes spurious
  InvalidAllocationCapacityExceeded error

Status in OpenStack Compute (nova):
  New

Bug description:
  Upstream master

  The SQL that joins the allocations to the inventory for
  _check_capacity_exceeded was incorrect. It was doing a left outer join
  with the allocations which meant we got resource accounting in an NxN
  matrix with all inventory.

  
  2016-09-06 12:12:10.978 DEBUG nova.objects.resource_provider 
[req-299c61e7-fc99-4cdc-b633-dc20d5886367 placement service] Allocation Record: 
(1, u'62b97cd4-5dc8-45d9-89ad-988273895635', 324, 0, 4, 0, 16.0, Decimal('0')) 
from (pid=32242) _check_capacity_exceeded 
/opt/stack/nova/nova/objects/resource_provider.py:706
  2016-09-06 12:12:10.979 DEBUG nova.objects.resource_provider 
[req-299c61e7-fc99-4cdc-b633-dc20d5886367 placement service] Allocation Record: 
(1, u'62b97cd4-5dc8-45d9-89ad-988273895635', 324, 0, 4, 0, 16.0, Decimal('11')) 
from (pid=32242) _check_capacity_exceeded 

[Yahoo-eng-team] [Bug 1620722] [NEW] @property methods in Managers are cached

2016-09-06 Thread Lance Bragstad
Public bug reported:

When working on the credential encryption spec, I found that one of the
@property methods in the implementation was having its value cached.
Typical @property methods should be run every time they are called. This
was not the case in the credential encryption implementation because we
override the __getattr__ method in our base Manager class [0].

We should modify that method so that @property methods can be used when
inheriting from the common Manager.

[0]
https://github.com/openstack/keystone/blob/b47f10290ed83415149f3d2ab6b0dc64646e578a/keystone/common/manager.py#L185-L189
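
A stripped-down sketch of the caching pattern at issue, with
hypothetical names rather than the actual keystone code:

    class Driver(object):
        def __init__(self):
            self.token_count = 0

    class Manager(object):
        def __init__(self, driver):
            self.driver = driver

        def __getattr__(self, name):
            # Only invoked when normal attribute lookup fails. The
            # setattr() below stores the first result on the instance,
            # so every later access finds the cached value and this
            # method never runs again.
            attr = getattr(self.driver, name)
            setattr(self, name, attr)
            return attr

    driver = Driver()
    manager = Manager(driver)
    print(manager.token_count)  # 0, fetched via __getattr__ and cached
    driver.token_count = 42
    print(manager.token_count)  # still 0: the stale cached copy wins

Anything resolved through that path is frozen after its first access,
which is the opposite of what a @property promises.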

** Affects: keystone
 Importance: Undecided
 Assignee: David Stanek (dstanek)
 Status: In Progress

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Identity (keystone).
https://bugs.launchpad.net/bugs/1620722

Title:
  @property methods in Managers are cached

Status in OpenStack Identity (keystone):
  In Progress

Bug description:
  When working on the credential encryption spec, I found that one of
  the @property methods in the implementation was having its value
  cached. Typical @property methods should be run every time they are
  called. This was not the case in the credential encryption
  implementation because we override the __getattr__ method in our base
  Manager class [0].

  We should modify that method so that @property methods can be used
  when inheriting from the common Manager.

  [0]
  
https://github.com/openstack/keystone/blob/b47f10290ed83415149f3d2ab6b0dc64646e578a/keystone/common/manager.py#L185-L189

To manage notifications about this bug go to:
https://bugs.launchpad.net/keystone/+bug/1620722/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1599936] Re: l2gw provider config prevents *aas provider config from being loaded

2016-09-06 Thread Ihar Hrachyshka
I don't think it's a bug in neutron. devstack should switch to loading
service providers by explicitly passing --config-file to
neutron_*aas.conf files. At which point l2gw will be able to pass their
own file without breaking neutron service provider loading. The patches
for the bug would span devstack and l2gw devstack plugin, but not
neutron.

** Also affects: devstack
   Importance: Undecided
   Status: New

** Changed in: neutron
   Status: In Progress => Invalid

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1599936

Title:
  l2gw provider config prevents *aas provider config from being loaded

Status in devstack:
  New
Status in networking-l2gw:
  New
Status in neutron:
  Invalid

Bug description:
  The networking-l2gw devstack plugin stores its service_providers
  config in /etc/l2gw_plugin.ini and adds a --config-file option for it.
  As a result, neutron-server is invoked like the following:

  /usr/local/bin/neutron-server --config-file /etc/neutron/neutron.conf
  --config-file /etc/neutron/plugins/midonet/midonet.ini --config-file
  /etc/neutron/l2gw_plugin.ini

  This breaks *aas service providers because
  NeutronModule.service_providers finds the l2gw providers in
  cfg.CONF.service_providers.service_provider and thus doesn't look at
  the *aas service_providers config, which is in
  /etc/neutron/neutron_*aas.conf.

To manage notifications about this bug go to:
https://bugs.launchpad.net/devstack/+bug/1599936/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1620692] [NEW] Debug logging in scheduler kills performance

2016-09-06 Thread Ryan Rossiter
Public bug reported:

When using the filter_scheduler with 400 hosts (not a very large number)
and debug logging turned on, scheduling starts taking a very long time.
With debug logging on, select_destinations() can swing anywhere between
3 and 18 seconds; with debug logging off, it takes 0-4 seconds
(http://paste.openstack.org/show/566153/).

The main problem is
https://github.com/openstack/nova/blob/master/nova/scheduler/host_manager.py#L162-L178
because in a 400 host environment, it's trying to log 1600 debug
messages on every instance boot.
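
One mitigation sketch (my illustration, not a committed fix): emit a
single summary record per filter instead of one record per host, so 400
hosts collapse into a handful of lines per scheduling request. The
function name is hypothetical:

    import logging

    LOG = logging.getLogger(__name__)

    def log_filter_result(filter_name, hosts_before, hosts_after):
        # One debug record per filter per request, not one per host.
        if LOG.isEnabledFor(logging.DEBUG):
            LOG.debug("Filter %s reduced hosts from %d to %d",
                      filter_name, len(hosts_before), len(hosts_after))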

** Affects: nova
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1620692

Title:
  Debug logging in scheduler kills performance

Status in OpenStack Compute (nova):
  New

Bug description:
  When using the filter_scheduler with 400 hosts (not a very large
  number) and debug logging turned on, scheduling starts taking a very
  long time. With debug logging on, select_destinations() can swing
  anywhere between 3 and 18 seconds. With debug logging off,
  select_destinations() takes 0-4 seconds
  (http://paste.openstack.org/show/566153/).

  The main problem is
  
https://github.com/openstack/nova/blob/master/nova/scheduler/host_manager.py#L162-L178
  because in a 400 host environment, it's trying to log 1600 debug
  messages on every instance boot.

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1620692/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1620684] [NEW] nova list --status soft_deleted is not showing soft deleted Instances

2016-09-06 Thread Anusha Unnam
Public bug reported:

Steps to reproduce:

1. Set reclaim_instance_interval to a value in nova.conf
2. Boot an instance.
3. Delete the instance (it will be soft-deleted).
4. nova list --status soft_deleted

Expected result:
The soft-deleted instances should be listed while they are still within
the reclaim_instance_interval window.

Actual result:
No instances are displayed.

This bug is reported in the admin context.

Environment:
current master devstack

** Affects: nova
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1620684

Title:
  nova list --status soft_deleted is not showing soft deleted Instances

Status in OpenStack Compute (nova):
  New

Bug description:
  Steps to reproduce:

  1. Set reclaim_instance_interval to a value in nova.conf
  2. Boot an instance.
  3. Delete the instance (it will be soft-deleted).
  4. nova list --status soft_deleted

  Expected result:
  The soft-deleted instances should be listed while they are still
  within the reclaim_instance_interval window.

  Actual result:
  No instances are displayed.

  This bug is reported in the admin context.

  Environment:
  current master devstack

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1620684/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1605804] Re: Instance creation sometimes fails after host aggregate deletion

2016-09-06 Thread OpenStack Infra
Reviewed:  https://review.openstack.org/352344
Committed: 
https://git.openstack.org/cgit/openstack/nova/commit/?id=f0dd4d6bdd286ea155cf55eb62662993577d8892
Submitter: Jenkins
Branch:master

commit f0dd4d6bdd286ea155cf55eb62662993577d8892
Author: Markus Zoeller 
Date:   Mon Aug 8 12:46:43 2016 +0200

Fix corrupt "host_aggregates_map" in host_manager

A host can be in multiple host-aggregates at the same time. When a
host gets removed from an aggregate in thread A and this aggregate
gets deleted in thread B, there can be a race-condition where the
mapping data in the host_manager can get out of sync for a moment.

This change simulates this condition in a unit test and fixes the bug
by iterating over the mapping itself instead of the out-of-date list
"aggregates.hosts".

Closes-Bug: 1605804
Change-Id: I59861f03f0c681f7118782fb017af377e07552aa


** Changed in: nova
   Status: In Progress => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1605804

Title:
  Instance creation sometimes fails after host aggregate deletion

Status in OpenStack Compute (nova):
  Fix Released

Bug description:
  Instance creation starts failing if nova scheduler gets in an inconsistent 
state wrt host aggregates. If remove_host_from_aggregate operation is invoked 
for multiple hosts in quick succession, followed by aggregate deletion, the 
nova scheduler host_manager maps (host_aggregates_map and aggs_by_id) get out 
of sync, as there are some stale references left behind in the 
host_aggregates_map for an aggregate that is deleted from the aggs_by_id map. 
  This is because it cleans up state based on aggregate.hosts, which is
  empty when the aggregate is deleted, while the prior aggregate updates
  that removed individual hosts may have added an incorrect list of
  hosts to the host_aggregates_map.
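
  A sketch of the idea behind the eventual fix, with hypothetical class
  and method names: scrub the deleted aggregate from the scheduler's own
  mapping instead of trusting the already-emptied aggregate.hosts list.

      class HostManagerState(object):
          """Hypothetical stand-in for the scheduler's aggregate state."""

          def __init__(self):
              self.host_aggregates_map = {}  # host -> set of aggregate ids
              self.aggs_by_id = {}           # aggregate id -> aggregate

          def delete_aggregate(self, aggregate_id):
              # Iterate the mapping itself; aggregate.hosts may already
              # be empty by the time the delete arrives.
              for agg_ids in self.host_aggregates_map.values():
                  agg_ids.discard(aggregate_id)
              self.aggs_by_id.pop(aggregate_id, None)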

  Instance creation fails with below error once scheduler gets in this state:
  2016-07-21 18:20:16.780 15692 ERROR oslo_messaging.rpc.dispatcher 
[req-7f29701b-0272-444c-8650-a1035777e642 d2c755daa21e451e86c1d2b5be705aa2 
0546d7f9c747456aa0ffb306cfe5627d - - -] Exception during message handling: 1
  2016-07-21 18:20:16.780 15692 ERROR oslo_messaging.rpc.dispatcher Traceback 
(most recent call last):
  2016-07-21 18:20:16.780 15692 ERROR oslo_messaging.rpc.dispatcher   File 
"/opt/pf9/nova/lib/python2.7/site-packages/oslo_messaging/rpc/dispatcher.py", 
line 138, in _dispatch_and_reply
  2016-07-21 18:20:16.780 15692 ERROR oslo_messaging.rpc.dispatcher 
incoming.message))
  2016-07-21 18:20:16.780 15692 ERROR oslo_messaging.rpc.dispatcher   File 
"/opt/pf9/nova/lib/python2.7/site-packages/oslo_messaging/rpc/dispatcher.py", 
line 183, in _dispatch
  2016-07-21 18:20:16.780 15692 ERROR oslo_messaging.rpc.dispatcher return 
self._do_dispatch(endpoint, method, ctxt, args)
  2016-07-21 18:20:16.780 15692 ERROR oslo_messaging.rpc.dispatcher   File 
"/opt/pf9/nova/lib/python2.7/site-packages/oslo_messaging/rpc/dispatcher.py", 
line 127, in _do_dispatch
  2016-07-21 18:20:16.780 15692 ERROR oslo_messaging.rpc.dispatcher result 
= func(ctxt, **new_args)
  2016-07-21 18:20:16.780 15692 ERROR oslo_messaging.rpc.dispatcher   File 
"/opt/pf9/nova/lib/python2.7/site-packages/oslo_messaging/rpc/server.py", line 
150, in inner
  2016-07-21 18:20:16.780 15692 ERROR oslo_messaging.rpc.dispatcher return 
func(*args, **kwargs)
  2016-07-21 18:20:16.780 15692 ERROR oslo_messaging.rpc.dispatcher   File 
"/opt/pf9/nova/lib/python2.7/site-packages/nova/scheduler/manager.py", line 84, 
in select_destinations
  2016-07-21 18:20:16.780 15692 ERROR oslo_messaging.rpc.dispatcher 
filter_properties)
  2016-07-21 18:20:16.780 15692 ERROR oslo_messaging.rpc.dispatcher   File 
"/opt/pf9/nova/lib/python2.7/site-packages/nova/scheduler/filter_scheduler.py", 
line 72, in select_destinations
  2016-07-21 18:20:16.780 15692 ERROR oslo_messaging.rpc.dispatcher 
filter_properties)
  2016-07-21 18:20:16.780 15692 ERROR oslo_messaging.rpc.dispatcher   File 
"/opt/pf9/nova/lib/python2.7/site-packages/nova/scheduler/filter_scheduler.py", 
line 164, in _schedule
  2016-07-21 18:20:16.780 15692 ERROR oslo_messaging.rpc.dispatcher hosts = 
self._get_all_host_states(elevated)
  2016-07-21 18:20:16.780 15692 ERROR oslo_messaging.rpc.dispatcher   File 
"/opt/pf9/nova/lib/python2.7/site-packages/nova/scheduler/filter_scheduler.py", 
line 222, in _get_all_host_states
  2016-07-21 18:20:16.780 15692 ERROR oslo_messaging.rpc.dispatcher return 
self.host_manager.get_all_host_states(context)
  2016-07-21 18:20:16.780 15692 ERROR oslo_messaging.rpc.dispatcher   File 
"/opt/pf9/nova/lib/python2.7/site-packages/nova/scheduler/host_manager.py", 
line 585, in get_all_host_states
  2016-07-21 18:20:16.780 15692 ERROR oslo_messaging.rpc.dispatcher 

[Yahoo-eng-team] [Bug 1604662] Re: Bulk creation for security group returns 500 error.

2016-09-06 Thread Reedip
Akihiro San,
As per our earlier discussion, since NeutronClient may be deprecated in
Ocata, there is no point in adding this to NeutronClient.
For OpenStackClient as well, as you said, there is no reason to add it.
And for the SDK, I will create a BP to track bulk create/delete support.
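
For reference, the TypeError in the server trace below suggests the
plugin simply lacks a bulk handler with the signature the API layer
expects; a hypothetical sketch of what one could look like:

    class SecurityGroupPluginSketch(object):
        """Hypothetical mixin; 'security_groups' is the collective
        keyword the API layer passes for bulk creates."""

        def create_security_group_bulk(self, context, security_groups):
            # The bulk body looks like
            # {'security_groups': [{'security_group': {...}}, ...]}.
            return [self.create_security_group(context, item)
                    for item in security_groups['security_groups']]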

** Changed in: python-neutronclient
 Assignee: Reedip (reedip-banerjee) => (unassigned)

** Changed in: python-neutronclient
   Status: New => Invalid

** Changed in: python-openstackclient
 Assignee: Reedip (reedip-banerjee) => (unassigned)

** Changed in: python-openstackclient
   Status: New => Invalid

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1604662

Title:
  Bulk creation for security group returns 500 error.

Status in neutron:
  In Progress
Status in python-neutronclient:
  Invalid
Status in python-openstackclient:
  Invalid

Bug description:
  
  API request
  
  vagrant@ubuntu:~$ curl -i -X POST -H "X-Auth-Token: $TOKEN" 
http://192.168.122.139:9696/v2.0/security-groups -d 
'{"security_groups":[{"security_group":{"name":"hobo1"}}]}'
  HTTP/1.1 500 Internal Server Error
  Content-Type: application/json
  Content-Length: 150
  X-Openstack-Request-Id: req-48d5282e-f0b6-48b8-887c-7aa0c953ee88
  Date: Wed, 20 Jul 2016 03:54:06 GMT

  {"NeutronError": {"message": "Request Failed: internal server error
  while processing your request.", "type": "HTTPInternalServerError",
  "detail": ""}}

  trace in neutron server
  ===
  2016-07-20 12:54:06.234 5351 ERROR neutron.api.v2.resource 
[req-48d5282e-f0b6-48b8-887c-7aa0c953ee88 e01bc3eadeb045edb02fc6b2af4b5d49 
867929bfedca4a719e17a7f3293845de -
   - -] create failed: No details.
  2016-07-20 12:54:06.234 5351 ERROR neutron.api.v2.resource Traceback (most 
recent call last):
  2016-07-20 12:54:06.234 5351 ERROR neutron.api.v2.resource   File 
"/opt/stack/neutron/neutron/api/v2/resource.py", line 79, in resource
  2016-07-20 12:54:06.234 5351 ERROR neutron.api.v2.resource result = 
method(request=request, **args)
  2016-07-20 12:54:06.234 5351 ERROR neutron.api.v2.resource   File 
"/opt/stack/neutron/neutron/api/v2/base.py", line 401, in create
  2016-07-20 12:54:06.234 5351 ERROR neutron.api.v2.resource return 
self._create(request, body, **kwargs)
  2016-07-20 12:54:06.234 5351 ERROR neutron.api.v2.resource   File 
"/usr/local/lib/python2.7/dist-packages/oslo_db/api.py", line 148, in wrapper
  2016-07-20 12:54:06.234 5351 ERROR neutron.api.v2.resource ectxt.value = 
e.inner_exc
  2016-07-20 12:54:06.234 5351 ERROR neutron.api.v2.resource   File 
"/usr/local/lib/python2.7/dist-packages/oslo_utils/excutils.py", line 220, in 
__exit__
  2016-07-20 12:54:06.234 5351 ERROR neutron.api.v2.resource 
self.force_reraise()
  2016-07-20 12:54:06.234 5351 ERROR neutron.api.v2.resource   File 
"/usr/local/lib/python2.7/dist-packages/oslo_utils/excutils.py", line 196, in 
force_reraise
  2016-07-20 12:54:06.234 5351 ERROR neutron.api.v2.resource 
six.reraise(self.type_, self.value, self.tb)
  2016-07-20 12:54:06.234 5351 ERROR neutron.api.v2.resource   File 
"/usr/local/lib/python2.7/dist-packages/oslo_db/api.py", line 138, in wrapper
  2016-07-20 12:54:06.234 5351 ERROR neutron.api.v2.resource return 
f(*args, **kwargs)
  2016-07-20 12:54:06.234 5351 ERROR neutron.api.v2.resource   File 
"/opt/stack/neutron/neutron/api/v2/base.py", line 500, in _create
  2016-07-20 12:54:06.234 5351 ERROR neutron.api.v2.resource objs = 
do_create(body, bulk=True)
  2016-07-20 12:54:06.234 5351 ERROR neutron.api.v2.resource   File 
"/opt/stack/neutron/neutron/api/v2/base.py", line 496, in do_create
  2016-07-20 12:54:06.234 5351 ERROR neutron.api.v2.resource 
request.context, reservation.reservation_id)
  2016-07-20 12:54:06.234 5351 ERROR neutron.api.v2.resource   File 
"/usr/local/lib/python2.7/dist-packages/oslo_utils/excutils.py", line 220, in 
__exit__
  2016-07-20 12:54:06.234 5351 ERROR neutron.api.v2.resource 
self.force_reraise()
  2016-07-20 12:54:06.234 5351 ERROR neutron.api.v2.resource   File 
"/usr/local/lib/python2.7/dist-packages/oslo_utils/excutils.py", line 196, in 
force_reraise
  2016-07-20 12:54:06.234 5351 ERROR neutron.api.v2.resource 
six.reraise(self.type_, self.value, self.tb)
  2016-07-20 12:54:06.234 5351 ERROR neutron.api.v2.resource   File 
"/opt/stack/neutron/neutron/api/v2/base.py", line 489, in do_create
  2016-07-20 12:54:06.234 5351 ERROR neutron.api.v2.resource return 
obj_creator(request.context, **kwargs)
  2016-07-20 12:54:06.234 5351 ERROR neutron.api.v2.resource TypeError: 
create_security_group_bulk() got an unexpected keyword argument 
'security_groups'
  2016-07-20 12:54:06.234 5351 ERROR neutron.api.v2.resource
  2016-07-20 12:54:06.241 5351 INFO neutron.wsgi 
[req-48d5282e-f0b6-48b8-887c-7aa0c953ee88 e01bc3eadeb045edb02fc6b2af4b5d49 

[Yahoo-eng-team] [Bug 1587698] Re: Improve reliability of gate's npm-run-test

2016-09-06 Thread Rob Cresswell
I think we can close this now. It doesn't appear to be a current issue.

** Changed in: horizon
Milestone: None => newton-rc1

** Changed in: horizon
   Status: Confirmed => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1587698

Title:
  Improve reliability of gate's npm-run-test

Status in OpenStack Dashboard (Horizon):
  Fix Released

Bug description:
  The npm-run-test job in the gate has shown itself to be less reliable
  lately.  In particular it tends to hang until the timeout, after
  successfully completing the tests.

  There have been similar failures in the past due to a variety of
  reasons.  Most likely there's a memory problem with the reloading of
  modules over and over again in Chrome.  One factor that had affected
  this was the loading of modules that were too 'high' in the hierarchy.
  For example, instead of using 'horizon.app.core.images' a test would
  just load 'horizon.app.core' which would load ALL dependent modules,
  then have to destroy them.

  Ideally we localize the tests loading only the modules needed.

  Other options for assisting with the tests would be to reduce the
  number of dependencies within app.core, such as moving resource
  registrations out since they are not really needed as part of the core
  registrations (where common features are placed, esp. APIs).

  This bug will remain in effect until npm-run-test appears to be fully
  stabilized.

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1587698/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1619690] Re: request logging in placement api always logs success

2016-09-06 Thread OpenStack Infra
Reviewed:  https://review.openstack.org/365015
Committed: 
https://git.openstack.org/cgit/openstack/nova/commit/?id=345febe3216a9cb3eb16194c8df981a116a4f9d8
Submitter: Jenkins
Branch:master

commit 345febe3216a9cb3eb16194c8df981a116a4f9d8
Author: Chris Dent 
Date:   Fri Sep 2 14:39:31 2016 +

Move placement api request logging to middleware

This change moves the request logging in the placement api to
middleware that is the outermost piece of middleware in the system.

Without this we end up with a situation where some requests which
are not successful appear to be logged with success and other
request do not get logged at all.

By using middleware we assure that the logging of the beginning of
the request and _any_ exit of the request will be logged because the
middleware can be the first and last thing the request interacts
with.

Change-Id: I4215cc69cedae5637102b75e0b54fd26acb1826c
Closes-Bug: #1619690
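
A bare-bones sketch of that middleware approach (hypothetical names; the
real implementation is in the change above): capture the status handed
to start_response and log in a finally block so every exit path,
including exceptions, is recorded.

    import logging

    LOG = logging.getLogger(__name__)

    class RequestLog(object):
        """Outermost WSGI middleware logging request start and any exit."""

        def __init__(self, application):
            self.application = application

        def __call__(self, environ, start_response):
            LOG.debug("starting request: %s %s",
                      environ.get('REQUEST_METHOD'),
                      environ.get('PATH_INFO'))
            captured = {}

            def logging_start_response(status, headers, exc_info=None):
                captured['status'] = status  # real response status line
                return start_response(status, headers, exc_info)

            try:
                return self.application(environ, logging_start_response)
            finally:
                # Runs on success and on exception alike.
                LOG.info("request finished: %s",
                         captured.get('status', 'error'))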


** Changed in: nova
   Status: In Progress => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1619690

Title:
  request logging in placement api always logs success

Status in OpenStack Compute (nova):
  Fix Released

Bug description:
  The request logging in the placement api will always log a status of
  200, even when that's not the case, because it is getting the status
  from
  the wrong place. A possible fix is to raise the logging up a level to
  middleware where it can access the response status more directly
  (after exceptions).

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1619690/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1619675] Re: Glance does not return image MD5

2016-09-06 Thread Brian Curtin
Hey, sorry for not getting back to you quicker. The log shows that on
that particular request, the server's response doesn't include the MD5,
so I think Ian is right that this is more of a Glance problem than it is
an SDK problem. I'm going to add Glance in here so perhaps they can take
a look.

** Also affects: glance
   Importance: Undecided
   Status: New

** Changed in: python-openstacksdk
   Status: New => Invalid

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to Glance.
https://bugs.launchpad.net/bugs/1619675

Title:
  Glance does not return image MD5

Status in Glance:
  New
Status in OpenStack SDK:
  Invalid

Bug description:
  I'm trying to download an OpenStack image from glance using only the
  Openstack Python SDK, but I only get this error:

  Traceback (most recent call last):
File "/home/openstack/discovery/discovery.py", line 222, in 
  main(sys.argv[1:])
File "/home/openstack/discovery/discovery.py", line 117, in main
  image_service.download_image(image)
File "/usr/local/lib/python2.7/dist-packages/openstack/image/v2/_proxy.py", 
line 72, in download_image
  return image.download(self.session)
File "/usr/local/lib/python2.7/dist-packages/openstack/image/v2/image.py", 
line 166, in download
  checksum = resp.headers["Content-MD5"]
File "/usr/local/lib/python2.7/dist-packages/requests/structures.py", line 
54, in __getitem__
  return self._store[key.lower()][1]
  KeyError: 'content-md5'

  The weird part is that if I run the code from an IDE (PyCharm with
  remote debug) or as a script (python script.py -i ...) I get the
  error, but if I run each line in a Python interpreter
  (ipython/python) the error does not happen. I have no idea why.

  Here is the code I'm using:

  ...
  image_name = node.name + "_" + time.strftime("%Y-%m-%d_%H-%M-%S")
  print "Getting data from", node.name
  compute_service.create_server_image(node, image_name)
  image = image_service.find_image(image_name)
  image_service.wait_for_status(image, 'active')
  fileName = "%s.img" % image.name

  with open(str(fileName), 'w+') as imgFile:
  imgFile.write(image.download(conn.image.session))
  ...

  This code ends up calling the API in this file
  /usr/local/lib/python2.7/dist-packages/openstack/image/v2/image.py,
  with this method:

  def download(self, session):
  """Download the data contained in an image"""
  # TODO(briancurtin): This method should probably offload the get
  # operation into another thread or something of that nature.
  url = utils.urljoin(self.base_path, self.id, 'file')
  resp = session.get(url, endpoint_filter=self.service)

  checksum = resp.headers["Content-MD5"]
  digest = hashlib.md5(resp.content).hexdigest()
  if digest != checksum:
  raise exceptions.InvalidResponse("checksum mismatch")

  return resp.content

  The resp.headers variable has no key "Content-MD5". This is the value
  I found for it:

  {'Date': 'Thu, 01 Sep 2016 20:17:01 GMT', 'Transfer-Encoding': 'chunked', 
   'Connection': 'keep-alive', 'Content-Type': 'application/octet-stream', 
   'X-Openstack-Request-Id': 'req-9eb16897-1398-4ab2-9cd4-45706e92819c'}

  But according to the REST API documentation, the response should
  include the Content-MD5 header: http://developer.openstack.org/api-
  ref/image/v2/?expanded=download-binary-image-data-detail

  If I just comment out the MD5 check, the download works fine, but this is
  inside the SDK so I can't/shouldn't change it. Anyone have any
  suggestion on how to achieve this using the OpenStack Python SDK? Is
  this an SDK bug?
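
  As a local workaround (my sketch, not the SDK's actual behaviour), one
  could verify the checksum only when the header is present:

      import hashlib

      def verify_download(resp):
          # Some deployments omit Content-MD5; only verify when present.
          checksum = resp.headers.get("Content-MD5")
          if checksum is not None:
              digest = hashlib.md5(resp.content).hexdigest()
              if digest != checksum:
                  # ValueError stands in for the SDK's InvalidResponse.
                  raise ValueError("checksum mismatch")
          return resp.content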

To manage notifications about this bug go to:
https://bugs.launchpad.net/glance/+bug/1619675/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1605026] Re: Error message is not hiding automatically if we select wrong date in compute's overview page

2016-09-06 Thread Rob Cresswell
This is by design, not a bug. It can be customised in local settings
anyway.

** Changed in: horizon
   Status: In Progress => Won't Fix

** Changed in: horizon
 Assignee: surekha (surekha23) => (unassigned)

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1605026

Title:
  Error message is not hiding automatically if we select wrong date in
  compute's overview page

Status in OpenStack Dashboard (Horizon):
  Won't Fix

Bug description:
  Steps to reproduce:
  Step 1: Click through Horizon -> Project -> Compute -> Overview.
  Step 2: Select a 'from' date later than the 'to' date and click the
  search button.
  Step 3: Navigate to another page and come back to the overview page;
  the error message still persists.

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1605026/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1615763] Re: CommandError: An error occurred during rendering

2016-09-06 Thread Rob Cresswell
'horizon/lib/jquery-ui/ui/jquery-ui.css'

should be

'horizon/lib/jquery-ui/jquery-ui.css'

This is because legacy versions had the 'ui/' directory, but later
versions dropped it. We actually have a workaround in Horizon for this
at
https://github.com/openstack/horizon/blob/348069364cf217217af6436e455ee04587bfd26b/openstack_dashboard/utils/settings.py#L243

My advice would just be to add the original compress line back to your
template, and drop the 'ui/' part. You can see where the files are by
pulling Horizon, running 'python manage.py collectstatic' and viewing
the 'static/' dir in your horizon root.

** Changed in: horizon
   Status: Incomplete => Invalid

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1615763

Title:
  CommandError: An error occurred during rendering

Status in congress:
  New
Status in devstack:
  New
Status in OpenStack Dashboard (Horizon):
  Invalid

Bug description:
  Repeatedly seeing the following error during devstack setup in tempest
  tests in Congress (for example:
  http://logs.openstack.org/57/356157/4/check/gate-congress-dsvm-
  api/70474ca/logs/devstack-early.txt.gz [also attached]). Any idea
  whether it's a horizon bug or devstack bug or user error on Congress
  side? Thanks so much!

  2016-08-22 04:09:07.389 | 1658 static files copied to 
'/opt/stack/new/horizon/static'.
  2016-08-22 04:09:09.017 | Found 'compress' tags in:
  2016-08-22 04:09:09.017 | 
/opt/stack/new/horizon/openstack_dashboard/templates/horizon/_conf.html
  2016-08-22 04:09:09.017 | 
/opt/stack/new/horizon/openstack_dashboard/templates/horizon/_scripts.html
  2016-08-22 04:09:09.017 | 
/opt/stack/new/congress/congress_dashboard/templates/admin/base.html
  2016-08-22 04:09:09.017 | 
/opt/stack/new/horizon/openstack_dashboard/templates/_stylesheets.html
  2016-08-22 04:09:09.017 | 
/opt/stack/new/congress/congress_dashboard/templates/admin/_scripts.html
  2016-08-22 04:09:09.875 | Compressing... CommandError: An error occurred 
during rendering 
/opt/stack/new/congress/congress_dashboard/templates/admin/base.html: 
'horizon/lib/jquery-ui/ui/jquery-ui.css' could not be found in the 
COMPRESS_ROOT '/opt/stack/new/horizon/static' or with staticfiles.
  2016-08-22 04:09:09.957 | exit_trap: cleaning up child processes
  2016-08-22 04:09:09.957 | ./stack.sh: line 486: kill: (15777) - No such 
process+ unset GREP_OPTIONS

To manage notifications about this bug go to:
https://bugs.launchpad.net/congress/+bug/1615763/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1620587] [NEW] ml2_conf.ini contains oslo.log options

2016-09-06 Thread Thomas Bechtold
Public bug reported:

When running neutron-server or one of the agents, neutron.conf is
usually included which already contains the oslo.log options in the
[DEFAULT] section. There's no need to add the options again to the
ml2_conf.ini

** Affects: neutron
 Importance: Undecided
 Assignee: Thomas Bechtold (toabctl)
 Status: In Progress

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1620587

Title:
  ml2_conf.ini contains oslo.log options

Status in neutron:
  In Progress

Bug description:
  When running neutron-server or one of the agents, neutron.conf is
  usually included which already contains the oslo.log options in the
  [DEFAULT] section. There's no need to add the options again to the
  ml2_conf.ini

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1620587/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp

