[Yahoo-eng-team] [Bug 1504465] [NEW] neutron_lbaas.tests.tempest.v2.api.test_health_monitors_non_admin.TestHealthMonitors failed to clean up loadbalancer

2015-10-09 Thread Ihar Hrachyshka
Public bug reported:

http://logs.openstack.org/15/229915/3/gate/gate-neutron-lbaasv2-dsvm-
minimal/5dc60be/logs/testr_results.html.gz

ft1.2: tearDownClass 
(neutron_lbaas.tests.tempest.v2.api.test_health_monitors_non_admin.TestHealthMonitors)_StringException:
 Traceback (most recent call last):
  File "neutron_lbaas/tests/tempest/lib/test.py", line 310, in tearDownClass
six.reraise(etype, value, trace)
  File "neutron_lbaas/tests/tempest/lib/test.py", line 293, in tearDownClass
teardown()
  File "neutron_lbaas/tests/tempest/v2/api/base.py", line 96, in 
resource_cleanup
cls._try_delete_resource(cls._delete_load_balancer, lb_id)
  File "neutron_lbaas/tests/tempest/v1/api/base.py", line 185, in 
_try_delete_resource
delete_callable(*args, **kwargs)
  File "neutron_lbaas/tests/tempest/v2/api/base.py", line 137, in 
_delete_load_balancer
load_balancer_id, delete=True)
  File "neutron_lbaas/tests/tempest/v2/api/base.py", line 160, in 
_wait_for_load_balancer_status
load_balancer_id)
  File "neutron_lbaas/tests/tempest/v2/clients/load_balancers_client.py", line 
42, in get_load_balancer
resp, body = self.get(url)
  File 
"/usr/local/lib/python2.7/dist-packages/tempest_lib/common/rest_client.py", 
line 274, in get
return self.request('GET', url, extra_headers, headers)
  File 
"/usr/local/lib/python2.7/dist-packages/tempest_lib/common/rest_client.py", 
line 646, in request
resp, resp_body)
  File 
"/usr/local/lib/python2.7/dist-packages/tempest_lib/common/rest_client.py", 
line 760, in _error_checker
message=message)
tempest_lib.exceptions.ServerFault: Got server fault
Details: Request Failed: internal server error while processing your request.


Server failure is:

2015-10-08 23:22:56.409 ERROR neutron.api.v2.resource 
[req-fafd7f88-2e1a-41ce-85c1-9dbacc6f1d93 TestHealthMonitors-867400801 
TestHealthMonitors-1196952833] show failed
2015-10-08 23:22:56.409 13383 ERROR neutron.api.v2.resource Traceback (most 
recent call last):
2015-10-08 23:22:56.409 13383 ERROR neutron.api.v2.resource   File 
"/opt/stack/new/neutron/neutron/api/v2/resource.py", line 83, in resource
2015-10-08 23:22:56.409 13383 ERROR neutron.api.v2.resource result = 
method(request=request, **args)
2015-10-08 23:22:56.409 13383 ERROR neutron.api.v2.resource   File 
"/opt/stack/new/neutron/neutron/api/v2/base.py", line 359, in show
2015-10-08 23:22:56.409 13383 ERROR neutron.api.v2.resource 
parent_id=parent_id),
2015-10-08 23:22:56.409 13383 ERROR neutron.api.v2.resource   File 
"/opt/stack/new/neutron/neutron/api/v2/base.py", line 311, in _item
2015-10-08 23:22:56.409 13383 ERROR neutron.api.v2.resource obj = 
obj_getter(request.context, id, **kwargs)
2015-10-08 23:22:56.409 13383 ERROR neutron.api.v2.resource   File 
"/opt/stack/new/neutron-lbaas/neutron_lbaas/services/loadbalancer/plugin.py", 
line 560, in get_loadbalancer
2015-10-08 23:22:56.409 13383 ERROR neutron.api.v2.resource return 
self.db.get_loadbalancer(context, id).to_api_dict()
2015-10-08 23:22:56.409 13383 ERROR neutron.api.v2.resource   File 
"/opt/stack/new/neutron-lbaas/neutron_lbaas/db/loadbalancer/loadbalancer_dbv2.py",
 line 268, in get_loadbalancer
2015-10-08 23:22:56.409 13383 ERROR neutron.api.v2.resource lb_db = 
self._get_resource(context, models.LoadBalancer, id)
2015-10-08 23:22:56.409 13383 ERROR neutron.api.v2.resource   File 
"/opt/stack/new/neutron-lbaas/neutron_lbaas/db/loadbalancer/loadbalancer_dbv2.py",
 line 73, in _get_resource
2015-10-08 23:22:56.409 13383 ERROR neutron.api.v2.resource 
context.session.refresh(resource)
2015-10-08 23:22:56.409 13383 ERROR neutron.api.v2.resource   File 
"/usr/local/lib/python2.7/dist-packages/sqlalchemy/orm/session.py", line 1344, 
in refresh
2015-10-08 23:22:56.409 13383 ERROR neutron.api.v2.resource 
instance_str(instance))
2015-10-08 23:22:56.409 13383 ERROR neutron.api.v2.resource 
InvalidRequestError: Could not refresh instance ''
2015-10-08 23:22:56.409 13383 ERROR neutron.api.v2.resource
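
One way to make this lookup more robust (a hedged sketch only; entity_not_found() below is a made-up placeholder for however the plugin normally maps a missing row to NotFound) would be to treat a failed refresh the same as a missing row, since the trace shows the load balancer being deleted concurrently with the GET:

    from sqlalchemy import exc as sa_exc
    from sqlalchemy.orm import exc as orm_exc

    def _get_resource(context, model, resource_id):
        try:
            resource = context.session.query(model).filter_by(
                id=resource_id).one()
            # refresh() raises InvalidRequestError ("Could not refresh
            # instance") when the row was deleted concurrently, which is
            # what the trace above shows.
            context.session.refresh(resource)
            return resource
        except (orm_exc.NoResultFound, sa_exc.InvalidRequestError):
            # entity_not_found() is a hypothetical stand-in for the
            # plugin's usual "map a missing row to a 404" helper.
            raise entity_not_found(model, resource_id)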

** Affects: neutron
 Importance: Medium
 Status: Confirmed


** Tags: db gate-failure lbaas

** Tags added: db gate-failure lbaas

** Changed in: neutron
   Importance: Undecided => Medium

** Changed in: neutron
   Status: New => Confirmed

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1504465

Title:
  
neutron_lbaas.tests.tempest.v2.api.test_health_monitors_non_admin.TestHealthMonitors
  failed to clean up loadbalancer

Status in neutron:
  Confirmed

Bug description:
  http://logs.openstack.org/15/229915/3/gate/gate-neutron-lbaasv2-dsvm-
  minimal/5dc60be/logs/testr_results.html.gz

  ft1.2: tearDownClass 
(neutron_lbaas.tests.tempest.v2.api.test_health_monitors_non_admin.TestHealthMonitors)_StringException:
 Traceback (most recent call last):
File "neutron_lbaas/tests/tempest/lib/test.py", line 310, in tearDownClass
  

[Yahoo-eng-team] [Bug 1504466] [NEW] horizon can't specify network types for midonet

2015-10-09 Thread YAMAMOTO Takashi
Public bug reported:

Midonet uses network types like "midonet" and "uplink".
There is no way to create networks with those types via horizon.
Ideally horizon should not have the list of types hardcoded
(PROVIDER_TYPES and other places like _clean_segmentation_id).
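
A hypothetical illustration of the "don't hardcode" idea, assuming a Django settings override (the setting name below is made up, not an existing horizon option):

    from django.conf import settings

    # current hardcoded list, kept as the fallback
    DEFAULT_PROVIDER_TYPES = ['local', 'flat', 'vlan', 'gre', 'vxlan']

    # operators could then add "midonet" or "uplink" through settings
    PROVIDER_TYPES = getattr(settings,
                             'OPENSTACK_NEUTRON_PROVIDER_TYPES',
                             DEFAULT_PROVIDER_TYPES)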

** Affects: horizon
 Importance: Undecided
 Assignee: YAMAMOTO Takashi (yamamoto)
 Status: In Progress

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1504466

Title:
  horizon can't specify network types for midonet

Status in OpenStack Dashboard (Horizon):
  In Progress

Bug description:
  Midonet uses network types like "midonet" and "uplink".
  There is no way to create networks with those types via horizon.
  Ideally horizon should not have the list of types hardcoded
  (PROVIDER_TYPES and other places like _clean_segmentation_id).

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1504466/+subscriptions



[Yahoo-eng-team] [Bug 1504477] [NEW] Remove redundant neutron.agent.linux.utils:replace_file()

2015-10-09 Thread Bogdan Tabor
Public bug reported:

neutron.agent.linux.utils:replace_file() and
neutron.common.utils:replace_file() have the same functionality.
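
For context, a minimal sketch of what such a replace_file() helper typically does (not the exact neutron implementation): write to a temporary file in the same directory and atomically rename it over the target.

    import os
    import tempfile

    def replace_file(file_name, data, file_mode=0o644):
        """Atomically replace file_name with the given data."""
        base_dir = os.path.dirname(os.path.abspath(file_name))
        with tempfile.NamedTemporaryFile('w', dir=base_dir,
                                         delete=False) as tmp:
            tmp.write(data)
            tmp_name = tmp.name
        os.chmod(tmp_name, file_mode)
        os.rename(tmp_name, file_name)  # atomic on POSIX filesystems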

** Affects: neutron
 Importance: Undecided
 Assignee: Bogdan Tabor (bogdan-tabor)
 Status: In Progress

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1504477

Title:
  Remove redundant neutron.agent.linux.utils:replace_file()

Status in neutron:
  In Progress

Bug description:
  neutron.agent.linux.utils:replace_file() and
  neutron.common.utils:replace_file() have the same functionality.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1504477/+subscriptions



[Yahoo-eng-team] [Bug 1504534] Re: o.vo master + nova master = unit tests fail

2015-10-09 Thread Davanum Srinivas (DIMS)
Fixed in https://review.openstack.org/#/c/233165/

** Also affects: oslo.versionedobjects
   Importance: Undecided
   Status: New

** Changed in: oslo.versionedobjects
   Status: New => Fix Committed

** Changed in: nova
   Status: New => Invalid

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1504534

Title:
  o.vo master + nova master = unit tests fail

Status in OpenStack Compute (nova):
  Invalid
Status in oslo.versionedobjects:
  Fix Committed

Bug description:
  There's a whole bunch of test failures:
  http://paste.openstack.org/show/475867/

  Here's one example if the paste vanishes for some reason:
  
nova.tests.unit.objects.test_instance.TestRemoteInstanceListObject.test_get_hung_in_rebooting
  
-

  Captured traceback:
  ~~~
  Traceback (most recent call last):
    File "nova/tests/unit/objects/test_instance.py", line 1597, in 
test_get_hung_in_rebooting
  self.assertIsInstance(inst_list.objects[i], objects.Instance)
    File 
"/home/travis/build/dims/nova/.tox/py27/lib/python2.7/site-packages/testtools/testcase.py",
 line 401, in assertIsInstance
  self.assertThat(obj, matcher, msg)
    File 
"/home/travis/build/dims/nova/.tox/py27/lib/python2.7/site-packages/testtools/testcase.py",
 line 435, in assertThat
  raise mismatch_error
  testtools.matchers._impl.MismatchError: 
'Instance(access_ip_v4=1.2.3.4,access_ip_v6=::1,architecture=None,auto_disk_config=False,availability_zone=None,cell_name=None,cleaned=False,config_drive=None,created_at=None,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,disable_terminate=False,display_description=None,display_name=None,ec2_ids=,ephemeral_gb=0,ephemeral_key_uuid=None,fault=,flavor=,host='fake-host',hostname=None,id=2,image_ref=None,info_cache=,instance_type_id=None,kernel_id=None,key_data=None,key_name=None,launch_index=None,launched_at=1955-11-12T22:04:00Z,launched_on=None,locked=False,locked_by=None,memory_mb=None,metadata=,migration_context=,new_flavor=,node=None,numa_topology=,old_flavor=,os_type=None,pci_devices=,pci_requests=,power_state=None,progress=None,project_id='fake-project',ramdisk_id=None,reservation_id=None,root_device_name=None,root_gb=0,scheduled_at=,security_groups=,shutdown_terminate=False,system_metadata=,tags=,task_state=None,terminated_at=None,updated_at=None,user_data=None,user_id='fake-user',uuid=c2169c75-2912-4a72-8df9-e3faa5f16578,vcpu_model=,vcpus=None,vm_mode=None,vm_state=None)'
 is not an instance of InstanceV2

  @bauwser commented on IRC

  bauwser dimsum__: oh man, I see 38 hits on isinstance([^,]+, objects.Instance)
  bauwser dimsum__: which means all of them need to be changed
  bauwser dimsum__: to be isinstance(obj, instance_obj._BaseInstance)

  
  What could be changed is in http://paste.openstack.org/show/475871/ (bauzas - 
2015/10/09)
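
  A self-contained toy illustration of why the assertions break and what
  the suggested check buys (class names stand in for nova's objects; they
  are not the real classes):

    class BaseInstance(object):      # stands in for instance_obj._BaseInstance
        pass

    class InstanceV1(BaseInstance):  # what the code under test still returns
        pass

    class InstanceV2(BaseInstance):  # what objects.Instance now points to
        pass

    obj = InstanceV1()
    assert isinstance(obj, BaseInstance)    # version-agnostic check keeps passing
    assert not isinstance(obj, InstanceV2)  # the old-style check now fails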

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1504534/+subscriptions



[Yahoo-eng-team] [Bug 1504666] [NEW] RemoteError not properly caught during live migration

2015-10-09 Thread Lauren Taylor
Public bug reported:

API fails during live migration with a 500 internal server error.

https://127.0.0.1:8774/v2/8c87f173ba7c47cbb4f57eebe85479c1/servers/d53b954a-7323-4d88-a5fc-14c0672a704e/action
{
"os-migrateLive": {
"host": "8231E2D_109EFCT",
"block_migration": false,
"disk_over_commit": false
}
}

The correct error should be 400 BadRequest, as the error raised should be
a RemoteError, not a MigrationError.

Nova-api logs:

MigrationError(u'Migration error: Remote error:error message)
[u'Traceback (most recent call last):\n', u'  
File "/usr/lib/python2.7/site-packages/oslo_messaging/rpc/dispatcher.py", line 
142, in _dispatch_and_reply\nexecutor_callback))\n', u'  
File "/usr/lib/python2.7/site-packages/oslo_messaging/rpc/dispatcher.py", line 
186, in _dispatch\nexecutor_callback)\n', u'  
File "/usr/lib/python2.7/site-packages/oslo_messaging/rpc/dispatcher.py", line 
129, in _do_dispatch\nresult = func(ctxt, **new_args)\n', u'  
File "/usr/lib/python2.7/site-packages/nova/exception.py", line 89, in 
wrapped\npayload)\n', u'  
File "/usr/lib/python2.7/site-packages/oslo_utils/excutils.py", line 119, in 
__exit__\nsix.reraise(self.type_, self.value, self.tb)\n', u'  
File "/usr/lib/python2.7/site-packages/nova/exception.py", line 72, in 
wrapped\nreturn f(self, context, *args, **kw)\n', u'  
File "/usr/lib/python2.7/site-packages/nova/compute/manager.py", line 352, in 
decorated_function\nLOG.warning(msg, e, instance=instance)\n', u'  
File "/usr/lib/python2.7/site-packages/oslo_utils/excutils.py", line 119, in 
__exit__\nsix.reraise(self.type_, self.value, self.tb)\n', u'  
File "/usr/lib/python2.7/site-packages/nova/compute/manager.py", line 325, in 
decorated_function\nreturn function(self, context, *args, **kwargs)\n', u'  
 
File "/usr/lib/python2.7/site-packages/oslo_utils/excutils.py", line 119, in 
__exit__\nsix.reraise(self.type_, self.value, self.tb)\n', u'  
File "/usr/lib/python2.7/site-packages/nova/exception.py", line 89, in 
wrapped\npayload)\n', u'  
File "/usr/lib/python2.7/site-packages/oslo_utils/excutils.py", line 119, in 
__exit__\nsix.reraise(self.type_, self.value, self.tb)\n', u'  
File "/usr/lib/python2.7/site-packages/nova/exception.py", line 72, in 
wrapped\nreturn f(self, context, *args, **kw)\n', u'  
File "/usr/lib/python2.7/site-packages/nova/compute/manager.py", line 402, in 
decorated_function\nreturn function(self, context, *args, **kwargs)\n', u'  
File "/usr/lib/python2.7/site-packages/nova/compute/manager.py", line 380, in 
decorated_function\nkwargs[\'instance\'], e, sys.exc_info())\n', u'  
File "/usr/lib/python2.7/site-packages/oslo_utils/excutils.py", line 119, in 
__exit__\nsix.reraise(self.type_, self.value, self.tb)\n', u'  
File "/usr/lib/python2.7/site-packages/nova/compute/manager.py", line 368, in 
decorated_function\nreturn function(self, context, *args, **kwargs)\n', u'  
File "/usr/lib/python2.7/site-packages/nova/compute/manager.py", line 5023, in 
check_can_live_migrate_destination\nblock_migration, disk_over_commit)\n', 
u'  
File "/usr/lib/python2.7/site-packages/oslo_utils/excutils.py", line 119, in 
__exit__\nsix.reraise(self.type_, self.value, self.tb)\n', u'   
raise exception.MigrationError(reason=six.text_type(ex))\n'
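
A hedged sketch of the kind of handling being asked for, at the API layer: translate a RemoteError from the compute RPC into a 400 instead of letting it bubble up as a 500 (the call and names below are illustrative, not the exact nova code paths):

    import six
    import webob.exc
    from oslo_messaging import RemoteError

    def live_migrate(compute_api, context, instance, host,
                     block_migration, disk_over_commit):
        try:
            compute_api.live_migrate(context, instance, block_migration,
                                     disk_over_commit, host)
        except RemoteError as ex:
            # Surface the remote failure as a client error (400)
            # rather than a generic 500.
            raise webob.exc.HTTPBadRequest(explanation=six.text_type(ex))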

** Affects: nova
 Importance: Undecided
 Assignee: Lauren Taylor (lmtaylor)
 Status: New

** Changed in: nova
 Assignee: (unassigned) => Lauren Taylor (lmtaylor)

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1504666

Title:
  RemoteError not properly caught during live migration

Status in OpenStack Compute (nova):
  New

Bug description:
  API fails during live migration with a 500 internal server error.

  
https://127.0.0.1:8774/v2/8c87f173ba7c47cbb4f57eebe85479c1/servers/d53b954a-7323-4d88-a5fc-14c0672a704e/action
  {
  "os-migrateLive": {
  "host": "8231E2D_109EFCT",
  "block_migration": false,
  "disk_over_commit": false
  }
  }

  The correct error should be 400 BadRequest, as the error raised should
  be a RemoteError, not a MigrationError.

  Nova-api logs:

  MigrationError(u'Migration error: Remote error:error message)
  [u'Traceback (most recent call last):\n', u'  
  File "/usr/lib/python2.7/site-packages/oslo_messaging/rpc/dispatcher.py", 
line 142, in _dispatch_and_reply\nexecutor_callback))\n', u'  
  File "/usr/lib/python2.7/site-packages/oslo_messaging/rpc/dispatcher.py", 
line 186, in _dispatch\nexecutor_callback)\n', u'  
  File "/usr/lib/python2.7/site-packages/oslo_messaging/rpc/dispatcher.py", 
line 129, in _do_dispatch\nresult = func(ctxt, **new_args)\n', u'  
  File "/usr/lib/python2.7/site-packages/nova/exception.py", line 89, in 
wrapped\npayload)\n', u'  
  File 

[Yahoo-eng-team] [Bug 1296953] Re: --disable-snat on tenant router raises 404

2015-10-09 Thread Armando Migliaccio
** Changed in: neutron
   Status: Opinion => New

** Changed in: neutron
 Assignee: Sridhar Gaddam (sridhargaddam) => (unassigned)

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1296953

Title:
  --disable-snat on tenant router raises 404

Status in neutron:
  New

Bug description:
  arosen@arosen-desktop:~/devstack$ neutron router-create aaa
  nCreated a new router:
  +---+--+
  | Field | Value|
  +---+--+
  | admin_state_up| True |
  | distributed   | False|
  | external_gateway_info |  |
  | id| add4d46b-5036-4a96-af7e-8ceb44f9ab3d |
  | name  | aaa  |
  | routes|  |
  | status| ACTIVE   |
  | tenant_id | 4ec9de7eae7445719e8f67f2f9d78aae |
  +---+--+
  arosen@arosen-desktop:~/devstack$ neutron router-gateway-set --disable-snat  
aaa public 
  The resource could not be found.

  
  2014-03-24 14:06:12.444 DEBUG neutron.policy 
[req-19762248-9964-4ad3-9ce9-de68d4cc4e49 demo 
4ec9de7eae7445719e8f67f2f9d78aae] Failed policy check for 'update_router' from 
(pid=7068) enforce /opt/stack/neutron/neutron/policy.py:381
  2014-03-24 14:06:12.444 ERROR neutron.api.v2.resource 
[req-19762248-9964-4ad3-9ce9-de68d4cc4e49 demo 
4ec9de7eae7445719e8f67f2f9d78aae] update failed
  2014-03-24 14:06:12.444 TRACE neutron.api.v2.resource Traceback (most recent 
call last):
  2014-03-24 14:06:12.444 TRACE neutron.api.v2.resource   File 
"/opt/stack/neutron/neutron/api/v2/resource.py", line 87, in resource
  2014-03-24 14:06:12.444 TRACE neutron.api.v2.resource result = 
method(request=request, **args)
  2014-03-24 14:06:12.444 TRACE neutron.api.v2.resource   File 
"/opt/stack/neutron/neutron/api/v2/base.py", line 494, in update
  2014-03-24 14:06:12.444 TRACE neutron.api.v2.resource raise 
webob.exc.HTTPNotFound(msg)
  2014-03-24 14:06:12.444 TRACE neutron.api.v2.resource HTTPNotFound: The 
resource could not be found.
  2014-03-24 14:06:12.444 TRACE neutron.api.v2.resource 
  2014-03-24 14:06:12.445 INFO neutron.wsgi 
[req-19762248-9964-4ad3-9ce9-de68d4cc4e49 demo 
4ec9de7eae7445719e8f67f2f9d78aae] 10.24.114.91 - - [24/Mar/2014 14:06:12] "PUT 
/v2.0/routers/add4d46b-5036-4a96-af7e-8ceb44f9ab3d.json HTTP/1.1" 404 248 
0.039626


  In the code we do:

  try:
  policy.enforce(request.context,
 action,
 orig_obj)
  except exceptions.PolicyNotAuthorized:
  # To avoid giving away information, pretend that it
  # doesn't exist
  msg = _('The resource could not be found.')
  raise webob.exc.HTTPNotFound(msg)   


  It would be nice if we were smarter about this and raised "not
  authorized" instead of "not found".
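
  A hedged sketch of that suggestion against the block quoted above (it
  reuses the same names as the snippet, so assume the same surrounding
  imports): only keep masking the resource as missing when the caller
  cannot see it at all, and answer 403 when they can.

      try:
          policy.enforce(request.context,
                         action,
                         orig_obj)
      except exceptions.PolicyNotAuthorized:
          # If the caller owns (and can therefore already see) the
          # resource, a 403 gives nothing away; otherwise keep
          # pretending it does not exist.
          if orig_obj.get('tenant_id') == request.context.tenant_id:
              msg = _('Not authorized to perform this action.')
              raise webob.exc.HTTPForbidden(msg)
          msg = _('The resource could not be found.')
          raise webob.exc.HTTPNotFound(msg)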

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1296953/+subscriptions



[Yahoo-eng-team] [Bug 1318528] Re: DHCP agent creates new instance of driver for each action

2015-10-09 Thread Armando Migliaccio
** Changed in: neutron
   Status: Opinion => New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1318528

Title:
  DHCP agent creates new instance of driver for each action

Status in neutron:
  New

Bug description:
  Working on rootwrap daemon [0] I've found out that DCHP agent asks for
  root_helper too often. [1] shows traceback for each place where
  get_root_helper is being called.

  It appeared that in [2] DHCP agent creates an instance of driver class
  for every single action it needs to run. That involves both lots of
  initialization code and very expensive dynamic import_object routine
  being run.

  [2] shows that the only thing that changes between driver instances is
  a network. I suggest we make network an argument for every action
  instead to avoid expensive dynamic driver instantiation.
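
  A rough sketch of that suggestion (method and config names are
  illustrative, not the actual agent code): build the driver once at
  agent start-up and hand the network to each action.

      from oslo_utils import importutils

      class DhcpAgentSketch(object):
          def __init__(self, conf, root_helper, plugin_rpc):
              # one-time, expensive dynamic import and instantiation
              driver_cls = importutils.import_class(conf.dhcp_driver)
              self.dhcp_driver = driver_cls(conf, root_helper, plugin_rpc)

          def call_driver(self, action, network):
              # pass the network per call instead of rebuilding the driver
              return getattr(self.dhcp_driver, action)(network)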

  Links:

  [0] https://review.openstack.org/84667
  [1] 
http://logs.openstack.org/67/84667/20/check/check-tempest-dsvm-neutron/3a7768e/logs/screen-q-dhcp.txt.gz?level=INFO
  [2] 
https://github.com/openstack/neutron/blob/master/neutron/agent/dhcp_agent.py#L122

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1318528/+subscriptions



[Yahoo-eng-team] [Bug 1504680] [NEW] support default dns servers for subnet creation

2015-10-09 Thread Eric Peterson
Public bug reported:

When creating a subnet, users currently have a blank screen presented to
them for DNS servers.

For our deployment, we would like to have a configuration setting where
we can provide some default DNS servers to use.

This seems like a common need: operators often want most users to use a
local DNS server as the default setting for new subnets.
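
A hypothetical sketch of how such a default could be wired in on the horizon side; the setting name is made up and the field wiring is only indicative:

    from django.conf import settings

    # operator-provided defaults, e.g. ['10.0.0.2', '10.0.0.3']
    DEFAULT_DNS = getattr(settings,
                          'OPENSTACK_NEUTRON_DEFAULT_DNS_NAMESERVERS', [])

    # e.g. in the subnet-detail workflow action, pre-populate the field:
    # self.fields['dns_nameservers'].initial = '\n'.join(DEFAULT_DNS)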

** Affects: horizon
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1504680

Title:
  support default dns servers for subnet creation

Status in OpenStack Dashboard (Horizon):
  New

Bug description:
  When creating a subnet, users currently have a blank screen presented
  to them for DNS servers.

  For our deployment, we would like to have a configuration setting
  where we can provide some default DNS servers to use.

  This seems like a common need: operators often want most users to use
  a local DNS server as the default setting for new subnets.

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1504680/+subscriptions



[Yahoo-eng-team] [Bug 1504411] [NEW] Firewall-policy-update and firewall-update do not show all updatable options in ´help´

2015-10-09 Thread Reedip
Public bug reported:

Firewall-policy-update and firewall-update do not show all the options which 
can be updated by the NeutronClient CLI.
Actual output:
Firewall Policy Update has the following arguments:
usage: neutron firewall-policy-update [-h] [--request-format {json,xml}]
  [--firewall-rules FIREWALL_RULES]
  FIREWALL_POLICY

Expected output:
It is missing :
- shared
- audited
- description

Actual output:
Firewall Update has the following arguments:
usage: neutron firewall-update [-h] [--request-format {json,xml}]
   [--policy POLICY]
   [--router ROUTER | --no-routers]
   FIREWALL

Expected output:
It is missing:
- admin_state
- description
- name

Pre-conditions: Create a Firewall Rule. Create a Firewall Policy and a
Firewall. Then try to update the Policy and Firewall.
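
For illustration, the missing policy-update options would look roughly like this in the client command (add_known_arguments() is how neutronclient commands usually declare options; treat the exact wiring as an assumption):

    def add_known_arguments(self, parser):
        parser.add_argument('--shared', action='store_true',
                            help='Set the firewall policy as shared.')
        parser.add_argument('--audited', action='store_true',
                            help='Mark the firewall policy as audited.')
        parser.add_argument('--description',
                            help='Description of the firewall policy.')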

** Affects: python-neutronclient
 Importance: Undecided
 Assignee: Reedip (reedip-banerjee)
 Status: New

** Project changed: neutron => python-neutronclient

** Changed in: python-neutronclient
 Assignee: (unassigned) => Reedip (reedip-banerjee)

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1504411

Title:
  Firewall-policy-update and firewall-update do not show all updatable
  options in ´help´

Status in python-neutronclient:
  New

Bug description:
  Firewall-policy-update and firewall-update do not show all the options which 
can be updated by the NeutronClient CLI.
  Actual output:
  Firewall Policy Update has the following arguments:
  usage: neutron firewall-policy-update [-h] [--request-format {json,xml}]
[--firewall-rules FIREWALL_RULES]
FIREWALL_POLICY

  Expected output:
  It is missing :
  - shared
  - audited
  - description

  Actual output:
  Firewall Update has the following arguments:
  usage: neutron firewall-update [-h] [--request-format {json,xml}]
 [--policy POLICY]
 [--router ROUTER | --no-routers]
 FIREWALL

  Expected output:
  It is missing:
  - admin_state
  - description
  - name

  Pre-conditions: Create a Firewall Rule. Create a Firewall Policy and a
  Firewall. Then try to update the Policy and Firewall.

To manage notifications about this bug go to:
https://bugs.launchpad.net/python-neutronclient/+bug/1504411/+subscriptions



[Yahoo-eng-team] [Bug 1504670] [NEW] Remove embrane plugin

2015-10-09 Thread Henry Gessau
Public bug reported:

The embrane plugin shall be removed after EOL.

** Affects: neutron
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1504670

Title:
  Remove embrane plugin

Status in neutron:
  New

Bug description:
  The embrane plugin shall be removed after EOL.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1504670/+subscriptions



[Yahoo-eng-team] [Bug 1504686] [NEW] Keystone errors on token requests for users in recreated tenants when using memcache

2015-10-09 Thread Shawn Berger
Public bug reported:

With memcache set up for resource caching, when a tenant is created,
deleted, and recreated with the same name, users within that project get
intermittent errors when requesting tokens.

You can recreate this by having memcache with resource caching enabled.
Then create a tenant, delete it, and then recreate it making sure the
name is the same as the first one.  Then create a user in this tenant
and continually request tokens.  It will gradually start generating
tokens while also failing until the cache is cleaned out.

I believe the intermittent errors we experienced were due to our
environment having a memcache on each keystone node and having the
keystone nodes behind a load balancer.

As I ran this scenario, I was seeing more failures in the beginning and
then it gradually started having more successes until a little after the
cache expiration_time where I was seeing all successes.

We investigated and when this error was originally hit it threw 404 or
401s.  The 404s were complaining about not being able to find a certain
project, but when I tried to recreate I was receiving all 401s.

The 404 errors led me to believe that this was due to memcache not
marking cache entries as deleted, since when running our tests we used
the name of the project and it would auto-resolve the ID. So the entry
for the project name in the cache was conflicting with the entry in the
database, but once the cache expires it isn't an issue.

So it seems that reusing names of projects causes problems with the
resolution of the project id when memcache is enabled.
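
A small self-contained illustration of the stale-cache behaviour described above, using dogpile.cache (which keystone's caching layer is built on); the in-memory dict stands in for the projects table and is not keystone code:

    from dogpile.cache import make_region

    region = make_region().configure('dogpile.cache.memory')
    _projects = {}  # toy stand-in for the projects table: name -> id

    @region.cache_on_arguments()
    def get_project_id_by_name(name):
        return _projects[name]

    def delete_project(name):
        del _projects[name]
        # Without this invalidation, a project recreated with the same
        # name keeps resolving to the old, deleted id until the cache
        # entry expires -- the behaviour described above.
        get_project_id_by_name.invalidate(name)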

** Affects: keystone
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to Keystone.
https://bugs.launchpad.net/bugs/1504686

Title:
  Keystone errors on token requests for users in recreated tenants when
  using memcache

Status in Keystone:
  New

Bug description:
  With memcache set up for resource caching, when a tenant is created,
  deleted, and recreated with the same name, users within that project
  get intermittent errors when requesting tokens.

  You can recreate this by having memcache with resource caching
  enabled.  Then create a tenant, delete it, and then recreate it making
  sure the name is the same as the first one.  Then create a user in
  this tenant and continually request tokens.  It will gradually start
  generating tokens while also failing until the cache is cleaned out.

  I believe the intermittent errors we experienced were due to our
  environment having a memcache on each keystone node and having the
  keystone nodes behind a load balancer.

  As I ran this scenario, I was seeing more failures in the beginning
  and then it gradually started having more successes until a little
  after the cache expiration_time where I was seeing all successes.

  We investigated and when this error was originally hit it threw 404 or
  401s.  The 404s were complaining about not being able to find a
  certain project, but when I tried to recreate I was receiving all
  401s.

  The 404 errors led me to believe that this was due to memcache not
  marking cache entries as deleted, since when running our tests we used
  the name of the project and it would auto-resolve the ID. So the entry
  for the project name in the cache was conflicting with the entry in
  the database, but once the cache expires it isn't an issue.

  So it seems that reusing names of projects causes problems with the
  resolution of the project id when memcache is enabled.

To manage notifications about this bug go to:
https://bugs.launchpad.net/keystone/+bug/1504686/+subscriptions



[Yahoo-eng-team] [Bug 1342961] Re: Exception during message handling: Pool FOO could not be found

2015-10-09 Thread Armando Migliaccio
Can you provide a logstash query for this?

** Changed in: neutron
   Status: Opinion => Incomplete

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1342961

Title:
  Exception during message handling: Pool  FOO could not be found

Status in neutron:
  Incomplete

Bug description:
  $subject-style exception appears in both successful and failed jobs.

  message: "Exception during message handling" AND message:"Pool" AND
  message:"could not be found" AND filename:"logs/screen-q-svc.txt"

  
http://logstash.openstack.org/#eyJzZWFyY2giOiJtZXNzYWdlOiBcIkV4Y2VwdGlvbiBkdXJpbmcgbWVzc2FnZSBoYW5kbGluZ1wiIEFORCBtZXNzYWdlOlwiUG9vbFwiIEFORCBtZXNzYWdlOlwiY291bGQgbm90IGJlIGZvdW5kXCIgQU5EIGZpbGVuYW1lOlwibG9ncy9zY3JlZW4tcS1zdmMudHh0XCIiLCJmaWVsZHMiOltdLCJvZmZzZXQiOjAsInRpbWVmcmFtZSI6Ijg2NDAwIiwiZ3JhcGhtb2RlIjoiY291bnQiLCJ0aW1lIjp7InVzZXJfaW50ZXJ2YWwiOjB9LCJzdGFtcCI6MTQwNTUzMzU3ODE2NCwibW9kZSI6IiIsImFuYWx5emVfZmllbGQiOiIifQ==

[req-201dcd14-dc9d-4fb5-8eb5-c66c35991cb3 ] Exception during message 
handling: Pool 7fa63738-9030-4136-9b4e-9eb7ffb79f68 could not be found
  2014-07-16 17:31:46.780 31504 TRACE oslo.messaging.rpc.dispatcher Traceback 
(most recent call last):
  2014-07-16 17:31:46.780 31504 TRACE oslo.messaging.rpc.dispatcher   File 
"/usr/local/lib/python2.7/dist-packages/oslo/messaging/rpc/dispatcher.py", line 
134, in _dispatch_and_reply
  2014-07-16 17:31:46.780 31504 TRACE oslo.messaging.rpc.dispatcher 
incoming.message))
  2014-07-16 17:31:46.780 31504 TRACE oslo.messaging.rpc.dispatcher   File 
"/usr/local/lib/python2.7/dist-packages/oslo/messaging/rpc/dispatcher.py", line 
177, in _dispatch
  2014-07-16 17:31:46.780 31504 TRACE oslo.messaging.rpc.dispatcher return 
self._do_dispatch(endpoint, method, ctxt, args)
  2014-07-16 17:31:46.780 31504 TRACE oslo.messaging.rpc.dispatcher   File 
"/usr/local/lib/python2.7/dist-packages/oslo/messaging/rpc/dispatcher.py", line 
123, in _do_dispatch
  2014-07-16 17:31:46.780 31504 TRACE oslo.messaging.rpc.dispatcher result 
= getattr(endpoint, method)(ctxt, **new_args)
  2014-07-16 17:31:46.780 31504 TRACE oslo.messaging.rpc.dispatcher   File 
"/opt/stack/new/neutron/neutron/services/loadbalancer/drivers/common/agent_driver_base.py",
 line 232, in update_pool_stats
  2014-07-16 17:31:46.780 31504 TRACE oslo.messaging.rpc.dispatcher 
self.plugin.update_pool_stats(context, pool_id, data=stats)
  2014-07-16 17:31:46.780 31504 TRACE oslo.messaging.rpc.dispatcher   File 
"/opt/stack/new/neutron/neutron/db/loadbalancer/loadbalancer_db.py", line 512, 
in update_pool_stats
  2014-07-16 17:31:46.780 31504 TRACE oslo.messaging.rpc.dispatcher pool_db 
= self._get_resource(context, Pool, pool_id)
  2014-07-16 17:31:46.780 31504 TRACE oslo.messaging.rpc.dispatcher   File 
"/opt/stack/new/neutron/neutron/db/loadbalancer/loadbalancer_db.py", line 218, 
in _get_resource
  2014-07-16 17:31:46.780 31504 TRACE oslo.messaging.rpc.dispatcher raise 
loadbalancer.PoolNotFound(pool_id=id)
  2014-07-16 17:31:46.780 31504 TRACE oslo.messaging.rpc.dispatcher 
PoolNotFound: Pool 7fa63738-9030-4136-9b4e-9eb7ffb79f68 could not be found
  2014-07-16 17:31:46.780 31504 TRACE oslo.messaging.rpc.dispatcher

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1342961/+subscriptions



[Yahoo-eng-team] [Bug 1338885] Re: fwaas: admin should not be able to create firewall rule for non existing tenant

2015-10-09 Thread Armando Migliaccio
** Changed in: neutron
 Assignee: Mithil Arun (arun-mithil) => (unassigned)

** Changed in: neutron
   Status: Opinion => Incomplete

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1338885

Title:
  fwaas: admin should not be able to create firewall rule for non
  existing tenant

Status in neutron:
  Incomplete

Bug description:
   Admin should not be able to create resources for a non-existent tenant.


  Steps to Reproduce:

  Actual Results: 
   
  root@IGA-OSC:~# neutron firewall-rule-create --protocol tcp --action deny 
--tenant-id bf4fbb928d574829855ebfd9e5d0e -->(non existing tenant-id. changed 
the last few characters)
  Created a new firewall_rule:
  ++--+
  | Field  | Value|
  ++--+
  | action | deny |
  | description|  |
  | destination_ip_address |  |
  | destination_port   |  |
  | enabled| True |
  | firewall_policy_id |  |
  | id | 7264e5a6-5752-4518-b26b-7c7395173747 |
  | ip_version | 4|
  | name   |  |
  | position   |  |
  | protocol   | tcp  |
  | shared | False|
  | source_ip_address  |  |
  | source_port|  |
  | tenant_id  | bf4fbb928d574829855ebfd9e5d0e|
  ++--+
  root@IGA-OSC:~# ktl
  +--+-+-+
  |id|   name  | enabled |
  +--+-+-+
  | 0ad385e00e97476e9456945c079a21ea |  admin  |   True  |
  | 43af7b7c0dbc40bd90d03cc08df201ce | service |   True  |
  | d9481c57a11c46eea62886938b5378a7 | tenant1 |   True  |
  | bf4fbb928d574829855ebfd9e5d0e58c | tenant2 |   True  |
  +--+-+-+
   
  ==
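
  A hedged sketch of the validation being asked for (it assumes an admin
  token and the keystone v2.0 API; it is not the existing neutron code
  path): when an explicit tenant_id is passed, check it against keystone
  before creating the resource.

      from keystoneclient.v2_0 import client as ks_client

      def validate_tenant_exists(keystone_endpoint, admin_token, tenant_id):
          keystone = ks_client.Client(token=admin_token,
                                      endpoint=keystone_endpoint)
          if tenant_id not in [t.id for t in keystone.tenants.list()]:
              raise ValueError("tenant %s does not exist" % tenant_id)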

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1338885/+subscriptions



[Yahoo-eng-team] [Bug 1325736] Re: Security Group Rules can only be specified in one direction

2015-10-09 Thread Armando Migliaccio
** No longer affects: neutron

** No longer affects: python-neutronclient

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1325736

Title:
  Security Group Rules can only be specified in one direction

Status in OpenStack Dashboard (Horizon):
  New
Status in OpenStack Compute (nova):
  Confirmed

Bug description:
  It might save users a lot of time if, instead of only offering an
  INGRESS and an EGRESS direction, they could specify a BOTH direction.
  Whenever someone needs to enter both an ingress and
  egress rule for the same port they have to enter it twice, remembering
  all of the information they need (since it can't be cloned). If they
  forget to flip the direction the second time from the default value,
  it'll error out as a duplicate and they'll have to try a third time.
  If they messed up the second rule, there's no edit, so they would have
  to delete it if they got a value wrong and do it all over again.

  It would be awesome if the UI allowed for specifying both an ingress
  and egress rule at the same time, even if all it did was create the
  ingress and egress rows and put them in the table, at least they'd be
  guaranteed to have the same configuration.
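
  For illustration, a BOTH direction could simply expand to two API
  calls with the same body (sketch against python-neutronclient; the
  helper name is made up):

      def create_bidirectional_rule(neutron, sg_id, protocol, port):
          for direction in ('ingress', 'egress'):
              neutron.create_security_group_rule({
                  'security_group_rule': {
                      'security_group_id': sg_id,
                      'direction': direction,
                      'protocol': protocol,
                      'port_range_min': port,
                      'port_range_max': port,
                  }})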

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1325736/+subscriptions



[Yahoo-eng-team] [Bug 1334798] Re: Gate test is masking failure details

2015-10-09 Thread Armando Migliaccio
** Changed in: neutron
   Status: Opinion => Incomplete

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1334798

Title:
  Gate test is masking failure details

Status in neutron:
  Incomplete

Bug description:
  Both the 2.6 and 2.7 gate tests are failing, with the console log
  indicating two failures but only showing one failure, 'process-retcode'.
  When looking at testr_results it shows a failure and only mentions:

  ft1.12873:
  
neutron.tests.unit.services.vpn.service_drivers.test_ipsec.TestValidatorSelection.test_reference_driver_used_StringException

  Here is 2.7 logs: http://logs.openstack.org/51/102351/4/check/gate-
  neutron-python27/a757b36/

  Upon further investigation, it was found that there were non-printable
  characters in an oslo file. With manual testing, it shows the error:

  $ python -m neutron.openstack.common.lockutils python -m unittest 
neutron.tests.unit.services.vpn.service_drivers.test_ipsec.TestValidatorSelection.test_reference_driver_used
  F
  ==
  FAIL: test_reference_driver_used 
(neutron.tests.unit.services.vpn.service_drivers.test_ipsec.TestValidatorSelection)
  
neutron.tests.unit.services.vpn.service_drivers.test_ipsec.TestValidatorSelection.test_reference_driver_used
  --
  _StringException: Empty attachments:
pythonlogging:''
pythonlogging:'neutron.api.extensions'

  traceback-1: {{{
  Traceback (most recent call last):
File "neutron/common/rpc.py", line 63, in cleanup
  assert TRANSPORT is not None
  AssertionError
  }}}

  Traceback (most recent call last):
File "neutron/tests/unit/services/vpn/service_drivers/test_ipsec.py", line 
52, in setUp
  super(TestValidatorSelection, self).setUp()
File "neutron/tests/base.py", line 188, in setUp
  n_rpc.init(CONF)
File "neutron/common/rpc.py", line 56, in init
  aliases=TRANSPORT_ALIASES)
File "/opt/stack/oslo.messaging/oslo/messaging/transport.py", line 185, in 
get_transport
  invoke_kwds=kwargs)
File "/opt/stack/stevedore/stevedore/driver.py", line 45, in __init__
  verify_requirements=verify_requirements,
File "/opt/stack/stevedore/stevedore/named.py", line 55, in __init__
  verify_requirements)
File "/opt/stack/stevedore/stevedore/extension.py", line 170, in 
_load_plugins
  self._on_load_failure_callback(self, ep, err)
File "/opt/stack/stevedore/stevedore/driver.py", line 50, in 
_default_on_load_failure
  raise err
File "/opt/stack/oslo.messaging/oslo/messaging/_drivers/impl_fake.py", line 
48
  SyntaxError: Non-ASCII character '\xc2' in file 
/opt/stack/oslo.messaging/oslo/messaging/_drivers/impl_fake.py on line 48, but 
no encoding declared; see http://www.python.org/peps/pep-0263.html for details

  A fix will be done for the oslo error, but we need to investigate why
  the gate test does not show any information on the error.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1334798/+subscriptions



[Yahoo-eng-team] [Bug 1334323] Re: Check ips availability before adding network to DHCP agent

2015-10-09 Thread Armando Migliaccio
Code has changed a lot since this was reported.

** Changed in: neutron
   Status: Opinion => Incomplete

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1334323

Title:
  Check ips availability before adding network to DHCP agent

Status in neutron:
  Incomplete

Bug description:
  Hi there,

  How to reproduce ?
  ===

  First of all it's better to use HA DHCP agent, i.e. running more than
  one DHCP agent, and setting the dhcp_agents_per_network to the number
  of DHCP agent that you're running.

  Now for the sake of this example let's say that
  dhcp_agents_per_network=3.

  Now create a network with a small subnet (a /30, for example) or a big
  subnet (e.g. a /24) but with a small allocation pool, e.g. one that
  contains only 1 or 2 IPs.

  
  What happen ?
  

  A lot of exception start showing up in the logs in the form:

 IpAddressGenerationFailure: No more IP addresses available on
  network

  
  What happen really ?
  

  Our small network was basically scheduled to all DHCP agents that are
  up and active, and each one of them will try to create a port for
  itself; but because our small network has fewer IPs than
  dhcp_agents_per_network, some of these ports will fail to be created,
  and this will happen on each iteration of the DHCP agent main loop.

  Another case where if you have more than one subnet in a network, and
  one of them is pretty small e.g.

  net1 -> subnet1 10.0.0.0/24
subnet2 10.10.0.0/30

  Then errors also start to happen in every iteration of the DHCP agent.

  What is expected ?
  ===

  IMHO, only agents that can actually handle the network should host it,
  and a direct call to add a network to a DHCP agent should also fail if
  there are no IPs left to satisfy the new DHCP port creation.
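
  A hedged sketch of that expectation (the free-IP count is assumed to
  come from whatever IPAM query the plugin exposes; names are
  illustrative): cap the number of scheduled DHCP agents at the number
  of IPs still available.

      import logging

      LOG = logging.getLogger(__name__)

      def filter_agents_by_ip_availability(free_ip_count, candidate_agents,
                                           network_id):
          """Only keep as many DHCP agents as there are free IPs for ports."""
          if free_ip_count < len(candidate_agents):
              LOG.warning("Network %s has only %d free IPs; scheduling %d "
                          "DHCP agents instead of %d", network_id,
                          free_ip_count, free_ip_count,
                          len(candidate_agents))
          return candidate_agents[:free_ip_count]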

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1334323/+subscriptions



[Yahoo-eng-team] [Bug 1332475] Re: neutron should give an error if we give Segmentation_id beyond specified range

2015-10-09 Thread Armando Migliaccio
** Changed in: neutron
   Status: Opinion => Incomplete

** Changed in: neutron
 Assignee: Puneet Arora (puneet-arora) => (unassigned)

** Tags added: low-hanging-fruit

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1332475

Title:
  neutron should give an error if we give Segmentation_id beyond
  specified range

Status in neutron:
  Incomplete

Bug description:
  
  There is a problem with the segmentation_id range: it should belong to
  the given range. Command to reproduce:

  neutron net-create demo_net --provider:network_type gre
  --provider:Segmentation_id 2000

  Right now it allows the network to be created.

  Expected:
  neutron should give an error if we give a segmentation_id beyond the
  specified range.
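
  A minimal sketch of the validation being requested (range handling is
  simplified for illustration; it is not the actual ML2 type-driver
  code):

      def validate_segmentation_id(seg_id, tunnel_ranges):
          """tunnel_ranges: list of (min, max) tuples, e.g. [(1, 1000)]."""
          if not any(lo <= seg_id <= hi for lo, hi in tunnel_ranges):
              raise ValueError(
                  "segmentation_id %s is outside the configured ranges %s"
                  % (seg_id, tunnel_ranges))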

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1332475/+subscriptions



[Yahoo-eng-team] [Bug 1327071] Re: neutron:Error not thrown when duplicate options are present while in a neutron CLI.

2015-10-09 Thread Armando Migliaccio
** Changed in: neutron
   Status: Opinion => Incomplete

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1327071

Title:
  neutron:Error not thrown when duplicate options are present while in a
  neutron CLI.

Status in neutron:
  Incomplete

Bug description:
  A duplicate-option error is not thrown when creating a firewall rule
  with a duplicate protocol option; however, the error is thrown for the
  action field while updating the firewall rule.
  Steps to Reproduce: 
  create firewall rule by specifying the protocol field two times
  Actual Results: 
  root@IH-HL-OSC:~# fwru r1 --protocol tcp --protocol icmp
  Updated firewall_rule: r1
  root@IH-HL-OSC:~# fwrl
  
+--+--+--+--+-+
  | id   | name | firewall_policy_id
   | summary  | enabled |
  
+--+--+--+--+-+
  | 7fd12232-2fd2-4fbc-a70b-2e3479f93392 | r1   | 
e8f3f423-0e38-4f58-85de-2ec9559cefb9 | ICMP,| True|
  |  |  |   
   |  source: none(none), | |
  |  |  |   
   |  dest: none(22), | |
  |  |  |   
   |  allow   | |
  | c81dd745-b71d-4879-a16e-401d9e60d68d | r2   |   
   | TCP, | True|
  |  |  |   
   |  source: none(none), | |
  |  |  |   
   |  dest: none(none),   | |
  |  |  |   
   |  allow   | |
  
+--+--+--+--+-+
  root@IH-HL-OSC:~# fwrc --name r2 --protocol icmp --action allow --action deny
  Created a new firewall_rule:
  ++--+
  | Field  | Value|
  ++--+
  | action | deny |
  | description|  |
  | destination_ip_address |  |
  | destination_port   |  |
  | enabled| True |
  | firewall_policy_id |  |
  | id | bcdbe24d-93ac-4d2e-889d-ad6f8f5a2b29 |
  | ip_version | 4|
  | name   | r2   |
  | position   |  |
  | protocol   | icmp |
  | shared | False|
  | source_ip_address  |  |
  | source_port|  |
  | tenant_id  | 8aac6cceec774dec8821d76e0c1bdd8c |
  ++--+
   
  root@IH-HL-OSC:~# fwrc --name r2 --protocol icmp --action allow --protocol tcp
  Created a new firewall_rule:
  ++--+
  | Field  | Value|
  ++--+
  | action | allow|
  | description|  |
  | destination_ip_address |  |
  | destination_port   |  |
  | enabled| True |
  | firewall_policy_id |  |
  | id | af7de3ec-344c-44c0-98b2-bd7bc9db3d93 |
  | ip_version | 4|
  | name   | r2   |
  | position   |  |
  | protocol   | tcp  |
  | shared | False|
  | source_ip_address  |  |
  | source_port|  |
  | tenant_id  | 8aac6cceec774dec8821d76e0c1bdd8c |
  

[Yahoo-eng-team] [Bug 1338938] Re: dhcp scheduler should stop redundant agent

2015-10-09 Thread Armando Migliaccio
I don't understand the problem statement.

** Changed in: neutron
   Status: Opinion => Incomplete

** Changed in: neutron
 Assignee: Xurong Yang (idopra) => (unassigned)

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1338938

Title:
  dhcp scheduler should stop redundant agent

Status in neutron:
  Incomplete

Bug description:
  We limit the number of DHCP agents per network to
  cfg.CONF.dhcp_agents_per_network among the active hosts. Suppose that
  we start the DHCP agents correctly and then some DHCP agents go down
  (host down, or the dhcp-agent is killed); during this period we
  reschedule and recover onto the healthy DHCP agents. But when the
  downed DHCP agents restart, some DHCP agents become redundant.

  if len(dhcp_agents) >= agents_per_network:
  LOG.debug(_('Network %s is hosted already'),
network['id'])
  return

  IMO, we need to stop the redundant agents in the above case.
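
  A hedged sketch of what "stop the redundant agents" could look like,
  assuming the plugin exposes the usual scheduler mixin methods: once
  the network is hosted by enough agents again, unschedule the extras.

      def prune_redundant_dhcp_agents(plugin, context, network_id,
                                      agents_per_network):
          agents = plugin.get_dhcp_agents_hosting_networks(context,
                                                           [network_id])
          # keep the first agents_per_network agents, drop the rest
          for agent in agents[agents_per_network:]:
              plugin.remove_network_from_dhcp_agent(context, agent['id'],
                                                    network_id)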

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1338938/+subscriptions



[Yahoo-eng-team] [Bug 1337717] Re: L2-population fanout-cast leads to performance and scalability issue

2015-10-09 Thread Armando Migliaccio
We'd need to come up with a backward compatible strategy to handle the
change in topic subscription. However this isn't just a problem for
l2pop.

** Changed in: neutron
   Status: Opinion => Confirmed

** Changed in: neutron
   Importance: Medium => Low

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1337717

Title:
  L2-population fanout-cast leads to performance and scalability issue

Status in neutron:
  Confirmed

Bug description:
  
https://github.com/osrg/quantum/blob/master/neutron/plugins/ml2/drivers/l2pop/rpc.py

  def _notification_fanout(self, context, method, fdb_entries):
  
  self.fanout_cast(context,
   self.make_msg(method, fdb_entries=fdb_entries),
   topic=self.topic_l2pop_update)

  The fanout_cast will publish the message to all L2 agents listening on
  the "l2population" topic.

  If there are 1000 agents (a small cloud), and all of them are
  listening to the "l2population" topic, adding one new port will lead
  to 1000 messages. RabbitMQ can generally handle 10k messages per
  second, so the fanout_cast method leads to serious performance issues
  and makes the neutron service hard to scale; the achievable
  concurrency of VM port requests will be very small.

  No matter how many ports are in the subnet, the performance depends on
  the number of L2 agents listening on the topic.

  The way to solve the performance and scalability issue is to make the
  L2 agent listen on a topic related to the network, for example using
  the network UUID as the topic. If one port is activated in the subnet,
  only those agents hosting VMs of the same network should receive the
  L2-pop message. This is the partial mesh that was the original design
  goal, but it is not implemented yet.
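
  A hedged sketch of the partial-mesh idea, mirroring the method quoted
  above (topic naming is illustrative, not the actual l2pop
  implementation): cast on a topic scoped to the network so only agents
  hosting that network receive the update.

      def _notification_per_network(self, context, method, fdb_entries,
                                    network_id):
          topic = '%s.%s' % (self.topic_l2pop_update, network_id)
          self.fanout_cast(context,
                           self.make_msg(method, fdb_entries=fdb_entries),
                           topic=topic)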

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1337717/+subscriptions



[Yahoo-eng-team] [Bug 1413231] Re: Traceback when creating VxLAN network using CSR plugin

2015-10-09 Thread Armando Migliaccio
** Changed in: neutron
   Status: Opinion => Incomplete

** Changed in: neutron
   Importance: Low => Undecided

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1413231

Title:
  Traceback when creating VxLAN network using CSR plugin

Status in neutron:
  Incomplete

Bug description:
  OpenStack Version: Kilo

  localadmin@qa1:~$ nova-manage version
  2015.1
  localadmin@qa1:~$ neutron --version
  2.3.10

  I’m trying to run the vxlan tests on my multi-node setup and I’m
  seeing the following error/traceback in the
  screen-q-ciscocfgagent.log when creating a network with a vxlan
  profile.

  The error complains that it can’t find the nrouter-56f2cf VRF but it
  is present on the CSR.

  VRF is configured on the CSR – regular VLAN works fine

  csr#show run | inc vrf
  vrf definition Mgmt-intf
  vrf definition nrouter-56f2cf
   vrf forwarding nrouter-56f2cf
   vrf forwarding nrouter-56f2cf
   vrf forwarding nrouter-56f2cf
  ip nat inside source list acl_756 interface GigabitEthernet3.757 vrf 
nrouter-56f2cf overload
  ip nat inside source list acl_758 interface GigabitEthernet3.757 vrf 
nrouter-56f2cf overload
  ip nat inside source static 10.11.12.2 172.29.75.232 vrf nrouter-56f2cf 
match-in-vrf
  ip nat inside source static 10.11.12.4 172.29.75.233 vrf nrouter-56f2cf 
match-in-vrf
  ip nat inside source static 10.11.12.5 172.29.75.234 vrf nrouter-56f2cf 
match-in-vrf
  ip nat inside source static 210.168.1.2 172.29.75.235 vrf nrouter-56f2cf 
match-in-vrf
  ip nat inside source static 210.168.1.4 172.29.75.236 vrf nrouter-56f2cf 
match-in-vrf
  ip nat inside source static 210.168.1.5 172.29.75.237 vrf nrouter-56f2cf 
match-in-vrf
  ip route vrf nrouter-56f2cf 0.0.0.0 0.0.0.0 172.29.75.225
  csr#


  2015-01-19 12:22:09.896 DEBUG neutron.agent.linux.utils [-] 
  Command: ['ping', '-c', '5', '-W', '1', '-i', '0.2', '10.0.100.10']
  Exit code: 0
  Stdout: 'PING 10.0.100.10 (10.0.100.10) 56(84) bytes of data.\n64 bytes from 
10.0.100.10: icmp_seq=1 ttl=255 time=1.74 ms\n64 bytes from 10.0.100.10: 
icmp_seq=2 ttl=255 time=1.09 ms\n64 bytes from 10.0.100.10: icmp_seq=3 ttl=255 
time=0.994 ms\n64 bytes from 10.0.100.10: icmp_seq=4 ttl=255 time=0.852 ms\n64 
bytes 
  from 10.0.100.10: icmp_seq=5 ttl=255 time=0.892 ms\n\n--- 10.0.100.10 ping 
statistics ---\n5 packets transmitted, 5 received, 0% packet loss, time 
801ms\nrtt min/avg/max/mdev = 0.852/1.116/1.748/0.328 ms\n'
  Stderr: '' from (pid=13719) execute 
/opt/stack/neutron/neutron/agent/linux/utils.py:79
  2015-01-19 12:22:09.897 DEBUG neutron.plugins.cisco.cfg_agent.device_status 
[-] Hosting device: 27b14fc6-b1c9-4deb-8abe-ae3703a4af2d@10.0.100.10 is 
reachable. from (pid=13719) is_hosting_device_reachable 
/opt/stack/neutron/neutron/plugins/cisco/cfg_agent/device_status.py:115
  2015-01-19 12:22:10.121 INFO 
neutron.plugins.cisco.cfg_agent.device_drivers.csr1kv.csr1kv_routing_driver [-] 
VRFs:[]
  2015-01-19 12:22:10.122 ERROR 
neutron.plugins.cisco.cfg_agent.device_drivers.csr1kv.csr1kv_routing_driver [-] 
VRF nrouter-56f2cf not present
  2015-01-19 12:22:10.237 DEBUG 
neutron.plugins.cisco.cfg_agent.device_drivers.csr1kv.csr1kv_routing_driver [-] 
RPCReply for CREATE_SUBINTERFACE is protocoloperation-failederror
 from (pid=13719) _check_response 
/opt/stack/neutron/neutron/plugins/cisco/cfg_agent/device_drivers/csr1kv/csr1kv_routing_driver.py:676
  2015-01-19 12:22:10.238 ERROR 
neutron.plugins.cisco.cfg_agent.service_helpers.routing_svc_helper [-] Error 
executing snippet:CREATE_SUBINTERFACE. ErrorType:protocol 
ErrorTag:operation-failed.
  2015-01-19 12:22:10.238 ERROR 
neutron.plugins.cisco.cfg_agent.service_helpers.routing_svc_helper [-] Driver 
Exception on router:56f2cfbc-61c6-45dc-94d5-0cbb08b05053. Error is Error 
executing snippet:CREATE_SUBINTERFACE. ErrorType:protocol 
ErrorTag:operation-failed.

  
  2015-01-19 12:22:10.238 TRACE 
neutron.plugins.cisco.cfg_agent.service_helpers.routing_svc_helper Traceback 
(most recent call last):
  2015-01-19 12:22:10.238 TRACE 
neutron.plugins.cisco.cfg_agent.service_helpers.routing_svc_helper   File 
"/opt/stack/neutron/neutron/plugins/cisco/cfg_agent/service_helpers/routing_svc_helper.py",
 line 379, in _process_routers
  2015-01-19 12:22:10.238 TRACE 
neutron.plugins.cisco.cfg_agent.service_helpers.routing_svc_helper 
self._process_router(ri)
  2015-01-19 12:22:10.238 TRACE 
neutron.plugins.cisco.cfg_agent.service_helpers.routing_svc_helper   File 
"/opt/stack/neutron/neutron/plugins/cisco/cfg_agent/service_helpers/routing_svc_helper.py",
 line 452, in _process_router
  2015-01-19 12:22:10.238 TRACE 
neutron.plugins.cisco.cfg_agent.service_helpers.routing_svc_helper 
LOG.error(e)
  2015-01-19 12:22:10.238 TRACE 
neutron.plugins.cisco.cfg_agent.service_helpers.routing_svc_helper   File 
"/usr/local/lib/python2.7/dist-packages/oslo/utils/excutils.py", line 82, in 
__exit__
  

[Yahoo-eng-team] [Bug 1418786] Re: more than one port got created for VM

2015-10-09 Thread Armando Migliaccio
** Changed in: neutron
   Status: Opinion => Incomplete

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1418786

Title:
  more than one port got created for VM

Status in neutron:
  Incomplete

Bug description:
  If the server running the neutron-server service does not have enough
  resources to process requests quickly, then there is a high risk of
  multiple ports being created for a VM during the scheduling/networking
  process.

  How to reproduce:

  Just get an environment with a not-so-fast neutron-server node and/or
  MySQL server, and try to spawn a bunch of VMs.
  Some of them will get two ports created (in the neutron DB they will
  have the same device_id). If the system is very slow they could get
  three of them. If the VMs do get spawned, each will have only the last
  port that nova got from neutron, and that port will be the only active
  one.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1418786/+subscriptions



[Yahoo-eng-team] [Bug 1388230] Re: Checks for DB models and migrations sync not working

2015-10-09 Thread Henry Gessau
No.

** Changed in: neutron
   Status: Incomplete => Invalid

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1388230

Title:
  Checks for DB models and migrations sync not working

Status in neutron:
  Invalid

Bug description:
  I noticed a couple of issues, which might be related.

  
  1. "db-manage revision --autogenerate" on master with no code changes 
generates:

  def upgrade():
  op.drop_index('idx_autoinc_vr_id', 
table_name='ha_router_vrid_allocations')

  
  2. With the following change to the IPAllocation() model, the revision is not 
detected. Also, the unit tests for model/migration sync do not give an error.

  diff --git a/neutron/db/models_v2.py b/neutron/db/models_v2.py
  --- a/neutron/db/models_v2.py
  +++ b/neutron/db/models_v2.py
  @@ -98,8 +98,8 @@ class IPAllocation(model_base.BASEV2):
   
   port_id = sa.Column(sa.String(36), sa.ForeignKey('ports.id',
ondelete="CASCADE"),
  -nullable=True)
  -ip_address = sa.Column(sa.String(64), nullable=False, primary_key=True)
  +nullable=True, primary_key=True)
  +ip_address = sa.Column(sa.String(64), nullable=False)
   subnet_id = sa.Column(sa.String(36), sa.ForeignKey('subnets.id',
  ondelete="CASCADE"),
 nullable=False, primary_key=True)
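
  For reference, a minimal sketch of the comparison such a models/migrations
  sync check is expected to perform, using alembic's autogenerate API; the
  engine URL and the neutron.db.model_base.BASEV2 metadata import are
  assumptions, not taken from the neutron test:

    from alembic.autogenerate import compare_metadata
    from alembic.migration import MigrationContext
    from sqlalchemy import create_engine

    from neutron.db import model_base  # assumed location of the declarative base

    engine = create_engine('mysql://user:pass@localhost/neutron')
    mc = MigrationContext.configure(engine.connect())
    diff = compare_metadata(mc, model_base.BASEV2.metadata)
    # an empty diff means models and migrations agree; case 2 above should
    # produce a non-empty diff but currently does not
    print(diff)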

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1388230/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1369266] Re: HA router priority should be according to configurable priority of L3-agent

2015-10-09 Thread Armando Migliaccio
If Assaf says 'nay', I think this can't go ahead. Please pick it up, and
elaborate more.

** Changed in: neutron
   Status: Opinion => Won't Fix

** Changed in: neutron
   Status: Won't Fix => Incomplete

** Changed in: neutron
 Assignee: yalei wang (yalei-wang) => (unassigned)

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1369266

Title:
  HA router priority should be according to configurable priority of
  L3-agent

Status in neutron:
  Incomplete

Bug description:
  Currently all HA router instances have the same hard-coded priority (50).
  An admin should be able to assign a priority to each L3 agent so that the
  master is chosen accordingly (suppose you have an agent with less
  bandwidth than the others; you would like it to host the least possible
  number of active (master) instances).
  This will require extending the L3-agent API.

  This is blocked by bug:
  https://bugs.launchpad.net/neutron/+bug/1365429

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1369266/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1372792] Re: Inconsistent timestamp formats in ceilometer metering messages

2015-10-09 Thread Armando Migliaccio
** Changed in: neutron
   Status: Opinion => Incomplete

** Changed in: neutron
 Assignee: Eugene Nikanorov (enikanorov) => (unassigned)

** Changed in: neutron
   Importance: High => Low

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1372792

Title:
  Inconsistent timestamp formats in ceilometer metering messages

Status in neutron:
  Incomplete

Bug description:
  The messages generated by neutron-metering-agent contain timestamps in
  a different format than the other messages received through UDP from
  ceilometer-agent-notification. This creates unnecessary troubles for
  whoever is trying to decode the messages and do something useful with
  them.

  In particular, up to now I have found this issue only in the timestamp
  field of the bandwidth message.

  They contain UTC dates (I hope), but there is no Z at the end, and
  they contain a space instead of a T between date and time. In short,
  they are not in ISO8601 as the timestamps in the other messages. I
  found out about them because elasticsearch tries to parse them and
  fails, throwing away the message.
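
  A small sketch (field values are illustrative, not taken from real
  messages) of the two formats and one way a consumer could normalize the
  metering timestamp before indexing, assuming the value is already UTC:

    from datetime import datetime

    metering_ts = '2014-09-22 14:30:00'    # space separator, no trailing 'Z'
    other_ts = '2014-09-22T14:30:00Z'      # ISO8601 as in the other messages

    parsed = datetime.strptime(metering_ts, '%Y-%m-%d %H:%M:%S')
    normalized = parsed.isoformat() + 'Z'  # only valid if the value is UTC
    assert normalized == other_ts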

  This bug was filed against Ceilometer, but I have been redirected here:
  https://bugs.launchpad.net/ceilometer/+bug/1370607

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1372792/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1440705] Re: Creating listeners with invalid/empty tenantid is not throwing error

2015-10-09 Thread Armando Migliaccio
** Changed in: neutron
   Status: Opinion => New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1440705

Title:
  Creating listeners  with invalid/empty tenantid is not throwing error

Status in neutron:
  New

Bug description:
  Creating listeners with an invalid/empty tenant_id succeeds (with the
  logging_noop driver). It should raise an error during validation.
  The following are tempest test logs (with the logging_noop driver backend):


  
neutron_lbaas.tests.tempest.v2.api.test_listeners.ListenersTestJSON.test_create_listener_invalid_empty_tenant_id[negative]
  
--

  Captured traceback:
  ~~~
  Traceback (most recent call last):
File "neutron_lbaas/tests/tempest/v2/api/test_listeners.py", line 89, 
in test_create_listener_invalid_empty_tenant_id
  tenant_id="")
File 
"/opt/stack/neutron-lbaas/.tox/tempest/local/lib/python2.7/site-packages/testtools/testcase.py",
 line 422, in assertRaises
  self.assertThat(our_callable, matcher)
File 
"/opt/stack/neutron-lbaas/.tox/tempest/local/lib/python2.7/site-packages/testtools/testcase.py",
 line 435, in assertThat
  raise mismatch_error
  testtools.matchers._impl.MismatchError: > 
returned {u'default_pool_id': None, u'protocol': u'HTTP', u'description': u'', 
u'sni_container_ids': [], u'admin_state_up': True, u'loadbalancers': [{u'id': 
u'c57117f3-9675-440e-b41a-d3c51bfb1719'}], u'tenant_id': u'', 
u'default_tls_container_id': None, u'connection_limit': -1, u'protocol_port': 
8081, u'id': u'18d515d2-0dcb-4306-8dd8-c45ae1e068a6', u'name': u''}

  
  
neutron_lbaas.tests.tempest.v2.api.test_listeners.ListenersTestJSON.test_create_listener_invalid_tenant_id[negative]
  


  Captured traceback:
  ~~~
  Traceback (most recent call last):
File "neutron_lbaas/tests/tempest/v2/api/test_listeners.py", line 76, 
in test_create_listener_invalid_tenant_id
  tenant_id="&^%123")
File 
"/opt/stack/neutron-lbaas/.tox/tempest/local/lib/python2.7/site-packages/testtools/testcase.py",
 line 422, in assertRaises
  self.assertThat(our_callable, matcher)
File 
"/opt/stack/neutron-lbaas/.tox/tempest/local/lib/python2.7/site-packages/testtools/testcase.py",
 line 435, in assertThat
  raise mismatch_error
  testtools.matchers._impl.MismatchError: > 
returned {u'default_pool_id': None, u'protocol': u'HTTP', u'description': u'', 
u'sni_container_ids': [], u'admin_state_up': True, u'loadbalancers': [{u'id': 
u'c57117f3-9675-440e-b41a-d3c51bfb1719'}], u'tenant_id': u'&^%123', 
u'default_tls_container_id': None, u'connection_limit': -1, u'protocol_port': 
8080, u'id': u'e648907e-758c-44be-be9e-9249adab1071', u'name': u''}
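
  For illustration only, the kind of check the reporter expects during
  validation; this is a hedged sketch, not code from neutron-lbaas, and the
  accepted tenant_id pattern is an assumption:

    import re

    def validate_tenant_id(tenant_id):
        # reject empty values and values with characters outside the assumed set
        if not tenant_id or not re.match(r'^[A-Za-z0-9._-]+$', tenant_id):
            raise ValueError('Invalid tenant_id: %r' % tenant_id)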

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1440705/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1498704] Re: Tenant junk entries found in the controller Even after deletion

2015-10-09 Thread Armando Migliaccio
** Changed in: neutron
   Status: Opinion => Incomplete

** Changed in: neutron
 Assignee: Hong Hui Xiao (xiaohhui) => (unassigned)

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1498704

Title:
  Tenant junk entries found in the controller Even after deletion

Status in neutron:
  Incomplete

Bug description:
  Steps to Reproduce:

  1) Create a few tenants (maybe 4 to 5)
  2) Create networks, subnets, and routers and connect them
  3) Create vpnaas (site-to-site connection) between two tenants
  4) Boot up VMs and docker containers behind the vpnaas
  5) Send PING traffic between the docker containers across tenants
  6) After the test, delete all the namespaces, networks, tenants, etc.
  7) Check in the OpenStack dashboard and CLI: the networks related to
  different tenants are still listed as "Active" with tenant names that
  are ALREADY DELETED.

  This issue is found in Juno release:
  root@controller:~# nova-manage version
  2014.2.3
  root@controller:~#

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1498704/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1450067] Re: Server with ML2 & L3 service plugin exposes dvr extension even if OVS agent is unused

2015-10-09 Thread Armando Migliaccio
** Changed in: neutron
   Status: Opinion => Confirmed

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1450067

Title:
  Server with ML2 & L3 service plugin exposes dvr extension even if OVS
  agent is unused

Status in neutron:
  Confirmed

Bug description:
  In a deployment using the L3 service plugin, the DVR extension is
  always declared as available, even if the ML2 OVS mechanism driver is
  not configured. Deployments could be using the LB mechanism driver or
  others. Not only is the extension declared, but DVR router creation is
  not blocked either. We should not rely only on documentation; we should
  additionally provide the expected behavior (fail gracefully and do not
  expose the extension).

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1450067/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1461102] Re: cascade in orm relationships shadows ON DELETE CASCADE

2015-10-09 Thread Armando Migliaccio
Have you formulated an opinion?

** Changed in: neutron
   Status: Opinion => Incomplete

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1461102

Title:
  cascade in orm relationships shadows ON DELETE CASCADE

Status in neutron:
  Incomplete

Bug description:
  In [1] there is a good discussion on how the 'cascade' property
  specified for sqlalchemy.orm.relationship interacts with the 'ON DELETE
  CASCADE' specified in DDL.

  I stumbled on this when I was doing some DB access profiling and
  noticed multiple DELETE statements were emitted for a delete subnet
  operation [2], whereas I expected a single DELETE statement only; I
  expected that the cascade behaviour configured on db tables would have
  taken care of DNS servers, host routes, etc.

  What is happening is that SQLAlchemy is performing ORM-level cascading
  rather than relying on the database foreign key cascade options. And
  it's doing this because we told it to do so. As the SQLAlchemy
  documentation points out [3], there is no need to add the complexity of
  ORM relationships if foreign keys are correctly configured on the
  database, and the passive_deletes option should be used.

  Enabling this option in place of all the cascade options for the
  relationship caused a single DELETE statement to be issued [4].
  This is not a massive issue (possibly the time spent in extra queries is
  just .5ms), but surely it is something worth doing - if nothing else
  because it seems Neutron is not using SQLAlchemy in the correct way.
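
  A minimal sketch (illustrative models, not the actual neutron ones) of the
  change described above: keep ON DELETE CASCADE on the foreign key and tell
  the ORM to rely on it with passive_deletes:

    import sqlalchemy as sa
    from sqlalchemy.ext.declarative import declarative_base
    from sqlalchemy.orm import relationship

    Base = declarative_base()

    class Subnet(Base):
        __tablename__ = 'subnets'
        id = sa.Column(sa.String(36), primary_key=True)
        # let the database cascade the delete instead of the ORM
        dns_nameservers = relationship('DNSNameServer',
                                       cascade='all, delete-orphan',
                                       passive_deletes=True)

    class DNSNameServer(Base):
        __tablename__ = 'dnsnameservers'
        address = sa.Column(sa.String(128), primary_key=True)
        subnet_id = sa.Column(sa.String(36),
                              sa.ForeignKey('subnets.id', ondelete='CASCADE'),
                              primary_key=True)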

  As someone who's been making this mistake for ages, for what it's worth
  this has been a moment where I realized that sometimes it's good to be
  told RTFM.

  
  [1] http://docs.sqlalchemy.org/en/latest/orm/cascades.html
  [2] http://paste.openstack.org/show/256289/
  [3] http://docs.sqlalchemy.org/en/latest/orm/collections.html#passive-deletes
  [4] http://paste.openstack.org/show/256301/

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1461102/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1372441] Re: Creating port in dual-stack network requires IPv4 and IPv6 addresses to be allocated

2015-10-09 Thread Armando Migliaccio
** Changed in: neutron
   Status: Opinion => Confirmed

** Changed in: neutron
 Assignee: Alexey I. Froloff (raorn) => (unassigned)

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1372441

Title:
  Creating port in dual-stack network requires IPv4 and IPv6 addresses
  to be allocated

Status in neutron:
  Confirmed

Bug description:
  Currently, when creating a port in a dual-stack network (a network with
  one or more IPv4 subnets and one or more IPv6 subnets), Neutron allocates
  a fixed IP from both the v4 and v6 subnets.  If one of the subnets has no
  available addresses, it is treated as an error.

  IMO, this is wrong.  The operation should succeed if at least one address
  (IPv4 or IPv6) was allocated.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1372441/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1456871] Re: objects.InstanceList.get_all(context, ['metadata', 'system_metadata']) return error can't locate strategy for %s %s" % (cls, key)

2015-10-09 Thread Launchpad Bug Tracker
[Expired for OpenStack Compute (nova) because there has been no activity
for 60 days.]

** Changed in: nova
   Status: Incomplete => Expired

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1456871

Title:
  objects.InstanceList.get_all(context, ['metadata','system_metadata'])
  return error can't locate strategy for %s %s" % (cls, key)

Status in OpenStack Compute (nova):
  Expired

Bug description:
  When invoke

  objects.InstanceList.get_all(context, ['metadata','system_metadata'])

  
  Then found the nova/objects/instance.py  function  
_expected_cols(expected_attrs):

  will return list ['metadata','system_metadata', 'extra',
  'extra.flavor'], then in the db query it throw the error: can't locate
  strategy for 
  (('lazy', 'joined'),)

  Could anyone can help have a look? Thanks!

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1456871/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1438159] Re: Made neutron agents silent by using AMQP

2015-10-09 Thread Armando Migliaccio
** Tags added: loadimpact

** Changed in: neutron
   Status: Opinion => Confirmed

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1438159

Title:
  Made neutron agents silent by using AMQP

Status in neutron:
  Confirmed

Bug description:
  Problem: Neutron agents do a lot of periodic tasks, each of which leads to
  an RPC call plus a database transaction that does not even provide new
  information, because nothing has changed.
  At scale this behaviour can be called a `DDOS attack`; generally this kind
  of architecture scales badly and can be considered an anti-pattern.

  Instead of periodic polling, we can leverage the AMQP broker's binding
  capabilities.
  Neutron has many situations, such as security group rule changes or DVR
  related changes, which need to be communicated to multiple agents, but
  usually not to all agents.

  The agent at startup needs to synchronise as usual, but during the sync
  the agent can subscribe to the interesting events to avoid the periodic
  tasks. (Note: after the first subscribe loop a second one is needed so as
  not to miss changes made during the subscribe process.)

  AMQP queues with 'auto-delete' can be considered a reliable source of
  information that does not miss any event notification.
  On connection loss or a full broker cluster failure the agent needs to
  re-sync everything guarded in this way;
  in these cases the queue will disappear, so the situation is easily
  detectable.

  1. Create a direct exchange for each kind of resource that needs to be
  synchronised in this way, for example 'neutron.securitygroups'. The
  exchange declaration needs to happen at q-svc start-up time or after a
  full broker cluster failure (a not-found exception will signal it). The
  exchange SHOULD NOT be redeclared or verified at every message publish.

  2. Every agent creates a dedicated per-agent queue with the auto-delete
  flag; if the agent already maintains a queue with this property it MAY
  reuse that one. Agents SHOULD avoid creating multiple queues per resource
  type. The messages MUST contain type information.
  3. Each agent creates a binding between its queue and the resource type
  exchange when it realises it depends on the resource, for example when it
  maintains at least one port with the given security group. (Agents need
  to remove the binding when they stop using it.)
  4. The q-svc publishes just a single message when a resource related
  change happens. The routing key is the uuid.

  Alternatively a topic exchange can be used, with a single exchange.
  In this case the routing keys MUST contain the resource type, like:
  neutron.. ,
  but this type of exchange is generally more expensive than a direct
  exchange (pattern matching), and is only useful if you have agents that
  need to listen to ALL events related to a type, while others are
  interested in just a few of them.

  Edit: 
  Bindings MAY be added by the sender as well.
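
  A minimal kombu sketch of the proposed pattern (the exchange name, queue
  name and routing key are illustrative placeholders, not an agreed naming
  scheme):

    from kombu import Connection, Exchange, Queue

    sg_exchange = Exchange('neutron.securitygroups', type='direct',
                           durable=False)

    with Connection('amqp://guest:guest@localhost//') as conn:
        channel = conn.channel()
        # dedicated per-agent queue, removed by the broker when the agent goes away
        agent_queue = Queue('sg-agent-host1', exchange=sg_exchange,
                            routing_key='SECURITY-GROUP-UUID',
                            auto_delete=True, channel=channel)
        agent_queue.declare()  # declares the queue and binds it to the exchange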

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1438159/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1437762] Re: portbindingsport does not have all ports causing ml2 migration failure

2015-10-09 Thread Armando Migliaccio
** Changed in: neutron
   Status: Opinion => Incomplete

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1437762

Title:
  portbindingsport does not have all ports causing ml2 migration failure

Status in neutron:
  Incomplete

Bug description:
  I am trying to move from Havana to Icehouse on Ubuntu 14.04. The
  migration is failing because the ml2 migration expects the
  portbindingsport table to contain all the ports. However my record
  count in ports is 460 and in portbindingsport it is just 192. Thus only
  192 records get added to ml2_port_bindings.

  The consequence of this is that the network node and the compute nodes
  are adding "unbound" interfaces to the ml2_port_bindings table.
  Additionally, nova-compute updates its network info with wrong
  information, causing a subsequent restart of nova-compute to fail with
  an error of "vif_type=unbound". Besides that, the instances on the
  nodes do not get network connectivity.

  Let's just say I am glad that I made a backup, because the DB gets into
  an inconsistent state every time now.
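
  A hedged way (not from the report) to count ports that lack a binding row
  before attempting the migration; the table name below follows the report's
  spelling and must be verified against the actual Havana schema:

    from sqlalchemy import create_engine, text

    engine = create_engine('mysql://user:pass@localhost/neutron')
    query = text("""
        SELECT COUNT(*) FROM ports p
        LEFT JOIN portbindingsport b ON b.port_id = p.id
        WHERE b.port_id IS NULL
    """)
    with engine.connect() as conn:
        print(conn.execute(query).scalar())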

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1437762/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1182704] Re: quantum-ns-metadata-proxy path is not configurable

2015-10-09 Thread Armando Migliaccio
** No longer affects: neutron

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1182704

Title:
  quantum-ns-metadata-proxy path is not configurable

Status in tripleo:
  Fix Released

Bug description:
  We are running quantum in a virtualenv, which means quantum-ns-
  metadata-proxy isn't on root's path; adjusting the filters etc is
  doable, but the netns command run has just 'quantum-ns-metadata-proxy'
  to execute, rather than allowing us to supply the full path.

To manage notifications about this bug go to:
https://bugs.launchpad.net/tripleo/+bug/1182704/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1177579] Re: deletion of token for client causes failure

2015-10-09 Thread Armando Migliaccio
** No longer affects: neutron

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1177579

Title:
  deletion of token for client causes failure

Status in OpenStack Compute (nova):
  Fix Released

Bug description:
  I used a script to get a token, used that token to delete a server, then
  deleted the token (in an effort to clean up). The server deletion got
  stuck, and the following error was logged:

  unsupported operand type(s) for +: 'NoneType' and 'str'
    File "/usr/lib/python2.6/site-packages/nova/compute/manager.py", line 231, 
in decorated_function
      return function(self, context, *args, **kwargs)
    File "/usr/lib/python2.6/site-packages/nova/compute/manager.py", line 1401, 
in terminate_instance
      do_terminate_instance(instance, bdms)
    File "/usr/lib/python2.6/site-packages/nova/openstack/common/lockutils.py", 
line 242, in inner
      retval = f(*args, **kwargs)
    File "/usr/lib/python2.6/site-packages/nova/compute/manager.py", line 1393, 
in do_terminate_instance
      reservations=reservations)
    File "/usr/lib/python2.6/site-packages/nova/hooks.py", line 88, in inner
      rv = f(*args, **kwargs)
    File "/usr/lib/python2.6/site-packages/nova/compute/manager.py", line 1356, 
in _delete_instance
      project_id=project_id)
    File "/usr/lib64/python2.6/contextlib.py", line 23, in __exit__
      self.gen.next()
    File "/usr/lib/python2.6/site-packages/nova/compute/manager.py", line 1329, 
in _delete_instance
      self._shutdown_instance(context, instance, bdms)
    File "/usr/lib/python2.6/site-packages/nova/compute/manager.py", line 1254, 
in _shutdown_instance
      network_info = self._get_instance_nw_info(context, instance)
    File "/usr/lib/python2.6/site-packages/nova/compute/manager.py", line 691, 
in _get_instance_nw_info
      instance, conductor_api=self.conductor_api)
    File "/usr/lib/python2.6/site-packages/nova/network/quantumv2/api.py", line 
363, in get_instance_nw_info
      result = self._get_instance_nw_info(context, instance, networks)
    File "/usr/lib/python2.6/site-packages/nova/network/quantumv2/api.py", line 
371, in _get_instance_nw_info
      nw_info = self._build_network_info_model(context, instance, networks)
    File 
"/usr/lib/python2.6/site-packages/nova/network/quantumv2/ibmpowervm_api.py", 
line 27, in _build_network_info_model
      networks)
    File "/usr/lib/python2.6/site-packages/nova/network/quantumv2/api.py", line 
794, in _build_network_info_model
      instance['project_id'])
    File "/usr/lib/python2.6/site-packages/nova/network/quantumv2/api.py", line 
118, in _get_available_networks
      nets = quantum.list_networks(**search_opts).get('networks', [])
    File "/usr/lib/python2.6/site-packages/quantumclient/v2_0/client.py", line 
108, in with_params
      ret = self.function(instance, *args, **kwargs)
    File "/usr/lib/python2.6/site-packages/quantumclient/v2_0/client.py", line 
294, in list_networks
      **_params)
    File "/usr/lib/python2.6/site-packages/quantumclient/v2_0/client.py", line 
1002, in list
      for r in self._pagination(collection, path, **params):
    File "/usr/lib/python2.6/site-packages/quantumclient/v2_0/client.py", line 
1015, in _pagination
      res = self.get(path, params=params)
    File "/usr/lib/python2.6/site-packages/quantumclient/v2_0/client.py", line 
988, in get
      headers=headers, params=params)
    File "/usr/lib/python2.6/site-packages/quantumclient/v2_0/client.py", line 
973, in retry_request
      headers=headers, params=params)
    File "/usr/lib/python2.6/site-packages/quantumclient/v2_0/client.py", line 
907, in do_request
      resp, replybody = self.httpclient.do_request(action, method, body=body)
    File "/usr/lib/python2.6/site-packages/quantumclient/client.py", line 154, 
in do_request
      self.authenticate()
    File "/usr/lib/python2.6/site-packages/quantumclient/client.py", line 183, 
in authenticate
      token_url = self.auth_url + "/tokens"

  Deleting the token when the client is done with it should not cause a
  problem. If OpenStack needs a token internally, it should get one
  itself. And indeed we found that the code tried to do just that when
  the token created by the test script stopped working, but the
  reauthentication code is flawed... it does not have the auth_url,
  which led to the error shown in the stacktrace.

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1177579/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1207196] Re: Documentation tells that to create subnet and port (using xml formatted body), no request body is required while without request body subnet or port can't be created

2015-10-09 Thread Armando Migliaccio
** No longer affects: neutron

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1207196

Title:
  Documentation tells that to create subnet and port (using xml
  formatted body), no request body is required while without request
  body subnet or port can't be created

Status in openstack-api-site:
  Fix Released

Bug description:
  In the page http://api.openstack.org/api-ref.html#netconn-api
  Subnet and Port section tells that subnet/port creation does not require a 
request body.

  While without request body no subnet/port can be created.
  When I tried subnet creation without body then 400(bad request response) and 
body required error displayed.

To manage notifications about this bug go to:
https://bugs.launchpad.net/openstack-api-site/+bug/1207196/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1268955] Re: OVS agent updates the wrong port when using Xen + Neutron with HVM or PVHVM

2015-10-09 Thread Armando Migliaccio
** Changed in: neutron
   Status: Opinion => Incomplete

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1268955

Title:
  OVS agent updates the wrong port when using Xen + Neutron with HVM or
  PVHVM

Status in neutron:
  Incomplete

Bug description:
  Environment
  ==
  - Xen Server 6.2
  - OpenStack Havana installed with Packstack
  - Neutron OVS agent using VLAN

  From time to time, when an instance is started, it fails to get
  network connectivity. As a result the instance cannot get its IP
  address from DHCP and it remains unreachable.

  After further investigation, it appears that the OVS agent running on
  the compute node is updating the wrong OVS port because on startup, 2
  ports exist for the same instance: vifX.0 and tapX.0. The agent
  updates whatever port is returned in first position (see logs below).
  Note that the tapX.0 port is only transient and disappears after a few
  seconds.

  Workaround
  ==

  Manually update the OVS port on dom0:

  $ ovs-vsctl set Port vif17.0 tag=1

  OVS Agent logs
  

  2014-01-14 14:15:11.382 18268 DEBUG neutron.agent.linux.utils [-] Running 
command: ['/usr/bin/neutron-rootwrap-xen-dom0', '/etc/neutron/rootwrap.conf', 
'ovs-vsctl', '--timeout=2', '--', '--columns=external_ids,name,ofport', 'find', 
'Interface', 'external_ids:iface-id="98679ab6-b879-4b1b-a524-01696959d468"'] 
execute /usr/lib/python2.6/site-packages/neutron/agent/linux/utils.py:43
  2014-01-14 14:15:11.403 18268 DEBUG qpid.messaging.io.raw [-] SENT[3350c68]: 
'\x0f\x01\x00\x19\x00\x01\x00\x00\x00\x00\x00\x00\x04\n\x01\x00\x07\x00\x010\x00\x00\x00\x00\x01\x0f\x00\x00\x1a\x00\x00\x00\x00\x00\x00\x00\x00\x02\n\x01\x00\x00\x08\x00\x00\x00\x00\x00\x00\x00\x81'
 writeable /usr/lib/python2.6/site-packages/qpid/messaging/driver.py:480
  2014-01-14 14:15:11.649 18268 DEBUG neutron.agent.linux.utils [-]
  Command: ['/usr/bin/neutron-rootwrap-xen-dom0', '/etc/neutron/rootwrap.conf', 
'ovs-vsctl', '--timeout=2', '--', '--columns=external_ids,name,ofport', 'find', 
'Interface', 'external_ids:iface-id="98679ab6-b879-4b1b-a524-01696959d468"']
  Exit code: 0
  Stdout: 'external_ids: {attached-mac="fa:16:3e:46:1e:91", 
iface-id="98679ab6-b879-4b1b-a524-01696959d468", iface-status=active, 
xs-network-uuid="b2bf90df-be17-a4ff-5c1e-3d69851f508a", 
xs-vif-uuid="2d2718d8-6064-e734-2737-cdcb4e06efc4", 
xs-vm-uuid="7f7f1918-3773-d97c-673a-37843797f70a"}\nname: 
"tap29.0"\nofport  : 52\n\nexternal_ids: 
{attached-mac="fa:16:3e:46:1e:91", 
iface-id="98679ab6-b879-4b1b-a524-01696959d468", iface-status=inactive, 
xs-network-uuid="b2bf90df-be17-a4ff-5c1e-3d69851f508a", 
xs-vif-uuid="2d2718d8-6064-e734-2737-cdcb4e06efc4", 
xs-vm-uuid="7f7f1918-3773-d97c-673a-37843797f70a"}\nname: 
"vif29.0"\nofport  : 51\n\n'
  Stderr: '' execute 
/usr/lib/python2.6/site-packages/neutron/agent/linux/utils.py:60
  2014-01-14 14:15:11.650 18268 INFO 
neutron.plugins.openvswitch.agent.ovs_neutron_agent [-] Port 
98679ab6-b879-4b1b-a524-01696959d468 updated. Details: {u'admin_state_up': 
True, u'network_id': u'ad37f107-074b-4c58-8f36-4705533afb8d', 
u'segmentation_id': 100, u'physical_network': u'default', u'device': 
u'98679ab6-b879-4b1b-a524-01696959d468', u'port_id': 
u'98679ab6-b879-4b1b-a524-01696959d468', u'network_type': u'vlan'}
  2014-01-14 14:15:11.650 18268 DEBUG neutron.agent.linux.utils [-] Running 
command: ['/usr/bin/neutron-rootwrap-xen-dom0', '/etc/neutron/rootwrap.conf', 
'ovs-vsctl', '--timeout=2', 'set', 'Port', 'tap29.0', 'tag=1'] execute 
/usr/lib/python2.6/site-packages/neutron/agent/linux/utils.py:43
  2014-01-14 14:15:11.913 18268 DEBUG neutron.agent.linux.utils [-]
  Command: ['/usr/bin/neutron-rootwrap-xen-dom0', '/etc/neutron/rootwrap.conf', 
'ovs-vsctl', '--timeout=2', 'set', 'Port', 'tap29.0', 'tag=1']
  Exit code: 0
  Stdout: '\n'
  Stderr: '' execute 
/usr/lib/python2.6/site-packages/neutron/agent/linux/utils.py:60

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1268955/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1277217] Re: Cisco plugin should use common network type consts

2015-10-09 Thread Armando Migliaccio
** Also affects: networking-cisco
   Importance: Undecided
   Status: New

** No longer affects: neutron

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1277217

Title:
  Cisco plugin should use common network type consts

Status in networking-cisco:
  New

Bug description:
  The Cisco plugin was not covered by
  4cdccd69a45aec19d547c10f29f61359b69ad6c1

To manage notifications about this bug go to:
https://bugs.launchpad.net/networking-cisco/+bug/1277217/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1504234] Re: Checkmarks in project picker in the wrong place

2015-10-09 Thread Brad Pokorny
Thanks, Lin.  Rerunning collectstatic did the trick.  I didn't
specifically have to drop the static folder, but just ran these
commands:

$ ./manage.py collectstatic
$ ./manage.py compress

And the checkmark is now in the right place.  So it seems like we now need
to run collectstatic after switching Horizon to use v3 keystone.

I'll close this bug as invalid.  Could you link the similar bug you
mentioned to this one?

** Changed in: horizon
   Status: Incomplete => Invalid

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1504234

Title:
  Checkmarks in project picker in the wrong place

Status in OpenStack Dashboard (Horizon):
  Invalid

Bug description:
  The checkmark showing the current project has changed positions in
  some cases and looks strange.  I've seen cases where it's still in the
  right place, but other places where it looks like this:

  http://imgur.com/1oTPf9M

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1504234/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1393780] Re: Do not import objects, only modules

2015-10-09 Thread Roman Vasilets
** Changed in: glance
 Assignee: Roman Vasilets (rvasilets) => (unassigned)

** Changed in: glance
   Status: In Progress => Invalid

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to Glance.
https://bugs.launchpad.net/bugs/1393780

Title:
  Do not import objects, only modules

Status in Glance:
  Invalid

Bug description:
  Per the OpenStack hacking guideline [H302]
  (http://docs.openstack.org/developer/hacking/), we do not import objects,
  only modules (illustrated below). Exceptions are:
  imports from the migrate package
  imports from the sqlalchemy package
  imports from the oslo-incubator openstack.common.gettextutils module
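
  A short illustration of the guideline (the module and object names are
  examples only, not taken from this bug):

    # H302 violation: importing an object
    #   from oslo_utils.timeutils import utcnow
    #
    # H302 compliant: import the module and reference the object through it
    from oslo_utils import timeutils

    print(timeutils.utcnow())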

To manage notifications about this bug go to:
https://bugs.launchpad.net/glance/+bug/1393780/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1504527] [NEW] network_device_mtu not documented in agent config files

2015-10-09 Thread Ihar Hrachyshka
Public bug reported:

There is no mention of network_device_mtu in the agent config files, while
it's a supported and useful option.

-bash-4.2$ grep network_device_mtu -r etc/
-bash-4.2$

** Affects: neutron
 Importance: Low
 Status: Confirmed


** Tags: l3-ipam-dhcp linuxbridge low-hanging-fruit ovs

** Changed in: neutron
   Importance: Undecided => Low

** Changed in: neutron
   Status: New => Confirmed

** Tags added: l3-ipam-dhcp linuxbridge low-hanging-fruit ovs

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1504527

Title:
  network_device_mtu not documented in agent config files

Status in neutron:
  Confirmed

Bug description:
  There is no mention of network_device_mtu in the agent config files,
  while it's a supported and useful option.

  -bash-4.2$ grep network_device_mtu -r etc/
  -bash-4.2$
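
  A documentation-style example of what could be added to the agent config
  files (the file shown and the sample value are assumptions for
  illustration):

    # etc/l3_agent.ini (example; the same note would apply to other agent files)
    [DEFAULT]
    # MTU for network devices created by the agent (sample value only).
    # network_device_mtu = 1500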

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1504527/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1504235] Re: Can't run instance with non-ascii symbols in user_data

2015-10-09 Thread Markus Zoeller (markus_z)
This seems to go in the same direction as bug 1472999. I guess the
novaclient has to encode the input as well as the nova API. Whoever
takes this bug could maybe also update the wiki page from 2013:
https://wiki.openstack.org/wiki/Encoding

** Also affects: python-novaclient
   Importance: Undecided
   Status: New

** Tags added: api

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1504235

Title:
  Can't run instance with non-ascii symbols in user_data

Status in OpenStack Compute (nova):
  New
Status in python-novaclient:
  New

Bug description:
  Gating of ec2api  project has a test that runs instance with user data that 
contains non-ascii symbols under python 2.7.
  And now it fails because of fix bug 
https://bugs.launchpad.net/nova/+bug/1502583

  Version: master branch

  stack trace from nova compute [1]:

  2015-10-08 16:50:00.267 ERROR nova.compute.manager 
[req-eb417914-b76f-42b3-a825-7e86785d2bbe user-9eea8ece project-e7ccb1a9] 
[instance: 529a79f0-6707-4882-af0f-63a5af1fa111] Instance failed to spawn
  2015-10-08 16:50:00.267 32724 ERROR nova.compute.manager [instance: 
529a79f0-6707-4882-af0f-63a5af1fa111] Traceback (most recent call last):
  2015-10-08 16:50:00.267 32724 ERROR nova.compute.manager [instance: 
529a79f0-6707-4882-af0f-63a5af1fa111]   File 
"/opt/stack/new/nova/nova/compute/manager.py", line 2172, in _build_resources
  2015-10-08 16:50:00.267 32724 ERROR nova.compute.manager [instance: 
529a79f0-6707-4882-af0f-63a5af1fa111] yield resources
  2015-10-08 16:50:00.267 32724 ERROR nova.compute.manager [instance: 
529a79f0-6707-4882-af0f-63a5af1fa111]   File 
"/opt/stack/new/nova/nova/compute/manager.py", line 2019, in 
_build_and_run_instance
  2015-10-08 16:50:00.267 32724 ERROR nova.compute.manager [instance: 
529a79f0-6707-4882-af0f-63a5af1fa111] block_device_info=block_device_info)
  2015-10-08 16:50:00.267 32724 ERROR nova.compute.manager [instance: 
529a79f0-6707-4882-af0f-63a5af1fa111]   File 
"/opt/stack/new/nova/nova/virt/libvirt/driver.py", line 2437, in spawn
  2015-10-08 16:50:00.267 32724 ERROR nova.compute.manager [instance: 
529a79f0-6707-4882-af0f-63a5af1fa111] admin_pass=admin_password)
  2015-10-08 16:50:00.267 32724 ERROR nova.compute.manager [instance: 
529a79f0-6707-4882-af0f-63a5af1fa111]   File 
"/opt/stack/new/nova/nova/virt/libvirt/driver.py", line 2942, in _create_image
  2015-10-08 16:50:00.267 32724 ERROR nova.compute.manager [instance: 
529a79f0-6707-4882-af0f-63a5af1fa111] cdb.make_drive(configdrive_path)
  2015-10-08 16:50:00.267 32724 ERROR nova.compute.manager [instance: 
529a79f0-6707-4882-af0f-63a5af1fa111]   File 
"/opt/stack/new/nova/nova/virt/configdrive.py", line 163, in make_drive
  2015-10-08 16:50:00.267 32724 ERROR nova.compute.manager [instance: 
529a79f0-6707-4882-af0f-63a5af1fa111] self._write_md_files(tmpdir)
  2015-10-08 16:50:00.267 32724 ERROR nova.compute.manager [instance: 
529a79f0-6707-4882-af0f-63a5af1fa111]   File 
"/opt/stack/new/nova/nova/virt/configdrive.py", line 98, in _write_md_files
  2015-10-08 16:50:00.267 32724 ERROR nova.compute.manager [instance: 
529a79f0-6707-4882-af0f-63a5af1fa111] self._add_file(basedir, data[0], 
data[1])
  2015-10-08 16:50:00.267 32724 ERROR nova.compute.manager [instance: 
529a79f0-6707-4882-af0f-63a5af1fa111]   File 
"/opt/stack/new/nova/nova/virt/configdrive.py", line 90, in _add_file
  2015-10-08 16:50:00.267 32724 ERROR nova.compute.manager [instance: 
529a79f0-6707-4882-af0f-63a5af1fa111] f.write(data.encode('utf-8'))
  2015-10-08 16:50:00.267 32724 ERROR nova.compute.manager [instance: 
529a79f0-6707-4882-af0f-63a5af1fa111] UnicodeDecodeError: 'ascii' codec can't 
decode byte 0xd0 in position 36: ordinal not in range(128)
  2015-10-08 16:50:00.267 32724 ERROR nova.compute.manager [instance: 
529a79f0-6707-4882-af0f-63a5af1fa111] 


  
  [1] 
http://logs.openstack.org/50/232550/1/check/gate-functional-neutron-dsvm-ec2api/2eda628/logs/screen-n-cpu.txt.gz#_2015-10-08_16_50_00_267
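
  A Python 2 sketch of the failure mode (the byte string below is just an
  example of non-ASCII UTF-8 user data, not the actual test data):

    # user_data arrives as a UTF-8 encoded byte string
    data = '\xd0\x9f\xd1\x80\xd0\xb8\xd0\xb2\xd0\xb5\xd1\x82'

    try:
        # str.encode() first decodes the bytes as ASCII, which fails here
        data.encode('utf-8')
    except UnicodeDecodeError as exc:
        print(exc)

    # handling the value as text avoids the implicit ASCII decode
    text = data.decode('utf-8')
    text.encode('utf-8')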

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1504235/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1504557] [NEW] timeutils.isotime() deprecated and replaced

2015-10-09 Thread Brandon Palm
Public bug reported:

According to the oslo document here:
http://docs.openstack.org/developer/oslo.utils/api/timeutils.html#oslo_utils.timeutils.isotime

oslo_utils.timeutils.isotime() is being deprecated.  It is being used in 
neutron in:
/neutron/db/agents_db.py

You can see it causes a stderr in the gate checks here for example:
http://logs.openstack.org/82/228582/13/check/gate-neutron-python34/5b36c34/console.html

** Affects: neutron
 Importance: Undecided
 Assignee: Brandon Palm (bapalm)
 Status: In Progress

** Changed in: neutron
 Assignee: (unassigned) => Brandon Palm (bapalm)

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1504557

Title:
  timeutils.isotime() deprecated and replaced

Status in neutron:
  In Progress

Bug description:
  According to the oslo document here:
  
http://docs.openstack.org/developer/oslo.utils/api/timeutils.html#oslo_utils.timeutils.isotime

  oslo_utils.timeutils.isotime() is being deprecated.  It is being used in 
neutron in:
  /neutron/db/agents_db.py

  You can see it causes a stderr in the gate checks here for example:
  
http://logs.openstack.org/82/228582/13/check/gate-neutron-python34/5b36c34/console.html
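
  A hedged sketch of the replacement direction suggested by the deprecation
  note; note that isoformat() does not append the trailing 'Z' that
  isotime() produced, so callers comparing strings need to account for that:

    from oslo_utils import timeutils

    heartbeat = timeutils.utcnow()
    # deprecated: timeutils.isotime(heartbeat)
    iso = heartbeat.isoformat()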

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1504557/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1504540] [NEW] n-cauth failed in devstack NoSuchOptError consoleauth_topic

2015-10-09 Thread Oleksii Zamiatin
Public bug reported:

Deploy default devstack configuration, see this error in n-cauth screen:

ozamiatin@ubuntu:~/devstack$ /usr/local/bin/nova-consoleauth --config-file 
/etc/nova/nova.conf & echo $! >/opt/stack/status/stack/n-cauth.pid; fg || echo 
"n-cauth failed to start" | tee "/opt/stack/status/stack/n-cauth.failure"
[1] 17664
/usr/local/bin/nova-consoleauth --config-file /etc/nova/nova.conf
No handlers could be found for logger "oslo_config.cfg"
2015-10-09 13:12:00.345 CRITICAL nova [-] NoSuchOptError: no such option: 
consoleauth_topic

2015-10-09 13:12:00.345 TRACE nova Traceback (most recent call last):
2015-10-09 13:12:00.345 TRACE nova   File "/usr/local/bin/nova-consoleauth", 
line 10, in 
2015-10-09 13:12:00.345 TRACE nova sys.exit(main())
2015-10-09 13:12:00.345 TRACE nova   File 
"/opt/stack/nova/nova/cmd/consoleauth.py", line 40, in main
2015-10-09 13:12:00.345 TRACE nova topic=CONF.consoleauth_topic)
2015-10-09 13:12:00.345 TRACE nova   File 
"/usr/local/lib/python2.7/dist-packages/oslo_config/cfg.py", line 1902, in 
__getattr__
2015-10-09 13:12:00.345 TRACE nova raise NoSuchOptError(name)
2015-10-09 13:12:00.345 TRACE nova NoSuchOptError: no such option: 
consoleauth_topic
2015-10-09 13:12:00.345 TRACE nova

** Affects: nova
 Importance: Undecided
 Status: New


** Tags: nova

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1504540

Title:
  n-cauth failed in devstack NoSuchOptError consoleauth_topic

Status in OpenStack Compute (nova):
  New

Bug description:
  Deploy default devstack configuration, see this error in n-cauth
  screen:

  ozamiatin@ubuntu:~/devstack$ /usr/local/bin/nova-consoleauth --config-file 
/etc/nova/nova.conf & echo $! >/opt/stack/status/stack/n-cauth.pid; fg || echo 
"n-cauth failed to start" | tee "/opt/stack/status/stack/n-cauth.failure"
  [1] 17664
  /usr/local/bin/nova-consoleauth --config-file /etc/nova/nova.conf
  No handlers could be found for logger "oslo_config.cfg"
  2015-10-09 13:12:00.345 CRITICAL nova [-] NoSuchOptError: no such option: 
consoleauth_topic

  2015-10-09 13:12:00.345 TRACE nova Traceback (most recent call last):
  2015-10-09 13:12:00.345 TRACE nova   File "/usr/local/bin/nova-consoleauth", 
line 10, in 
  2015-10-09 13:12:00.345 TRACE nova sys.exit(main())
  2015-10-09 13:12:00.345 TRACE nova   File 
"/opt/stack/nova/nova/cmd/consoleauth.py", line 40, in main
  2015-10-09 13:12:00.345 TRACE nova topic=CONF.consoleauth_topic)
  2015-10-09 13:12:00.345 TRACE nova   File 
"/usr/local/lib/python2.7/dist-packages/oslo_config/cfg.py", line 1902, in 
__getattr__
  2015-10-09 13:12:00.345 TRACE nova raise NoSuchOptError(name)
  2015-10-09 13:12:00.345 TRACE nova NoSuchOptError: no such option: 
consoleauth_topic
  2015-10-09 13:12:00.345 TRACE nova

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1504540/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1504534] [NEW] o.vo master + nova master = unit tests fail

2015-10-09 Thread Davanum Srinivas (DIMS)
Public bug reported:

There's a whole bunch of test failures:
http://paste.openstack.org/show/475867/

Here's one example if the paste vanishes for some reason:
nova.tests.unit.objects.test_instance.TestRemoteInstanceListObject.test_get_hung_in_rebooting
-

Captured traceback:
~~~
Traceback (most recent call last):
  File "nova/tests/unit/objects/test_instance.py", line 1597, in 
test_get_hung_in_rebooting
self.assertIsInstance(inst_list.objects[i], objects.Instance)
  File 
"/home/travis/build/dims/nova/.tox/py27/lib/python2.7/site-packages/testtools/testcase.py",
 line 401, in assertIsInstance
self.assertThat(obj, matcher, msg)
  File 
"/home/travis/build/dims/nova/.tox/py27/lib/python2.7/site-packages/testtools/testcase.py",
 line 435, in assertThat
raise mismatch_error
testtools.matchers._impl.MismatchError: 
'Instance(access_ip_v4=1.2.3.4,access_ip_v6=::1,architecture=None,auto_disk_config=False,availability_zone=None,cell_name=None,cleaned=False,config_drive=None,created_at=None,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,disable_terminate=False,display_description=None,display_name=None,ec2_ids=,ephemeral_gb=0,ephemeral_key_uuid=None,fault=,flavor=,host='fake-host',hostname=None,id=2,image_ref=None,info_cache=,instance_type_id=None,kernel_id=None,key_data=None,key_name=None,launch_index=None,launched_at=1955-11-12T22:04:00Z,launched_on=None,locked=False,locked_by=None,memory_mb=None,metadata=,migration_context=,new_flavor=,node=None,numa_topology=,old_flavor=,os_type=None,pci_devices=,pci_requests=,power_state=None,progress=None,project_id='fake-project',ramdisk_id=None,reservation_id=None,root_device_name=None,root_gb=0,scheduled_at=,security_groups=,shutdown_ter
 
minate=False,system_metadata=,tags=,task_state=None,terminated_at=None,updated_at=None,user_data=None,user_id='fake-user',uuid=c2169c75-2912-4a72-8df9-e3faa5f16578,vcpu_model=,vcpus=None,vm_mode=None,vm_state=None)'
 is not an instance of InstanceV2

@bauwser commented on IRC

bauwser dimsum__: oh man, I see 38 hits on isinstance([^,]+, objects.Instance)
bauwser dimsum__: which means all of them need to be changed
bauwser dimsum__: to be isinstance(obj, instance_obj._BaseInstance)

** Affects: nova
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1504534

Title:
  o.vo master + nova master = unit tests fail

Status in OpenStack Compute (nova):
  New

Bug description:
  There's a whole bunch of test failures:
  http://paste.openstack.org/show/475867/

  Here's one example if the paste vanishes for some reason:
  
nova.tests.unit.objects.test_instance.TestRemoteInstanceListObject.test_get_hung_in_rebooting
  
-

  Captured traceback:
  ~~~
  Traceback (most recent call last):
File "nova/tests/unit/objects/test_instance.py", line 1597, in 
test_get_hung_in_rebooting
  self.assertIsInstance(inst_list.objects[i], objects.Instance)
File 
"/home/travis/build/dims/nova/.tox/py27/lib/python2.7/site-packages/testtools/testcase.py",
 line 401, in assertIsInstance
  self.assertThat(obj, matcher, msg)
File 
"/home/travis/build/dims/nova/.tox/py27/lib/python2.7/site-packages/testtools/testcase.py",
 line 435, in assertThat
  raise mismatch_error
  testtools.matchers._impl.MismatchError: 
'Instance(access_ip_v4=1.2.3.4,access_ip_v6=::1,architecture=None,auto_disk_config=False,availability_zone=None,cell_name=None,cleaned=False,config_drive=None,created_at=None,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,disable_terminate=False,display_description=None,display_name=None,ec2_ids=,ephemeral_gb=0,ephemeral_key_uuid=None,fault=,flavor=,host='fake-host',hostname=None,id=2,image_ref=None,info_cache=,instance_type_id=None,kernel_id=None,key_data=None,key_name=None,launch_index=None,launched_at=1955-11-12T22:04:00Z,launched_on=None,locked=False,locked_by=None,memory_mb=None,metadata=,migration_context=,new_flavor=,node=None,numa_topology=,old_flavor=,os_type=None,pci_devices=,pci_requests=,power_state=None,progress=None,project_id='fake-project',ramdisk_id=None,reservation_id=None,root_device_name=None,root_gb=0,scheduled_at=,security_groups=,shutdown_t
 
erminate=False,system_metadata=,tags=,task_state=None,terminated_at=None,updated_at=None,user_data=None,user_id='fake-user',uuid=c2169c75-2912-4a72-8df9-e3faa5f16578,vcpu_model=,vcpus=None,vm_mode=None,vm_state=None)'
 is not an instance of InstanceV2
  
  @bauwser commented on IRC

  bauwser dimsum__: oh man, I see 38 hits on 

[Yahoo-eng-team] [Bug 1504536] [NEW] Provide stevedore aliases for interface_driver option

2015-10-09 Thread Ihar Hrachyshka
Public bug reported:

Currently, we require the full import path to be set for those drivers.
That is both not user friendly and error prone in case we decide later to
move the code somewhere else.

** Affects: neutron
 Importance: Low
 Status: Confirmed


** Tags: low-hanging-fruit usability

** Changed in: neutron
   Importance: Undecided => Low

** Changed in: neutron
   Status: New => Confirmed

** Tags added: low-hanging-fruit usability

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1504536

Title:
  Provide stevedore aliases for interface_driver option

Status in neutron:
  Confirmed

Bug description:
  Currently, we require the full import path to be set for those drivers.
  That is both not user friendly and error prone in case we decide later
  to move the code somewhere else.
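
  As a hedged sketch, this is how stevedore can resolve a short alias
  through an entry-point namespace instead of a full import path; the
  namespace and alias names below are assumptions for illustration, not
  existing neutron entry points:

    from stevedore import driver

    mgr = driver.DriverManager(
        namespace='neutron.interface_drivers',   # assumed namespace
        name='openvswitch',                      # alias instead of a full path
        invoke_on_load=False,
    )
    interface_driver_cls = mgr.driver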

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1504536/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1500365] Re: neutron port API does not support atomicity

2015-10-09 Thread Armando Migliaccio
We are aware of some of the limitations of the existing API. This will
definitely require more attention across the board. Marking RFE, to
raise the profile of this bug and seek input from members of the drivers
team.

** Tags added: rfe

** Changed in: neutron
   Status: Opinion => Triaged

** Changed in: neutron
   Status: Triaged => Confirmed

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1500365

Title:
  neutron port API does not support atomicity

Status in neutron:
  Confirmed

Bug description:
  The Neutron port API offers an update method where the user of the API
  can say "I use this port" by setting the device_owner and device_id
  fields of the port. However the Neutron API does not prevent port
  allocation race conditions.
  The API semantic is that a port is used if the device_id and device_owner
  fields are set, and not used if they aren't.  Now let's have two clients
  that both want to take ownership of the port. Both clients first have to
  check whether the port is free by inspecting the device_owner and
  device_id fields of the port, then they have to set those fields to
  express ownership.
  If the two clients act in parallel it is quite possible that both clients
  see that the fields are empty and both issue the port update command.
  This can lead to races between clients.
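
  A minimal sketch of the check-then-update pattern described above, using
  python-neutronclient (credentials, endpoint and IDs are placeholders); two
  clients running this concurrently can both pass the check and both "claim"
  the port:

    from neutronclient.v2_0 import client

    neutron = client.Client(username='admin', password='secret',
                            tenant_name='admin',
                            auth_url='http://controller:5000/v2.0')

    port_id = 'PORT-UUID'
    port = neutron.show_port(port_id)['port']
    if not port['device_id'] and not port['device_owner']:
        # another client may pass the same check before this update is applied
        neutron.update_port(port_id, {'port': {'device_id': 'MY-DEVICE-UUID',
                                               'device_owner': 'compute:nova'}})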

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1500365/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1504641] [NEW] Listing volumes respects osapi_max_limit but does not provide a link to the next element

2015-10-09 Thread Artom Lifshitz
Public bug reported:

When GETting os-volumes, the returned list of volumes respects the
osapi_max_limit configuration parameter but does not provide a link to
the next element in the list. For example, with two volumes configured
and osapi_max_limit set to 1, GETting volumes results in the following:


{
"volumes": [
{
"attachments": [
{}
],
"availabilityZone": "nova",
"createdAt": "2015-10-09T18:12:04.00",
"displayDescription": null,
"displayName": null,
"id": "08792e26-204b-4bb9-8e9b-0e37331de51c",
"metadata": {},
"size": 1,
"snapshotId": null,
"status": "error",
"volumeType": "lvmdriver-1"
}
]
}


Unsetting osapi_max_limit results in both volumes being listed:


{
"volumes": [
{
"attachments": [
{}
],
"availabilityZone": "nova",
"createdAt": "2015-10-09T18:12:04.00",
"displayDescription": null,
"displayName": null,
"id": "08792e26-204b-4bb9-8e9b-0e37331de51c",
"metadata": {},
"size": 1,
"snapshotId": null,
"status": "error",
"volumeType": "lvmdriver-1"
},
{
"attachments": [
{}
],
"availabilityZone": "nova",
"createdAt": "2015-10-09T18:12:00.00",
"displayDescription": null,
"displayName": null,
"id": "5cf46cd2-8914-4ffd-9037-abd53c55ca76",
"metadata": {},
"size": 1,
"snapshotId": null,
"status": "error",
"volumeType": "lvmdriver-1"
}
]
}

** Affects: nova
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1504641

Title:
  Listing volumes respects osapi_max_limit but does not provide a link
  to the next element

Status in OpenStack Compute (nova):
  New

Bug description:
  When GETting os-volumes, the returned list of volumes respects the
  osapi_max_limit configuration parameter but does not provide a link to
  the next element in the list. For example, with two volumes configured
  and osapi_max_limit set to 1, GETting volumes results in the
  following:

  
  {
  "volumes": [
  {
  "attachments": [
  {}
  ],
  "availabilityZone": "nova",
  "createdAt": "2015-10-09T18:12:04.00",
  "displayDescription": null,
  "displayName": null,
  "id": "08792e26-204b-4bb9-8e9b-0e37331de51c",
  "metadata": {},
  "size": 1,
  "snapshotId": null,
  "status": "error",
  "volumeType": "lvmdriver-1"
  }
  ]
  }

  
  Unsetting osapi_max_limit results in both volumes being listed:

  
  {
  "volumes": [
  {
  "attachments": [
  {}
  ],
  "availabilityZone": "nova",
  "createdAt": "2015-10-09T18:12:04.00",
  "displayDescription": null,
  "displayName": null,
  "id": "08792e26-204b-4bb9-8e9b-0e37331de51c",
  "metadata": {},
  "size": 1,
  "snapshotId": null,
  "status": "error",
  "volumeType": "lvmdriver-1"
  },
  {
  "attachments": [
  {}
  ],
  "availabilityZone": "nova",
  "createdAt": "2015-10-09T18:12:00.00",
  "displayDescription": null,
  "displayName": null,
  "id": "5cf46cd2-8914-4ffd-9037-abd53c55ca76",
  "metadata": {},
  "size": 1,
  "snapshotId": null,
  "status": "error",
  "volumeType": "lvmdriver-1"
  }
  ]
  }

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1504641/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1504647] [NEW] Ensure new interface hashing does not break upgrades

2015-10-09 Thread Sean M. Collins
Public bug reported:

Code introduced in https://review.openstack.org/#/c/224064/ touches code
that did the interface name hashing. Let's ensure that it doesn't break
upgrades.


IRC conversation:

http://eavesdrop.openstack.org/irclogs/%23openstack-neutron
/%23openstack-neutron.2015-10-09.log.html#t2015-10-09T16:26:18
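
As background on why this is upgrade-sensitive, here is a simplified,
hypothetical sketch of deriving an interface name from a network UUID. It is
not the code touched by the review above; it only illustrates that if the
derivation scheme changes, an upgraded agent computes different names than the
devices already present on the host and can no longer match them:

# Hypothetical illustration only; the real naming code lives in the Linux
# bridge agent and is not reproduced here.
import hashlib

LINUX_DEV_LEN = 15  # kernel limit on interface name length (IFNAMSIZ - 1)

def old_style_name(prefix, network_id):
    # "Old" scheme (illustrative): prefix plus a truncated UUID.
    return (prefix + network_id)[:LINUX_DEV_LEN]

def new_style_name(prefix, network_id):
    # "New" scheme (illustrative): prefix plus a truncated hash of the UUID.
    digest = hashlib.sha1(network_id.encode('utf-8')).hexdigest()
    return (prefix + digest)[:LINUX_DEV_LEN]

net = 'ab3cd91e-7f21-4c55-9d0a-1f2e3d4c5b6a'
print(old_style_name('brq', net))  # one name for this network...
print(new_style_name('brq', net))  # ...and a different one after the change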

** Affects: neutron
 Importance: High
 Status: Confirmed


** Tags: linuxbridge

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1504647

Title:
  Ensure new interface hashing does not break upgrades

Status in neutron:
  Confirmed

Bug description:
  Code introduced in https://review.openstack.org/#/c/224064/ touches
  code that did the interface name hashing. Let's ensure that it doesn't
  break upgrades.

  
  IRC conversation:

  http://eavesdrop.openstack.org/irclogs/%23openstack-neutron
  /%23openstack-neutron.2015-10-09.log.html#t2015-10-09T16:26:18

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1504647/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1501729] Re: Launching an instance on devstack triggers to an error

2015-10-09 Thread Armando Migliaccio
Neutron has no say in what gets installed on the devstack box, so this
is not a Neutron bug.

** Changed in: neutron
   Status: Incomplete => Invalid

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1501729

Title:
  Launching an instance on devstack triggers to an error

Status in neutron:
  Invalid

Bug description:
  Preconditions:
  3.13.0-61-generic #100-Ubuntu
  Devstack

  Steps to reproduce:

  Login to Horizon as an Admin
  Navigate to Project -> Instance
  Hit Launch Instance button
  In opened window select:
  Availability Zone == Nova
  Instance Name == test_instance
  Flavor == m1.nano
  Instance Count ==1
  Instance Boot Source == Boot from image
  Image Name == cirros-0.3.4-x86_64-uec
  Hit launch button

  Expected result:
  Instance status is Active

  Actual result:
  Instance status is Error
  Error: Failed to perform requested operation on instance "test_instance", the 
instance has an error status: Please try again later [Error: Build of instance 
68718ad6-73b1-4ddb-a48d-da4265e336fa aborted: Failed to allocate the 
network(s), not rescheduling.].

  http://paste.openstack.org/show/474856/

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1501729/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1503179] Re: nic ordering is inconsistent between hard reboot

2015-10-09 Thread Pavel Kholkin
This problem seems to be fixed; the bug could not be reproduced on
devstack, so it has been moved to Invalid.

** Changed in: nova
   Status: In Progress => Invalid

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1503179

Title:
  nic ordering is inconsistent between hard reboot

Status in OpenStack Compute (nova):
  Invalid

Bug description:
  If an instance is attached to several networks, the NIC ordering is
  inconsistent across hard reboots (with Neutron). The ordering can be seen
  in the interfaces section of the instance's libvirt XML file.

  Related-bug (for nova-network):
  https://bugs.launchpad.net/nova/+bug/1405271
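
  One way to observe the ordering is to dump the domain XML before and after a
  hard reboot and compare the order of the interface entries. A minimal sketch
  using the libvirt Python bindings; the domain name below is an assumption:

  # Hedged sketch: print the guest's interface MACs (and tap devices) in the
  # order they appear in the libvirt domain XML. Run before and after a hard
  # reboot and diff the output. Assumes the libvirt Python bindings.
  import libvirt
  import xml.etree.ElementTree as ET

  conn = libvirt.open('qemu:///system')
  dom = conn.lookupByName('instance-00000001')  # hypothetical domain name

  root = ET.fromstring(dom.XMLDesc(0))
  for iface in root.findall('./devices/interface'):
      mac = iface.find('mac').get('address')
      target = iface.find('target')
      dev = target.get('dev') if target is not None else '?'
      print('%s %s' % (mac, dev))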

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1503179/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1504540] Re: n-cauth failed in devstack NoSuchOptError consoleauth_topic

2015-10-09 Thread Oleksii Zamiatin
You are right, fixed already. Thanks!

** Changed in: nova
   Status: Incomplete => Invalid

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1504540

Title:
  n-cauth failed in devstack NoSuchOptError consoleauth_topic

Status in OpenStack Compute (nova):
  Invalid

Bug description:
  Deploy the default devstack configuration and observe this error in the
  n-cauth screen:

  ozamiatin@ubuntu:~/devstack$ /usr/local/bin/nova-consoleauth --config-file 
/etc/nova/nova.conf & echo $! >/opt/stack/status/stack/n-cauth.pid; fg || echo 
"n-cauth failed to start" | tee "/opt/stack/status/stack/n-cauth.failure"
  [1] 17664
  /usr/local/bin/nova-consoleauth --config-file /etc/nova/nova.conf
  No handlers could be found for logger "oslo_config.cfg"
  2015-10-09 13:12:00.345 CRITICAL nova [-] NoSuchOptError: no such option: 
consoleauth_topic

  2015-10-09 13:12:00.345 TRACE nova Traceback (most recent call last):
  2015-10-09 13:12:00.345 TRACE nova   File "/usr/local/bin/nova-consoleauth", 
line 10, in <module>
  2015-10-09 13:12:00.345 TRACE nova sys.exit(main())
  2015-10-09 13:12:00.345 TRACE nova   File 
"/opt/stack/nova/nova/cmd/consoleauth.py", line 40, in main
  2015-10-09 13:12:00.345 TRACE nova topic=CONF.consoleauth_topic)
  2015-10-09 13:12:00.345 TRACE nova   File 
"/usr/local/lib/python2.7/dist-packages/oslo_config/cfg.py", line 1902, in 
__getattr__
  2015-10-09 13:12:00.345 TRACE nova raise NoSuchOptError(name)
  2015-10-09 13:12:00.345 TRACE nova NoSuchOptError: no such option: 
consoleauth_topic
  2015-10-09 13:12:00.345 TRACE nova
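
  The failure mode itself is easy to reproduce with oslo.config: reading a CONF
  attribute for an option that was never registered (for example because it was
  removed or moved into an option group) raises NoSuchOptError. A minimal
  sketch, independent of nova's real option definitions:

  # Minimal sketch of the oslo.config behaviour seen in the traceback above;
  # the option names are illustrative, not nova's.
  from oslo_config import cfg

  CONF = cfg.ConfigOpts()
  CONF.register_opt(cfg.StrOpt('registered_topic', default='consoleauth'))

  print(CONF.registered_topic)       # works: the option is registered

  try:
      print(CONF.consoleauth_topic)  # never registered -> NoSuchOptError
  except cfg.NoSuchOptError as exc:
      print('caught: %s' % exc)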

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1504540/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1321785] Re: RFE: block_device_info dict should have a password key rather than clear password

2015-10-09 Thread Matt Riedemann
** Also affects: oslo.versionedobjects
   Importance: Undecided
   Status: New

** Changed in: oslo.versionedobjects
   Status: New => Confirmed

** Changed in: oslo.versionedobjects
   Importance: Undecided => Medium

** Changed in: oslo.versionedobjects
 Assignee: (unassigned) => Matt Riedemann (mriedem)

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1321785

Title:
  RFE: block_device_info dict should have a password key rather than
  clear password

Status in OpenStack Compute (nova):
  Confirmed
Status in oslo.versionedobjects:
  Confirmed

Bug description:
  See bug 1319943 and the related patch
  https://review.openstack.org/#/c/93787/ for details, but right now the
  block_device_info dict passed around in the nova virt driver can
  contain a clear text password for the auth_password key.

  That bug and patch are masking the password when logged in the
  immediate known locations, but this could continue to crop up so we
  should change the design such that the block_device_info dict doesn't
  contain the password but rather a key to a store that nova can
  retrieve the password for use.

  Comment from Daniel Berrange in the patch above:

  "Long term I think we need to figure out a way to remove the passwords
  from any data dicts we pass around. Ideally the block device info
  would merely contain something like a UUID to identify a password,
  which Nova could use to fetch the actual password from a secure
  password manager service at time of use. Thus we wouldn't have to
  worry about random objects/dicts containing actual passwords.
  Obviously this isn't something we can do now, but could you file an
  RFE to address this from a design POV, because masking passwords at
  time of logging call is not really a viable long term strategy IMHO."
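
  A hypothetical sketch of that direction: block_device_info carries only an
  opaque reference, and the secret is resolved through a password store at the
  moment it is needed. None of these classes exist in nova or oslo today; they
  only illustrate the proposed design:

  # Hypothetical design sketch; the store stands in for a secure secret
  # service (Barbican-like), not for any existing nova API.
  import uuid

  class PasswordStore(object):
      def __init__(self):
          self._secrets = {}

      def put(self, password):
          ref = str(uuid.uuid4())
          self._secrets[ref] = password
          return ref

      def get(self, ref):
          return self._secrets[ref]

  store = PasswordStore()

  # Today: the dict itself holds the secret, so any log of it leaks it.
  bdi_today = {'auth_username': 'cinder', 'auth_password': 's3cret'}

  # Proposed: the dict holds only a reference that is safe to log.
  bdi_proposed = {'auth_username': 'cinder',
                  'auth_password_ref': store.put('s3cret')}

  # At time of use (e.g. when connecting the volume), resolve the reference.
  password = store.get(bdi_proposed['auth_password_ref'])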

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1321785/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp