[Yahoo-eng-team] [Bug 1589821] Re: cleanup_incomplete_migrations periodic task regression with commit 099cf53925c0a0275325339f21932273ee9ce2bc

2016-06-08 Thread OpenStack Infra
Reviewed:  https://review.openstack.org/326262
Committed: 
https://git.openstack.org/cgit/openstack/nova/commit/?id=adcc0e418b7d880a0b0bd60ea9d0ef1e2ef4e67e
Submitter: Jenkins
Branch: master

commit adcc0e418b7d880a0b0bd60ea9d0ef1e2ef4e67e
Author: Rajesh Tailor 
Date:   Tue Jun 7 07:05:11 2016 +

Revert "Optimize _cleanup_incomplete_migrations periodic task"

The change modified the instance filtering condition so that it
filters all deleted instances on the current host, which is not what
the periodic task expects.

The periodic task expects instances whose UUIDs are associated with
migration records; after filtering, the instance-file deletion logic
should only be applied where instance.host is not set to the current
host (CONF.host).

This reverts commit 099cf53925c0a0275325339f21932273ee9ce2bc.

Change-Id: Ic71c939bef86f1e5cb485c6827c69c3d638f2e89
Closes-Bug: 1589821


** Changed in: nova
   Status: In Progress => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1589821

Title:
  cleanup_incomplete_migrations periodic task regression with commit
  099cf53925c0a0275325339f21932273ee9ce2bc

Status in OpenStack Compute (nova):
  Fix Released
Status in OpenStack Compute (nova) mitaka series:
  In Progress

Bug description:
  Patch [1] changes the instance filtering condition in the periodic
  task "cleanup_incomplete_migrations" introduced in [2], in such a way
  that it causes a new issue [3].

  After change [1] landed, the condition changed the filtering logic so
  that all instances on the current host are filtered, which is not
  expected.

  We should filter only instances whose UUIDs are associated with
  migration records whose status is set to 'error' and where the
  instance is marked as deleted.

  [1] https://review.openstack.org/#/c/256102/
  [2] https://review.openstack.org/#/c/219299/
  [3] https://bugs.launchpad.net/nova/+bug/1586309
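
  For illustration, here is a small, self-contained Python sketch (toy
  data, not nova's actual DB API) of the intended filtering: keep only
  deleted instances whose UUIDs appear in an errored migration record,
  and clean up files only for instances whose host is not the current
  one.

  def instances_to_clean(instances, migrations, current_host):
      # UUIDs of instances that have a migration record in 'error' state.
      errored_uuids = set(m['instance_uuid'] for m in migrations
                          if m['status'] == 'error')
      # Apply the file cleanup only to deleted instances tied to such a
      # migration and not owned by this host (CONF.host).
      return [inst for inst in instances
              if inst['deleted']
              and inst['uuid'] in errored_uuids
              and inst['host'] != current_host]

  migrations = [{'instance_uuid': 'u1', 'status': 'error'}]
  instances = [
      {'uuid': 'u1', 'deleted': True, 'host': 'other-host'},  # cleaned up
      {'uuid': 'u2', 'deleted': True, 'host': 'this-host'},   # skipped
  ]
  print(instances_to_clean(instances, migrations, 'this-host'))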

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1589821/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1384386] Re: Image block device mappings for snapshots of instances specify delete_on_termination=null

2016-06-08 Thread Takashi NATSUME
** Changed in: nova
   Status: Fix Committed => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1384386

Title:
  Image block device mappings for snapshots of instances specify
  delete_on_termination=null

Status in OpenStack Compute (nova):
  Fix Released

Bug description:
  Nova Juno

  Scenario:
  1. Boot an instance from a volume.
  2. Nova snapshot the instance.  This produces a Glance image with a block 
device mapping property like this:
  [{"guest_format": null, "boot_index": 0, "no_device": null, "snapshot_id": 
"1a642ca8-210f-4790-ab93-00b6a4b86a14", "delete_on_termination": null, 
"disk_bus": null, "image_id": null, "source_type": "snapshot", "device_type": 
"disk", "volume_id": null, "destination_type": "volume", "volume_size": null}]

  3. Create an instance from the Glance image.  Nova creates a new Cinder 
volume from the image's Cinder snapshot and attaches it to the instance.
  4. Delete the instance.

  Problem:  The Cinder volume created at step 3 remains.

  The block device mappings for Cinder snapshots created during VM
  snapshot and placed into the Glance image should specify
  "delete_on_termination":  True so that the Cinder volumes created for
  VMs booted from the image will be cleaned up on VM deletion.
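
  A minimal sketch (hypothetical values, not nova's actual snapshot code)
  of the block device mapping entry with the proposed fix applied, i.e.
  with delete_on_termination set to True for the snapshot-backed mapping:

  snapshot_bdm = {
      "source_type": "snapshot",
      "destination_type": "volume",
      "snapshot_id": "1a642ca8-210f-4790-ab93-00b6a4b86a14",
      "boot_index": 0,
      # With this set, the Cinder volume created for an instance booted
      # from the image is deleted when that instance is deleted.
      "delete_on_termination": True,
  }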

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1384386/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1589880] Re: report state failed

2016-06-08 Thread Takashi NATSUME
** Tags added: oslo

** Also affects: oslo.service
   Importance: Undecided
   Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1589880

Title:
  report state failed

Status in OpenStack Compute (nova):
  Incomplete
Status in oslo.service:
  New

Bug description:
  Description:
  =
  I set read_only=on on the master database when switching the master
  nova database to the slave; after that, I checked the nova service
  status:
  # nova-manage service list
  Binary           Host    Zone      Status    State  Updated_At
  nova-consoleauth 11_120  internal  enabled   XXX    2016-06-07 08:28:46
  nova-conductor   11_120  internal  enabled   XXX    2016-06-07 08:28:45
  nova-cert        11_120  internal  enabled   XXX    2016-05-17 08:12:10
  nova-scheduler   11_120  internal  enabled   XXX    2016-05-17 08:12:24
  nova-compute     11_121  bx        enabled   XXX    2016-06-07 08:28:49
  nova-compute     11_122  bx        enabled   XXX    2016-06-07 08:28:42
  =

  Steps to reproduce
  =
  # mysql
  MariaDB [nova]> set global read_only=on;
  =

  Environment
  
  Version: Liberty
  openstack-nova-conductor-12.0.0-1.el7.noarch
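
  For context, the failing report is driven by a fixed-interval looping
  call. The sketch below (simplified, not nova's actual servicegroup
  driver) uses the real oslo.service API; the database write done in the
  callback is what starts failing once the database is switched to
  read-only.

  from oslo_service import loopingcall

  def _report_state():
      # In nova this calls service.service_ref.save(), i.e. an UPDATE on
      # the services table; a read-only database makes that raise.
      print('updating service heartbeat in the DB')

  timer = loopingcall.FixedIntervalLoopingCall(_report_state)
  timer.start(interval=10)  # report_interval
  timer.wait()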

  Logs
  

  2016-05-12 11:01:20.343 9198 ERROR oslo.service.loopingcall 
  2016-05-12 11:01:20.473 9178 ERROR oslo.service.loopingcall [-] Fixed 
interval looping call 'nova.servicegroup.drivers.db.DbDriver._report_state' 
failed
  2016-05-12 11:01:20.473 9178 ERROR oslo.service.loopingcall Traceback (most 
recent call last):
  2016-05-12 11:01:20.473 9178 ERROR oslo.service.loopingcall   File 
"/usr/lib/python2.7/site-packages/oslo_service/loopingcall.py", line 113, in 
_run_loop
  2016-05-12 11:01:20.473 9178 ERROR oslo.service.loopingcall result = 
func(*self.args, **self.kw)
  2016-05-12 11:01:20.473 9178 ERROR oslo.service.loopingcall   File 
"/usr/lib/python2.7/site-packages/nova/servicegroup/drivers/db.py", line 87, in 
_report_state
  2016-05-12 11:01:20.473 9178 ERROR oslo.service.loopingcall 
service.service_ref.save()
  2016-05-12 11:01:20.473 9178 ERROR oslo.service.loopingcall   File 
"/usr/lib/python2.7/site-packages/oslo_versionedobjects/base.py", line 213, in 
wrapper
  2016-05-12 11:01:20.473 9178 ERROR oslo.service.loopingcall return 
fn(self, *args, **kwargs)
  2016-05-12 11:01:20.473 9178 ERROR oslo.service.loopingcall   File 
"/usr/lib/python2.7/site-packages/nova/objects/service.py", line 250, in save
  2016-05-12 11:01:20.473 9178 ERROR oslo.service.loopingcall db_service = 
db.service_update(self._context, self.id, updates)
  2016-05-12 11:01:20.473 9178 ERROR oslo.service.loopingcall   File 
"/usr/lib/python2.7/site-packages/nova/db/api.py", line 153, in service_update
  2016-05-12 11:01:20.473 9178 ERROR oslo.service.loopingcall return 
IMPL.service_update(context, service_id, values)
  2016-05-12 11:01:20.473 9178 ERROR oslo.service.loopingcall   File 
"/usr/lib/python2.7/site-packages/oslo_db/api.py", line 146, in wrapper
  2016-05-12 11:01:20.473 9178 ERROR oslo.service.loopingcall ectxt.value = 
e.inner_exc
  2016-05-12 11:01:20.473 9178 ERROR oslo.service.loopingcall   File 
"/usr/lib/python2.7/site-packages/oslo_utils/excutils.py", line 195, in __exit__
  2016-05-12 11:01:20.473 9178 ERROR oslo.service.loopingcall 
six.reraise(self.type_, self.value, self.tb)
  2016-05-12 11:01:20.473 9178 ERROR oslo.service.loopingcall   File 
"/usr/lib/python2.7/site-packages/oslo_db/api.py", line 136, in wrapper
  2016-05-12 11:01:20.473 9178 ERROR oslo.service.loopingcall return 
f(*args, **kwargs)
  2016-05-12 11:01:20.473 9178 ERROR oslo.service.loopingcall   File 
"/usr/lib/python2.7/site-packages/nova/db/sqlalchemy/api.py", line 532, in 
service_update
  2016-05-12 11:01:20.473 9178 ERROR oslo.service.loopingcall 
service_ref.update(values)
  2016-05-12 11:01:20.473 9178 ERROR oslo.service.loopingcall   File 
"/usr/lib64/python2.7/site-packages/sqlalchemy/orm/session.py", line 490, in 
__exit__
  2016-05-12 11:01:20.473 9178 ERROR oslo.service.loopingcall 
self.rollback()
  2016-05-12 11:01:20.473 9178 ERROR oslo.service.loopingcall   File 
"/usr/lib64/python2.7/site-packages/sqlalchemy/util/langhelpers.py", line 60, 
in __exit__
  2016-05-12 11:01:20.473 9178 ERROR oslo.service.loopingcall 
compat.reraise(exc_type, exc_value, exc_tb)
  2016-05-12 11:01:20.473 9178 ERROR oslo.service.loopingcall   File 
"/usr/lib64/python2.7/site-packages/sqlalchemy/orm/session.py", line 487, in 
__exit__
  2016-05-12 11:01:20.473 9178 ERROR oslo.service.loopingcall self.commit()
  2016-05-12 

[Yahoo-eng-team] [Bug 1590608] Re: Services should use http_proxy_to_wsgi middleware

2016-06-08 Thread Jamie Lennox
Adding barbican though this seems to be mostly mitigated by pecan.

** Also affects: barbican
   Importance: Undecided
   Status: New

** Also affects: trove
   Importance: Undecided
   Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Identity (keystone).
https://bugs.launchpad.net/bugs/1590608

Title:
  Services should use http_proxy_to_wsgi middleware

Status in Barbican:
  New
Status in Cinder:
  New
Status in Glance:
  New
Status in OpenStack Identity (keystone):
  In Progress
Status in OpenStack DBaaS (Trove):
  New

Bug description:
  It's a common problem when putting a service behind a load balancer to
  need to forward the Protocol and hosts of the original request so that
  the receiving service can construct URLs to the loadbalancer and not
  the private worker node.

  Most services have implemented some form of secure_proxy_ssl_header =
  HTTP_X_FORWARDED_PROTO handling however exactly how this is done is
  dependent on the service.

  oslo.middleware provides the http_proxy_to_wsgi middleware that
  handles these headers and the newer RFC7239 forwarding header and
  completely hides the problem from the service.

  This middleware should be adopted by all services in preference to
  their own HTTP_X_FORWARDED_PROTO handling.

To manage notifications about this bug go to:
https://bugs.launchpad.net/barbican/+bug/1590608/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1590608] Re: Services should use http_proxy_to_wsgi middleware

2016-06-08 Thread Jamie Lennox
** Also affects: cinder
   Importance: Undecided
   Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Identity (keystone).
https://bugs.launchpad.net/bugs/1590608

Title:
  Services should use http_proxy_to_wsgi middleware

Status in Barbican:
  New
Status in Cinder:
  New
Status in Glance:
  New
Status in OpenStack Identity (keystone):
  In Progress

Bug description:
  It's a common problem when putting a service behind a load balancer to
  need to forward the Protocol and hosts of the original request so that
  the receiving service can construct URLs to the loadbalancer and not
  the private worker node.

  Most services have implemented some form of secure_proxy_ssl_header =
  HTTP_X_FORWARDED_PROTO handling however exactly how this is done is
  dependent on the service.

  oslo.middleware provides the http_proxy_to_wsgi middleware that
  handles these headers and the newer RFC7239 forwarding header and
  completely hides the problem from the service.

  This middleware should be adopted by all services in preference to
  their own HTTP_X_FORWARDED_PROTO handling.

To manage notifications about this bug go to:
https://bugs.launchpad.net/barbican/+bug/1590608/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1590608] [NEW] Services should use http_proxy_to_wsgi middleware

2016-06-08 Thread Jamie Lennox
Public bug reported:

It's a common problem when putting a service behind a load balancer to
need to forward the Protocol and hosts of the original request so that
the receiving service can construct URLs to the loadbalancer and not the
private worker node.

Most services have implemented some form of secure_proxy_ssl_header =
HTTP_X_FORWARDED_PROTO handling however exactly how this is done is
dependent on the service.

oslo.middleware provides the http_proxy_to_wsgi middleware that handles
these headers and the newer RFC7239 forwarding header and completely
hides the problem from the service.

This middleware should be adopted by all services in preference to their
own HTTP_X_FORWARDED_PROTO handling.
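
A minimal sketch (not any particular project's wiring) of adopting the
oslo.middleware handling described above; the middleware reads
X-Forwarded-Proto / the RFC 7239 Forwarded header and adjusts the WSGI
environment before the application sees the request. Depending on the
oslo.middleware release, parsing of the proxy headers may also need to be
enabled in configuration.

from oslo_middleware import http_proxy_to_wsgi

def application(environ, start_response):
    # wsgi.url_scheme now reflects what the load balancer reported, so
    # any URLs the service constructs point at the load balancer rather
    # than the private worker node.
    start_response('200 OK', [('Content-Type', 'text/plain')])
    return [environ['wsgi.url_scheme'].encode('utf-8')]

application = http_proxy_to_wsgi.HTTPProxyToWSGI(application)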

** Affects: barbican
 Importance: Undecided
 Status: New

** Affects: cinder
 Importance: Undecided
 Status: New

** Affects: glance
 Importance: Undecided
 Status: New

** Affects: keystone
 Importance: Low
 Assignee: Jamie Lennox (jamielennox)
 Status: In Progress

** Also affects: glance
   Importance: Undecided
   Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Identity (keystone).
https://bugs.launchpad.net/bugs/1590608

Title:
  Services should use http_proxy_to_wsgi middleware

Status in Barbican:
  New
Status in Cinder:
  New
Status in Glance:
  New
Status in OpenStack Identity (keystone):
  In Progress

Bug description:
  It's a common problem when putting a service behind a load balancer to
  need to forward the Protocol and hosts of the original request so that
  the receiving service can construct URLs to the loadbalancer and not
  the private worker node.

  Most services have implemented some form of secure_proxy_ssl_header =
  HTTP_X_FORWARDED_PROTO handling however exactly how this is done is
  dependent on the service.

  oslo.middleware provides the http_proxy_to_wsgi middleware that
  handles these headers and the newer RFC7239 forwarding header and
  completely hides the problem from the service.

  This middleware should be adopted by all services in preference to
  their own HTTP_X_FORWARDED_PROTO handling.

To manage notifications about this bug go to:
https://bugs.launchpad.net/barbican/+bug/1590608/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1590607] [NEW] incorrect handling of host numa cell usage with instances having no numa topology

2016-06-08 Thread Chris Friesen
Public bug reported:

I think there is a problem in host NUMA node resource tracking when
there is an instance with no numa topology on the same node as instances
with numa topology.

It's triggered while running the resource audit, which ultimately calls
hardware.get_host_numa_usage_from_instance() and assigns the result to
self.compute_node.numa_topology.

The problem occurs if you have a number of instances with numa topology,
and then an instance with no numa topology. When running
numa_usage_from_instances() for the instance with no numa topology we
cache the values of "memory_usage" and "cpu_usage". However, because
instance.cells is empty we don't enter the loop. Since the two lines in
this commit are indented too far they don't get called, and we end up
appending a host cell with "cpu_usage" and "memory_usage" of zero.
This results in a host numa_topology cell with incorrect "cpu_usage" and
"memory_usage" values, though I think the overall host cpu/memory usage
is still correct.

The fix is to reduce the indentation of the two lines in question so
that they get called even when the instance has no numa topology. This
writes the original host cell usage information back to it.
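
An illustrative, self-contained sketch (toy classes, not nova's actual
objects) of the indentation problem described above:

class Cell(object):
    def __init__(self, cpu_usage=0, memory_usage=0):
        self.cpu_usage = cpu_usage
        self.memory_usage = memory_usage

def usage_from_instance(host_cell, instance_cells):
    cpu_usage = host_cell.cpu_usage        # cached existing usage
    memory_usage = host_cell.memory_usage
    new_cell = Cell()
    for cell in instance_cells:            # empty for a non-NUMA instance
        cpu_usage += cell.cpu_usage
        memory_usage += cell.memory_usage
        # BUG: indented inside the loop, so these never run when the
        # instance has no NUMA cells, leaving new_cell at zero usage.
        new_cell.cpu_usage = cpu_usage
        new_cell.memory_usage = memory_usage
    return new_cell

host = Cell(cpu_usage=4, memory_usage=2048)
print(usage_from_instance(host, []).cpu_usage)  # 0, but should be 4
# The fix is to dedent the two assignments out of the loop so the original
# host cell usage is written back even when the loop body never executes.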

** Affects: nova
 Importance: Undecided
 Assignee: Chris Friesen (cbf123)
 Status: New


** Tags: compute scheduler

** Changed in: nova
 Assignee: (unassigned) => Chris Friesen (cbf123)

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1590607

Title:
  incorrect handling of host numa cell usage with instances having no
  numa topology

Status in OpenStack Compute (nova):
  New

Bug description:
  I think there is a problem in host NUMA node resource tracking when
  there is an instance with no numa topology on the same node as
  instances with numa topology.

  It's triggered while running the resource audit, which ultimately
  calls hardware.get_host_numa_usage_from_instance() and assigns the
  result to self.compute_node.numa_topology.

  The problem occurs if you have a number of instances with numa
  topology, and then an instance with no numa topology. When running
  numa_usage_from_instances() for the instance with no numa topology we
  cache the values of "memory_usage" and "cpu_usage". However, because
  instance.cells is empty we don't enter the loop. Since the two lines
  in this commit are indented too far they don't get called, and we end
  up appending a host cell with "cpu_usage" and "memory_usage" of zero.
  This results in a host numa_topology cell with incorrect "cpu_usage"
  and "memory_usage" values, though I think the overall host cpu/memory
  usage is still correct.

  The fix is to reduce the indentation of the two lines in question so
  that they get called even when the instance has no numa topology. This
  writes the original host cell usage information back to it.

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1590607/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1552394] Re: auth_url contains wrong configuration for metadata_agent.ini and other neutron config

2016-06-08 Thread Bjoern Teipel
Marking neutron as affected since correcting the auth_url did not seem to fix
this reliably enough.
We still observed issues, especially a growing number of metadata responses
with a 404 wrapped into a 401. The interesting part is that this error goes
away after a neutron-metadata-agent restart but ultimately comes back. We
think it is triggered by increasing volume, but we could not pin down when it
happens; it is certainly not tied to the service token being expired.

** Also affects: neutron
   Importance: Undecided
   Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1552394

Title:
  auth_url contains wrong configuration for  metadata_agent.ini and
  other neutron config

Status in neutron:
  New
Status in openstack-ansible:
  Fix Released
Status in openstack-ansible liberty series:
  In Progress
Status in openstack-ansible trunk series:
  Fix Released

Bug description:
  The current configuration

  auth_url = {{ keystone_service_adminuri }}

  will lead to an incomplete URL like http://1.2.3.4:35357 and will
  cause the neutron-metadata-agent to make bad token requests like:

  POST /tokens HTTP/1.1
  Host: 1.2.3.4:35357
  Content-Length: 91
  Accept-Encoding: gzip, deflate
  Accept: application/json
  User-Agent: python-neutronclient

  and the response is

  HTTP/1.1 404 Not Found
  Date: Tue, 01 Mar 2016 22:14:58 GMT
  Server: Apache
  Vary: X-Auth-Token
  Content-Length: 93
  Content-Type: application/json

  and the agent will stop responding with

  2016-02-26 13:34:46.478 33371 INFO eventlet.wsgi.server [-] (33371) accepted 
''
  2016-02-26 13:34:46.486 33371 ERROR neutron.agent.metadata.agent [-] 
Unexpected error.
  2016-02-26 13:34:46.486 33371 TRACE neutron.agent.metadata.agent Traceback 
(most recent call last):
  2016-02-26 13:34:46.486 33371 TRACE neutron.agent.metadata.agent   File 
"/usr/local/lib/python2.7/dist-packages/neutron/agent/metadata/agent.py", line 
109, in __call__
  2016-02-26 13:34:46.486 33371 TRACE neutron.agent.metadata.agent 
instance_id, tenant_id = self._get_instance_and_tenant_id(req)
  2016-02-26 13:34:46.486 33371 TRACE neutron.agent.metadata.agent   File 
"/usr/local/lib/python2.7/dist-packages/neutron/agent/metadata/agent.py", line 
204, in _get_instance_and_tenant_id
  2016-02-26 13:34:46.486 33371 TRACE neutron.agent.metadata.agent ports = 
self._get_ports(remote_address, network_id, router_id)
  2016-02-26 13:34:46.486 33371 TRACE neutron.agent.metadata.agent   File 
"/usr/local/lib/python2.7/dist-packages/neutron/agent/metadata/agent.py", line 
197, in _get_ports
  2016-02-26 13:34:46.486 33371 TRACE neutron.agent.metadata.agent return 
self._get_ports_for_remote_address(remote_address, networks)
  2016-02-26 13:34:46.486 33371 TRACE neutron.agent.metadata.agent   File 
"/usr/local/lib/python2.7/dist-packages/neutron/common/utils.py", line 101, in 
__call__
  2016-02-26 13:34:46.486 33371 TRACE neutron.agent.metadata.agent return 
self._get_from_cache(target_self, *args, **kwargs)
  2016-02-26 13:34:46.486 33371 TRACE neutron.agent.metadata.agent   File 
"/usr/local/lib/python2.7/dist-packages/neutron/common/utils.py", line 79, in 
_get_from_cache
  2016-02-26 13:34:46.486 33371 TRACE neutron.agent.metadata.agent item = 
self.func(target_self, *args, **kwargs)
  2016-02-26 13:34:46.486 33371 TRACE neutron.agent.metadata.agent   File 
"/usr/local/lib/python2.7/dist-packages/neutron/agent/metadata/agent.py", line 
166, in _get_ports_for_remote_address
  2016-02-26 13:34:46.486 33371 TRACE neutron.agent.metadata.agent 
ip_address=remote_address)
  2016-02-26 13:34:46.486 33371 TRACE neutron.agent.metadata.agent   File 
"/usr/local/lib/python2.7/dist-packages/neutron/agent/metadata/agent.py", line 
135, in _get_ports_from_server
  2016-02-26 13:34:46.486 33371 TRACE neutron.agent.metadata.agent return 
self._get_ports_using_client(filters)
  2016-02-26 13:34:46.486 33371 TRACE neutron.agent.metadata.agent   File 
"/usr/local/lib/python2.7/dist-packages/neutron/agent/metadata/agent.py", line 
177, in _get_ports_using_client
  2016-02-26 13:34:46.486 33371 TRACE neutron.agent.metadata.agent ports = 
client.list_ports(**filters)
  2016-02-26 13:34:46.486 33371 TRACE neutron.agent.metadata.agent   File 
"/usr/local/lib/python2.7/dist-packages/neutronclient/v2_0/client.py", line 
102, in with_params
  2016-02-26 13:34:46.486 33371 TRACE neutron.agent.metadata.agent ret = 
self.function(instance, *args, **kwargs)
  2016-02-26 13:34:46.486 33371 TRACE neutron.agent.metadata.agent   File 
"/usr/local/lib/python2.7/dist-packages/neutronclient/v2_0/client.py", line 
534, in list_ports
  2016-02-26 13:34:46.486 33371 TRACE neutron.agent.metadata.agent 
**_params)
  2016-02-26 13:34:46.486 33371 TRACE neutron.agent.metadata.agent   File 
"/usr/local/lib/python2.7/dist-packages/neutronclient/v2_0/client.py", line 
307, in list
  

[Yahoo-eng-team] [Bug 1590602] [NEW] duplicate mac detection is brittle

2016-06-08 Thread Kevin Benton
Public bug reported:

The duplicate MAC detection logic is based on catching DBDuplicate
exceptions at a very specific time. This results in incorrect
classification if other things are waiting to be flushed, which leads
to weird workarounds like
(https://review.openstack.org/#/c/292207/23/neutron/db/db_base_plugin_common.py).
It also creates different behavior if the duplicate isn't detected until
the constraint check at COMMIT, at which point the retry decorator will
restart the operation.
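
An illustrative sketch (simplified, not neutron's actual code) of the
fragile pattern being described: the DBDuplicateEntry is treated as a MAC
collision only if it surfaces at this exact flush.

from oslo_db import exception as db_exc

MAX_RETRIES = 3  # hypothetical retry budget for illustration

def create_port_with_random_mac(session, port, generate_mac):
    for _ in range(MAX_RETRIES):
        port.mac_address = generate_mac()
        try:
            with session.begin(subtransactions=True):
                session.add(port)
                session.flush()  # other pending rows are flushed here too...
            return port
        except db_exc.DBDuplicateEntry:
            # ...so a duplicate raised for an unrelated pending row gets
            # misclassified as a MAC collision; and if the constraint is
            # only checked at COMMIT, the retry decorator restarts the
            # whole operation instead of reaching this handler.
            continue
    raise RuntimeError('could not allocate a unique MAC address')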

** Affects: neutron
 Importance: Undecided
 Assignee: Kevin Benton (kevinbenton)
 Status: In Progress

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1590602

Title:
  duplicate mac detection is brittle

Status in neutron:
  In Progress

Bug description:
  The duplicate MAC detection logic is based on catching DBDuplicate
  exceptions at a very specific time. This results in incorrect
  classification if other things are waiting to be flushed, which leads
  to weird workarounds like
  (https://review.openstack.org/#/c/292207/23/neutron/db/db_base_plugin_common.py).
  It also creates different behavior if the duplicate isn't detected
  until the constraint check at COMMIT, at which point the retry
  decorator will restart the operation.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1590602/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1571097] Re: unable to delete lbaasv2 health monitor if its listener deleted

2016-06-08 Thread OpenStack Infra
Reviewed:  https://review.openstack.org/324380
Committed: 
https://git.openstack.org/cgit/openstack/neutron-lbaas/commit/?id=4effc9b96ef6317cefb25768dd60bd0f5f4abac5
Submitter: Jenkins
Branch: master

commit 4effc9b96ef6317cefb25768dd60bd0f5f4abac5
Author: Evgeny Fedoruk 
Date:   Thu Jun 2 01:29:42 2016 -0700

Preventing pool deletion if pool has healthmonitor

If an HM is associated with a pool, the HM should be deleted prior
to pool deletion.
Trying to delete a pool with an HM will fail with an EntityInUse exception.
This preserves neutron's common API concept of cascade-deleting only a
resource's subresources.

Change-Id: I1bfc4d8d8ec7e83b1de11c8fb3e66282bfd06806
Closes-Bug: 1571097


** Changed in: neutron
   Status: In Progress => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1571097

Title:
  unable to delete lbaasv2 health monitor if its listener deleted

Status in neutron:
  Fix Released

Bug description:
  The problem is in the Kilo neutron-lbaas branch.

  A monitor is attached to a pool.
  When the pool and listener were deleted, no error was reported that
  there is a health monitor associated with the pool.

  If all lbaas resources except the health monitor were deleted, the
  health monitor can not be deleted.

  See the following procedure to reproduce this issue:

  $ neutron lbaas-loadbalancer-create --name=v-lb2 lb2-v-1574810802
  $ neutron lbaas-listener-create --protocol=HTTP --protocol-port=80 
--name=v-lb2-1 --loadbalancer=v-lb2
  $ neutron lbaas-pool-create --lb-algorithm=ROUND_ROBIN --protocol=HTTP 
--name=v-lb2-pool  --listener=v-lb2-1
  $ neutron lbaas-member-create --subnet lb2-v-1574810802 --address 10.199.88.3 
--protocol-port=80 v-lb2-pool
  $ neutron lbaas-member-create --subnet lb2-v-1574810802 --address 10.199.88.4 
--protocol-port=80 v-lb2-pool
  $ neutron lbaas-healthmonitor-create --max-retries=3 --delay=3 --timeout=10 
--type=HTTP --pool=v-lb2-pool
  $ neutron lbaas-member-delete 4d2977fc-5600-4dbf-8af2-35c017c8f4a0 v-lb2-pool 
  $ neutron lbaas-member-delete 2f60a49b-add1-43d6-97d8-4e53a925b25f  
v-lb2-pool 
  $ neutron lbaas-pool-delete v-lb2-pool
  $ neutron lbaas-listener-delete v-lb2-1
  $ neutron lbaas-healthmonitor-delete 044f98a5-755d-498d-a38e-18eb8ca13884

  The neutron log seems to indicate that the lbaas resources were gone.
  In this specific case, we should just remove the health monitor from
  the system.

  2016-04-10 16:57:38.220 INFO neutron.wsgi 
[req-7e697943-e70d-4ac8-a840-b1edf441806a Venus Venus] 10.34.57.68 - - 
[10/Apr/2016 16:57:38] "GET 
/v2.0/lbaas/healthmonitors.json?fields=id=044f98a5-755d-498d-a38e-18eb8ca13884
 HTTP/1.1" 200 444 0.112257
  2016-04-10 16:57:38.252 ERROR neutron.api.v2.resource 
[req-aaeae392-33b2-427c-96a0-918782882c9a Venus Venus] delete failed
  2016-04-10 16:57:38.252 4158 TRACE neutron.api.v2.resource Traceback (most 
recent call last):
  2016-04-10 16:57:38.252 4158 TRACE neutron.api.v2.resource   File 
"/opt/stack/neutron/neutron/api/v2/resource.py", line 83, in resource
  2016-04-10 16:57:38.252 4158 TRACE neutron.api.v2.resource result = 
method(request=request, **args)
  2016-04-10 16:57:38.252 4158 TRACE neutron.api.v2.resource   File 
"/opt/stack/neutron/neutron/api/v2/base.py", line 490, in delete
  2016-04-10 16:57:38.252 4158 TRACE neutron.api.v2.resource 
obj_deleter(request.context, id, **kwargs)
  2016-04-10 16:57:38.252 4158 TRACE neutron.api.v2.resource   File 
"/opt/stack/neutron-lbaas/neutron_lbaas/services/loadbalancer/plugin.py", line 
906, in delete_healthmonitor
  2016-04-10 16:57:38.252 4158 TRACE neutron.api.v2.resource 
constants.PENDING_DELETE)
  2016-04-10 16:57:38.252 4158 TRACE neutron.api.v2.resource   File 
"/opt/stack/neutron-lbaas/neutron_lbaas/db/loadbalancer/loadbalancer_dbv2.py", 
line 163, in test_and_set_status
  2016-04-10 16:57:38.252 4158 TRACE neutron.api.v2.resource 
db_lb_child.root_loadbalancer.id)
  2016-04-10 16:57:38.252 4158 TRACE neutron.api.v2.resource   File 
"/opt/stack/neutron-lbaas/neutron_lbaas/db/loadbalancer/models.py", line 115, 
in root_loadbalancer
  2016-04-10 16:57:38.252 4158 TRACE neutron.api.v2.resource return 
self.pool.listener.loadbalancer
  2016-04-10 16:57:38.252 4158 TRACE neutron.api.v2.resource AttributeError: 
'NoneType' object has no attribute 'listener'
  2016-04-10 16:57:38.252 4158 TRACE neutron.api.v2.resource 
  2016-04-10 16:57:38.253 INFO neutron.wsgi 
[req-aaeae392-33b2-427c-96a0-918782882c9a Venus Venus] 10.34.57.68 - - 
[10/Apr/2016 16:57:38] "DELETE 
/v2.0/lbaas/healthmonitors/044f98a5-755d-498d-a38e-18eb8ca13884.json HTTP/1.1" 
500 383 0.030720

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1571097/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : 

[Yahoo-eng-team] [Bug 1585538] Re: Creating LBaaS pool related to LB (not listener) does not reflect the new pool to provider's driver

2016-06-08 Thread OpenStack Infra
Reviewed:  https://review.openstack.org/321413
Committed: 
https://git.openstack.org/cgit/openstack/neutron-lbaas/commit/?id=55be5725adbb182a260c29731c78adc0ee296bc2
Submitter: Jenkins
Branch: master

commit 55be5725adbb182a260c29731c78adc0ee296bc2
Author: Evgeny Fedoruk 
Date:   Thu May 26 01:28:30 2016 -0700

Fixing creation of shared pool

Refreshing pool's loadbalancer relationship object on DB layer
before the pool's db object is passed to a provider's driver code
to reflect newly added shared pool for a loadbalancer

Change-Id: Id4e3859448c9ebcf4e509e2f67f0723a749b8e03
Closes-Bug: 1585538


** Changed in: neutron
   Status: In Progress => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1585538

Title:
  Creating LBaaS pool related to LB (not listener) does not  reflect the
  new pool  to provider's driver

Status in neutron:
  Fix Released

Bug description:
  1. Create LB
  2. Create pool related to that LB (with --loadbalancer argument).
  3. The LB object passed as an argument to the provider's driver handling
  does not include the new pool in the LB's pools parameter.

  The problem occurs because the context is not refreshed for the pool's
  LB object.
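
  A minimal sketch (assuming a SQLAlchemy session, not the exact
  neutron-lbaas code) of the kind of refresh the fix performs before the
  DB object is handed to the provider driver:

  def pool_for_driver(session, pool_db):
      # Re-read the loadbalancer relationship from the database so the
      # newly created shared pool shows up in loadbalancer.pools before
      # the object is passed to the provider driver.
      session.refresh(pool_db.loadbalancer)
      return pool_db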

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1585538/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1589821] Re: cleanup_incomplete_migrations periodic task regression with commit 099cf53925c0a0275325339f21932273ee9ce2bc

2016-06-08 Thread Matt Riedemann
** Changed in: nova
   Importance: Undecided => High

** Also affects: nova/mitaka
   Importance: Undecided
   Status: New

** Changed in: nova/mitaka
   Status: New => Confirmed

** Changed in: nova/mitaka
   Importance: Undecided => High

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1589821

Title:
  cleanup_incomplete_migrations periodic task regression with commit
  099cf53925c0a0275325339f21932273ee9ce2bc

Status in OpenStack Compute (nova):
  In Progress
Status in OpenStack Compute (nova) mitaka series:
  In Progress

Bug description:
  Patch [1] changes the instance filtering condition in the periodic
  task "cleanup_incomplete_migrations" introduced in [2], in such a way
  that it causes a new issue [3].

  After change [1] landed, the condition changed the filtering logic so
  that all instances on the current host are filtered, which is not
  expected.

  We should filter only instances whose UUIDs are associated with
  migration records whose status is set to 'error' and where the
  instance is marked as deleted.

  [1] https://review.openstack.org/#/c/256102/
  [2] https://review.openstack.org/#/c/219299/
  [3] https://bugs.launchpad.net/nova/+bug/1586309

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1589821/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1590588] [NEW] pecan: list or get resource with single fields fails

2016-06-08 Thread Brandon Logan
Public bug reported:

stacking fails with:

$ neutron --os-cloud devstack-admin --os-region RegionOne subnet-create 
--tenant-id a213c00559414379b3f2848b01bc6544 --ip_version 4 --gateway 10.1.0.1 
--name private-subnet 2293ccce-9150-49f0-83b8-f85d2cccdf7c 10.1.0.0/20
'id'

** Affects: neutron
 Importance: Undecided
 Assignee: Brandon Logan (brandon-logan)
 Status: In Progress


** Tags: api

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1590588

Title:
  pecan: list or get resource with single fields fails

Status in neutron:
  In Progress

Bug description:
  stacking fails with:

  $ neutron --os-cloud devstack-admin --os-region RegionOne subnet-create 
--tenant-id a213c00559414379b3f2848b01bc6544 --ip_version 4 --gateway 10.1.0.1 
--name private-subnet 2293ccce-9150-49f0-83b8-f85d2cccdf7c 10.1.0.0/20
  'id'

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1590588/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1590179] Re: fernet memcache performance regression

2016-06-08 Thread OpenStack Infra
Reviewed:  https://review.openstack.org/326234
Committed: 
https://git.openstack.org/cgit/openstack/keystone/commit/?id=9c89e07b11afa2e12c97d0af514ce5fcc04e2ac3
Submitter: Jenkins
Branch: master

commit 9c89e07b11afa2e12c97d0af514ce5fcc04e2ac3
Author: Henry Nash 
Date:   Tue Jun 7 06:34:21 2016 +0100

Revert to caching fernet tokens the same way we do UUID

In Liberty we used to cache the whole token at the provider manager
validate token call. However, in Mitaka we changed this, for
non-persistent tokens (e.g. fernet), to instead attempt to cache
the individual components that make up the token. This change caused
validating a fernet token to become 5 times slower than the same
operation in Liberty (as well as UUID in both releases).

This patch re-instates full-token caching for fernet. It should be
considered somewhat of a bandaid to redress the performance
degradation, while we work to restructure our token issuance
and validation to simplify the multiple code paths.

In terms of invalidation of such a cache, this change effectively
reverts to the Liberty approach where anything logged to the
revocation manager will still cause validation of the token to fail
(this is checked for all token types). However, the alternate (and
confusingly additional) "direct" invalidation of the cache via
the persistence manager will, like in Liberty, not have any
effect with cached fernet tokens. As far as I can tell, all
situations where we currently want a token revoked will send
this information to both the revocation and persistence managers,
hence this change should not result in any tokens remaining
valid when they shouldn't.

Closes-Bug: #1590179
Change-Id: I80371746735edac075eec9986e89b54b66bc47cb


** Changed in: keystone
   Status: In Progress => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Identity (keystone).
https://bugs.launchpad.net/bugs/1590179

Title:
  fernet memcache performance regression

Status in OpenStack Identity (keystone):
  Fix Released

Bug description:
  
  Fernet token validation performance got worse in mitaka vs in liberty. This 
is because it's not using memcache to cache the token anymore.

To manage notifications about this bug go to:
https://bugs.launchpad.net/keystone/+bug/1590179/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1590587] [NEW] assigning a domain-specific role in domain A for a user to a project in domain B should be prohibited

2016-06-08 Thread Guang Yee
Public bug reported:

Domain-specific roles are visible in their owning domains only.
Therefore, assigning a domain-specific role in a domain to users for a
project in another domain should be prohibited.

To reproduce:

1. create a domain-specific "foo_domain_role" in the "foo" domain.
2. create a project "bar_project" in "bar" domain.
3. create a user "bar_user" in "bar" domain.
4. now assign the role "foo_domain_role" to user "bar_user" for "bar_project";
this should yield 403 instead of 201.

** Affects: keystone
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Identity (keystone).
https://bugs.launchpad.net/bugs/1590587

Title:
  assigning a domain-specific role in domain A for a user to a project
  in domain B should be prohibited

Status in OpenStack Identity (keystone):
  New

Bug description:
  Domain-specific roles are visible in their owning domains only.
  Therefore, assigning a domain-specific role in a domain to users for a
  project in another domain should be prohibited.

  To reproduce:

  1. create a domain-specific "foo_domain_role" in the "foo" domain.
  2. create a project "bar_project" in "bar" domain.
  3. create a user "bar_user" in "bar" domain.
  4. now assign the role "foo_domain_role" to user "bar_user" for
  "bar_project"; this should yield 403 instead of 201.

To manage notifications about this bug go to:
https://bugs.launchpad.net/keystone/+bug/1590587/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1503501] Re: oslo.db no longer requires testresources and testscenarios packages

2016-06-08 Thread Vitaly Gridnev
** Changed in: sahara/liberty
   Status: Fix Committed => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1503501

Title:
  oslo.db no longer requires testresources and testscenarios packages

Status in Cinder:
  Fix Released
Status in Glance:
  Fix Released
Status in heat:
  Fix Released
Status in heat liberty series:
  Fix Released
Status in Ironic:
  Fix Released
Status in neutron:
  Fix Released
Status in OpenStack Compute (nova):
  Fix Released
Status in Sahara:
  Fix Released
Status in Sahara liberty series:
  Fix Released
Status in Sahara mitaka series:
  Fix Released

Bug description:
  As of https://review.openstack.org/#/c/217347/ oslo.db no longer has
  testresources or testscenarios in its requirements, So next release of
  oslo.db will break several projects. These project that use fixtures
  from oslo.db should add these to their requirements if they need it.

  Example from Nova:
  ${PYTHON:-python} -m subunit.run discover -t ./ ${OS_TEST_PATH:-./nova/tests} 
--list 
  ---Non-zero exit code (2) from test listing.
  error: testr failed (3) 
  import errors ---
  Failed to import test module: nova.tests.unit.db.test_db_api
  Traceback (most recent call last):
File 
"/home/travis/build/dims/nova/.tox/py27/lib/python2.7/site-packages/unittest2/loader.py",
 line 456, in _find_test_path
  module = self._get_module_from_name(name)
File 
"/home/travis/build/dims/nova/.tox/py27/lib/python2.7/site-packages/unittest2/loader.py",
 line 395, in _get_module_from_name
  __import__(name)
File "nova/tests/unit/db/test_db_api.py", line 31, in 
  from oslo_db.sqlalchemy import test_base
File 
"/home/travis/build/dims/nova/.tox/py27/src/oslo.db/oslo_db/sqlalchemy/test_base.py",
 line 17, in <module>
  import testresources
  ImportError: No module named testresources

  https://travis-ci.org/dims/nova/jobs/83992423

To manage notifications about this bug go to:
https://bugs.launchpad.net/cinder/+bug/1503501/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1590583] [NEW] domain-specific role in one domain should not be able to imply a domain-specific role from another domain

2016-06-08 Thread Guang Yee
Public bug reported:

By design, domain-specific roles are visible within their owning domains
only. In other words, domain-specific role in domain "foo" should not be
able to imply a domain-specific role from domain "bar".

To reproduce:

1. create a domain-specific role "foo_domain_role" in domain "foo".
2. create a domain-specific role "bar_domain_role" in domain "bar".
3. PUT /v3/roles/{prior_role_id}/implies/{implied_role_id}
4. list implied roles for "foo_domain_role" and you'll see "bar_domain_role"
on the list

vagrant@vagrant-ubuntu-trusty-64:~$ curl -s -H 'X-Auth-Token: 
748aa5d5c13c4df2b8d6fb2075ca4c39' 
http://10.0.2.15:5000/v3/roles/306b6d6f97084df983a6f2fa30cf1163/implies | 
python -mjson.tool
{
"role_inference": {
"implies": [
{
"id": "3171089626224021afc0299a0c9b916e",
"links": {
"self": 
"http://10.0.2.15/identity/v3/roles/3171089626224021afc0299a0c9b916e;
},
"name": "bar_domain_role"
}
],
"prior_role": {
"id": "306b6d6f97084df983a6f2fa30cf1163",
"links": {
"self": 
"http://10.0.2.15/identity/v3/roles/306b6d6f97084df983a6f2fa30cf1163;
},
"name": "foo_domain_role"
}
}
}
vagrant@vagrant-ubuntu-trusty-64:~$ curl -s -H 'X-Auth-Token: 
748aa5d5c13c4df2b8d6fb2075ca4c39' 
http://10.0.2.15:5000/v3/roles/306b6d6f97084df983a6f2fa30cf1163 | python 
-mjson.tool
{
"role": {
"domain_id": "0ba1cc88be31429d98866d101d1ed0ba",
"id": "306b6d6f97084df983a6f2fa30cf1163",
"links": {
"self": 
"http://10.0.2.15/identity/v3/roles/306b6d6f97084df983a6f2fa30cf1163;
},
"name": "foo_domain_role"
}
}

** Affects: keystone
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Identity (keystone).
https://bugs.launchpad.net/bugs/1590583

Title:
  domain-specific role in one domain should not be able to imply a
  domain-specific role from another domain

Status in OpenStack Identity (keystone):
  New

Bug description:
  By design, domain-specific roles are visible within their owning
  domains only. In other words, domain-specific role in domain "foo"
  should not be able to imply a domain-specific role from domain "bar".

  To reproduce:

  1. create a domain-specific role "foo_domain_role" in domain "foo".
  2. create a domain-specific role "bar_domain_role" in domain "bar".
  3. PUT /v3/roles/{prior_role_id}/implies/{implied_role_id}
  4. list implied roles for "foo_domain_role" and you'll see
  "bar_domain_role" on the list

  vagrant@vagrant-ubuntu-trusty-64:~$ curl -s -H 'X-Auth-Token: 
748aa5d5c13c4df2b8d6fb2075ca4c39' 
http://10.0.2.15:5000/v3/roles/306b6d6f97084df983a6f2fa30cf1163/implies | 
python -mjson.tool
  {
  "role_inference": {
  "implies": [
  {
  "id": "3171089626224021afc0299a0c9b916e",
  "links": {
  "self": 
"http://10.0.2.15/identity/v3/roles/3171089626224021afc0299a0c9b916e;
  },
  "name": "bar_domain_role"
  }
  ],
  "prior_role": {
  "id": "306b6d6f97084df983a6f2fa30cf1163",
  "links": {
  "self": 
"http://10.0.2.15/identity/v3/roles/306b6d6f97084df983a6f2fa30cf1163;
  },
  "name": "foo_domain_role"
  }
  }
  }
  vagrant@vagrant-ubuntu-trusty-64:~$ curl -s -H 'X-Auth-Token: 
748aa5d5c13c4df2b8d6fb2075ca4c39' 
http://10.0.2.15:5000/v3/roles/306b6d6f97084df983a6f2fa30cf1163 | python 
-mjson.tool
  {
  "role": {
  "domain_id": "0ba1cc88be31429d98866d101d1ed0ba",
  "id": "306b6d6f97084df983a6f2fa30cf1163",
  "links": {
  "self": 
"http://10.0.2.15/identity/v3/roles/306b6d6f97084df983a6f2fa30cf1163;
  },
  "name": "foo_domain_role"
  }
  }

To manage notifications about this bug go to:
https://bugs.launchpad.net/keystone/+bug/1590583/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1590584] [NEW] ldap delete_user fails to cleanup all group membership

2016-06-08 Thread Matthew Edmonds
Public bug reported:

When an LDAP user is deleted, keystone removes it from groups that match
the group_filter conf setting, but fails to remove it from any other
groups. It should remove it from all groups.

** Affects: keystone
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Identity (keystone).
https://bugs.launchpad.net/bugs/1590584

Title:
  ldap delete_user fails to cleanup all group membership

Status in OpenStack Identity (keystone):
  New

Bug description:
  When an LDAP user is deleted, keystone removes it from groups that
  match the group_filter conf setting, but fails to remove it from any
  other groups. It should remove it from all groups.

To manage notifications about this bug go to:
https://bugs.launchpad.net/keystone/+bug/1590584/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1590578] [NEW] global role should not be able to imply domain-specific role

2016-06-08 Thread Guang Yee
Public bug reported:

Global roles should only be able to imply other global roles; they should
not be able to imply domain-specific roles. Domain-specific role
visibility should be limited to the owning domain only.

To reproduce:

1. create a domain-specific role "foo_domain_role" in domain "foo".
2. create a global role "foo_admin".
3. PUT /v3/roles/{prior_role_id}/implies/{implied_role_id}
4. list implied roles for "foo_admin" and you'll see the implied relationship
(a sketch of the intended check follows the example output below).

vagrant@vagrant-ubuntu-trusty-64:~$ curl -s -H 'X-Auth-Token: 
748aa5d5c13c4df2b8d6fb2075ca4c39' 
http://10.0.2.15:5000/v3/roles/45038d5e628b44c1857f33e839b06c77/implies | 
python -mjson.tool
{
"role_inference": {
"implies": [
{
"id": "306b6d6f97084df983a6f2fa30cf1163",
"links": {
"self": 
"http://10.0.2.15/identity/v3/roles/306b6d6f97084df983a6f2fa30cf1163;
},
"name": "foo_domain_role"
},
{
"id": "c256b7047f514515b3138d9efb594b21",
"links": {
"self": 
"http://10.0.2.15/identity/v3/roles/c256b7047f514515b3138d9efb594b21;
},
"name": "bar_admin"
}
],
"prior_role": {
"id": "45038d5e628b44c1857f33e839b06c77",
"links": {
"self": 
"http://10.0.2.15/identity/v3/roles/45038d5e628b44c1857f33e839b06c77;
},
"name": "foo_admin"
}
}
}
vagrant@vagrant-ubuntu-trusty-64:~$ curl -s -H 'X-Auth-Token: 
748aa5d5c13c4df2b8d6fb2075ca4c39' 
http://10.0.2.15:5000/v3/roles/45038d5e628b44c1857f33e839b06c77 | python 
-mjson.tool
{
"role": {
"domain_id": null,
"id": "45038d5e628b44c1857f33e839b06c77",
"links": {
"self": 
"http://10.0.2.15/identity/v3/roles/45038d5e628b44c1857f33e839b06c77;
},
"name": "foo_admin"
}
}
vagrant@vagrant-ubuntu-trusty-64:~$ curl -s -H 'X-Auth-Token: 
748aa5d5c13c4df2b8d6fb2075ca4c39' 
http://10.0.2.15:5000/v3/roles/306b6d6f97084df983a6f2fa30cf1163 | python 
-mjson.tool
{
"role": {
"domain_id": "0ba1cc88be31429d98866d101d1ed0ba",
"id": "306b6d6f97084df983a6f2fa30cf1163",
"links": {
"self": 
"http://10.0.2.15/identity/v3/roles/306b6d6f97084df983a6f2fa30cf1163;
},
"name": "foo_domain_role"
}
}
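
A sketch of the validation this report (and the cross-domain report above)
calls for; the helper below is illustrative and not keystone's actual
implementation:

def check_implied_role_allowed(prior_role, implied_role):
    """Reject implications that would leak a domain-specific role.

    A domain-specific role may only be implied by a role in the same
    domain; a global role (domain_id of None) may imply global roles only.
    """
    prior_domain = prior_role.get('domain_id')
    implied_domain = implied_role.get('domain_id')
    if implied_domain is not None and implied_domain != prior_domain:
        raise ValueError('cannot imply a domain-specific role from '
                         'outside its owning domain')

# Example: a global prior role implying a domain-specific role is refused.
try:
    check_implied_role_allowed({'domain_id': None},
                               {'domain_id': '0ba1cc88be31429d98866d101d1ed0ba'})
except ValueError as exc:
    print(exc)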

** Affects: keystone
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Identity (keystone).
https://bugs.launchpad.net/bugs/1590578

Title:
  global role should not be able to imply domain-specific role

Status in OpenStack Identity (keystone):
  New

Bug description:
  Global roles should only be able to imply other global roles; they
  should not be able to imply domain-specific roles. Domain-specific
  role visibility should be limited to the owning domain only.

  To reproduce:

  1. create a domain-specific role "foo_domain_role" in domain "foo".
  2. create a global role "foo_admin".
  3. PUT /v3/roles/{prior_role_id}/implies/{implied_role_id}
  4. list implied roles for "foo_admin" and you'll see the implied relationship

  vagrant@vagrant-ubuntu-trusty-64:~$ curl -s -H 'X-Auth-Token: 
748aa5d5c13c4df2b8d6fb2075ca4c39' 
http://10.0.2.15:5000/v3/roles/45038d5e628b44c1857f33e839b06c77/implies | 
python -mjson.tool
  {
  "role_inference": {
  "implies": [
  {
  "id": "306b6d6f97084df983a6f2fa30cf1163",
  "links": {
  "self": 
"http://10.0.2.15/identity/v3/roles/306b6d6f97084df983a6f2fa30cf1163;
  },
  "name": "foo_domain_role"
  },
  {
  "id": "c256b7047f514515b3138d9efb594b21",
  "links": {
  "self": 
"http://10.0.2.15/identity/v3/roles/c256b7047f514515b3138d9efb594b21;
  },
  "name": "bar_admin"
  }
  ],
  "prior_role": {
  "id": "45038d5e628b44c1857f33e839b06c77",
  "links": {
  "self": 
"http://10.0.2.15/identity/v3/roles/45038d5e628b44c1857f33e839b06c77;
  },
  "name": "foo_admin"
  }
  }
  }
  vagrant@vagrant-ubuntu-trusty-64:~$ curl -s -H 'X-Auth-Token: 
748aa5d5c13c4df2b8d6fb2075ca4c39' 
http://10.0.2.15:5000/v3/roles/45038d5e628b44c1857f33e839b06c77 | python 
-mjson.tool
  {
  "role": {
  "domain_id": null,
  "id": "45038d5e628b44c1857f33e839b06c77",
  "links": {
  "self": 
"http://10.0.2.15/identity/v3/roles/45038d5e628b44c1857f33e839b06c77;
  },
  "name": "foo_admin"
  }
  }
  vagrant@vagrant-ubuntu-trusty-64:~$ curl -s -H 'X-Auth-Token: 
748aa5d5c13c4df2b8d6fb2075ca4c39' 
http://10.0.2.15:5000/v3/roles/306b6d6f97084df983a6f2fa30cf1163 | python 
-mjson.tool
  {
  "role": {
   

[Yahoo-eng-team] [Bug 1590576] [NEW] Killing parent glance api process doesn't cleanup child workers

2016-06-08 Thread Pooja Ghumre
Public bug reported:

The glance-api process doesn't handle SIGKILL of the parent well: the workers
spawned by the parent API process are left running and never cleaned up.
It looks like the glance service needs to use the oslo.service model used
by other OpenStack components since the Liberty release. Killing the
parent process should result in any spawned workers being cleaned up.

pooja [~] # ps -ef | grep glance-api
root 17575 14543 40 12:40 pts/000:00:01 /opt/pf9/glance/bin/python 
/usr/bin/glance-api
root 17584 17575  0 12:40 pts/000:00:00 /opt/pf9/glance/bin/python 
/usr/bin/glance-api
root 17585 17575  0 12:40 pts/000:00:00 /opt/pf9/glance/bin/python 
/usr/bin/glance-api

pooja [~] # kill -9 17575
pooja [~] # ps -ef | grep glance-api
root 17584 1  0 12:40 pts/000:00:00 /opt/pf9/glance/bin/python 
/usr/bin/glance-api
root 17585 1  0 12:40 pts/000:00:00 /opt/pf9/glance/bin/python 
/usr/bin/glance-api
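
A minimal sketch (not glance's actual code) of the oslo.service process
model the report suggests adopting: the parent launches workers through a
ProcessLauncher, and the children monitor the parent so they exit when it
goes away instead of being left orphaned.

from oslo_config import cfg
from oslo_service import service

CONF = cfg.CONF

class APIService(service.ServiceBase):
    """Placeholder service; a real API server would serve WSGI here."""
    def start(self):
        pass
    def stop(self):
        pass
    def wait(self):
        pass
    def reset(self):
        pass

def main():
    launcher = service.ProcessLauncher(CONF)
    launcher.launch_service(APIService(), workers=2)
    launcher.wait()

if __name__ == '__main__':
    main()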

** Affects: glance
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to Glance.
https://bugs.launchpad.net/bugs/1590576

Title:
  Killing parent glance api process doesn't cleanup child workers

Status in Glance:
  New

Bug description:
  The glance-api process doesn't handle SIGKILL of the parent well: the
  workers spawned by the parent API process are left running and never
  cleaned up. It looks like the glance service needs to use the
  oslo.service model used by other OpenStack components since the Liberty
  release. Killing the parent process should result in any spawned
  workers being cleaned up.

  pooja [~] # ps -ef | grep glance-api
  root 17575 14543 40 12:40 pts/000:00:01 /opt/pf9/glance/bin/python 
/usr/bin/glance-api
  root 17584 17575  0 12:40 pts/000:00:00 /opt/pf9/glance/bin/python 
/usr/bin/glance-api
  root 17585 17575  0 12:40 pts/000:00:00 /opt/pf9/glance/bin/python 
/usr/bin/glance-api

  pooja [~] # kill -9 17575
  pooja [~] # ps -ef | grep glance-api
  root 17584 1  0 12:40 pts/000:00:00 /opt/pf9/glance/bin/python 
/usr/bin/glance-api
  root 17585 1  0 12:40 pts/000:00:00 /opt/pf9/glance/bin/python 
/usr/bin/glance-api

To manage notifications about this bug go to:
https://bugs.launchpad.net/glance/+bug/1590576/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1590556] [NEW] race condition with resize causing old resources not to be free

2016-06-08 Thread Moshe Levi
Public bug reported:

While I was working on fixing resize for PCI passthrough [1] I have
noticed the following issue in resize.


If you are using a small image and you resize-confirm it very fast, the
old resources are not getting freed.


After debugging this issue I found the root cause of it.


A good run of resize is as detailed below:


When doing a resize, _update_usage_from_migration in the resource
tracker is called twice.

1.   The first call we return  the instance type of the new flavor
and will enter this case

https://github.com/openstack/nova/blob/master/nova/compute/resource_tracker.py#L718

2.   Then it will put in the tracked_migrations the migration and
the new instance_type

https://github.com/openstack/nova/blob/master/nova/compute/resource_tracker.py#L763

3.   The second call we return the old  instance_type and will enter
this case

https://github.com/openstack/nova/blob/master/nova/compute/resource_tracker.py#L725

4.   Then in the tracked_migrations it will overwrite  the old value
with migration and the old instance type

5.
https://github.com/openstack/nova/blob/master/nova/compute/resource_tracker.py#L763

6.   When doing resize-confirm the drop_move_claim called with the
old instance type

https://github.com/openstack/nova/blob/9a05d38f48ef0f630c5e49e332075b273cee38b9/nova/compute/manager.py#L3369

7.   The drop_move_claim will compare the instance_type[id] from the
tracked_migrations to the instance_type.id (which is the old one)

8.   And because they are equals it will  remove the old resource
usage

https://github.com/openstack/nova/blob/master/nova/compute/resource_tracker.py#L315-L328


But with a small image like CirrOS, and doing the resize-confirm fast, the
second call of _update_usage_from_migration does not get executed.

The result is that when we enter drop_move_claim it compares against
the new instance_type and this expression is false:
https://github.com/openstack/nova/blob/master/nova/compute/resource_tracker.py#L314

This means that this code block is not executed:
https://github.com/openstack/nova/blob/master/nova/compute/resource_tracker.py#L315-L326
and therefore the old resources are not getting freed.
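
A simplified, self-contained sketch (not the actual nova code) of the
comparison described in steps 7-8; the old usage is only dropped when the
instance_type recorded in tracked_migrations matches the one passed to
drop_move_claim:

def would_drop_old_usage(tracked_migrations, instance_uuid, old_type):
    """Return True if drop_move_claim would free the old usage."""
    migration, itype = tracked_migrations.get(instance_uuid, (None, None))
    return itype is not None and itype['id'] == old_type['id']

# Good run: the second _update_usage_from_migration stored the OLD type.
print(would_drop_old_usage({'uuid-1': ('mig', {'id': 1})}, 'uuid-1', {'id': 1}))
# Race: only the first call ran, so the NEW type (id=2) is still recorded
# and the old resources are never freed.
print(would_drop_old_usage({'uuid-1': ('mig', {'id': 2})}, 'uuid-1', {'id': 1}))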

** Affects: nova
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1590556

Title:
  race condition with resize causing old resources not to be  free

Status in OpenStack Compute (nova):
  New

Bug description:
  While I was working on fixing resize for PCI passthrough [1] I
  noticed the following issue in resize.


  If you are using a small image and you resize-confirm it very quickly,
  the old resources are not freed.


  After debugging this issue I found its root cause.


  A good run of resize is detailed below:


  When doing a resize, _update_usage_from_migration in the resource
  tracker is called twice.

  1.   The first call returns the instance type of the new flavor
  and enters this case:

  
https://github.com/openstack/nova/blob/master/nova/compute/resource_tracker.py#L718

  2.   It then puts the migration and the new instance_type into
  tracked_migrations:

  
https://github.com/openstack/nova/blob/master/nova/compute/resource_tracker.py#L763

  3.   The second call returns the old instance_type and enters
  this case:

  
https://github.com/openstack/nova/blob/master/nova/compute/resource_tracker.py#L725

  4.   It then overwrites that tracked_migrations entry with the
  migration and the old instance type:

  5.
  
https://github.com/openstack/nova/blob/master/nova/compute/resource_tracker.py#L763

  6.   When doing resize-confirm, drop_move_claim is called with the
  old instance type:

  
https://github.com/openstack/nova/blob/9a05d38f48ef0f630c5e49e332075b273cee38b9/nova/compute/manager.py#L3369

  7.   drop_move_claim compares the instance_type['id'] from
  tracked_migrations to instance_type.id (which is the old one)

  8.   Because they are equal, it removes the old resource
  usage:

  
https://github.com/openstack/nova/blob/master/nova/compute/resource_tracker.py#L315-L328


  But with a small image like CirrOS, when the resize-confirm happens
  very quickly, the second call to _update_usage_from_migration is not
  executed.

  The result is that when we enter drop_move_claim it compares against
  the new instance_type, so this expression is false:
  
https://github.com/openstack/nova/blob/master/nova/compute/resource_tracker.py#L314

  This means that this code block is not executed:
  
https://github.com/openstack/nova/blob/master/nova/compute/resource_tracker.py#L315-L326
  and therefore the old resources are never freed.

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1590556/+subscriptions

-- 
Mailing 

[Yahoo-eng-team] [Bug 1508571] Re: Overview panels use too wide date range as default

2016-06-08 Thread OpenStack Infra
Reviewed:  https://review.openstack.org/238204
Committed: 
https://git.openstack.org/cgit/openstack/horizon/commit/?id=5da5fd3196acd26a0a778877bfb683a1e59867f1
Submitter: Jenkins
Branch:master

commit 5da5fd3196acd26a0a778877bfb683a1e59867f1
Author: Timur Sufiev 
Date:   Wed Oct 21 21:31:28 2015 +0300

Reduce the default date range on Overview panel to 1 day

First, the default date range used on the Overview panel is made
configurable (setting OVERVIEW_DAYS_RANGE). Second, its default value
is set to 1 day. Changing the default behavior is aimed to improve
load time of the default page in the presence of large amounts of
data. If OVERVIEW_DAYS_RANGE setting is explicitly set to None, the
behavior remains the same - the default date range is from the
beginning of the current month until today.

Co-Authored-By: Dmitry Sutyagin 
Change-Id: I55a0397f69e33ba9c8fb1f27d57838efcd8648af
Closes-Bug: #1508571


** Changed in: horizon
   Status: In Progress => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1508571

Title:
  Overview panels use too wide date range as default

Status in OpenStack Dashboard (Horizon):
  Fix Released

Bug description:
  The standard default date range for the Overview panels (Project and
  Admin) starts on the first day of the current month and runs until
  today. This default causes long response times on environments with a
  lot of data to crunch, which is worsened by the fact that users are
  always redirected to the Overview panel by default (even if they don't
  want to see the Usage Stats).
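
  For operators, a minimal override in Horizon's local_settings.py
  (assuming the OVERVIEW_DAYS_RANGE setting introduced by the fix above)
  would look like the following sketch:

      # openstack_dashboard/local/local_settings.py
      # Show only the last day of usage data on the Overview panels;
      # set this to None to restore the old month-to-date behaviour.
      OVERVIEW_DAYS_RANGE = 1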

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1508571/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1589993] Re: Murano cannot deploy with federated user

2016-06-08 Thread Dolph Mathews
I imagine this will be addressed by (or nearly addressed by) having
concrete role assignments for federated users in keystone:
https://review.openstack.org/#/c/284943/

** Also affects: keystone
   Importance: Undecided
   Status: New

** Changed in: keystone
 Assignee: (unassigned) => Ron De Rose (ronald-de-rose)

** Changed in: keystone
   Importance: Undecided => Wishlist

** Changed in: keystone
   Status: New => Triaged

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Identity (keystone).
https://bugs.launchpad.net/bugs/1589993

Title:
  Murano cannot deploy with federated user

Status in OpenStack Identity (keystone):
  Triaged
Status in Murano:
  New

Bug description:
  Deploying with federated user throws an exception in murano-engine
  with:

  Exception Could not find role: 9fe2ff9ee4384b1894a90878d3e92bab (HTTP
  404)

  The mentioned role is _member_

  The full trace:

  2016-06-07 15:08:05.732 8194 ERROR murano.common.engine   File 
"/usr/lib/python2.7/dist-packages/murano/common/engine.py", line 159, in execute
  2016-06-07 15:08:05.732 8194 ERROR murano.common.engine 
self._create_trust()
  2016-06-07 15:08:05.732 8194 ERROR murano.common.engine   File 
"/usr/lib/python2.7/dist-packages/murano/common/engine.py", line 282, in 
_create_trust
  2016-06-07 15:08:05.732 8194 ERROR murano.common.engine 
self._session.token, self._session.project_id)
  2016-06-07 15:08:05.732 8194 ERROR murano.common.engine   File 
"/usr/lib/python2.7/dist-packages/murano/common/auth_utils.py", line 98, in 
create_trust
  2016-06-07 15:08:05.732 8194 ERROR murano.common.engine project=project)
  2016-06-07 15:08:05.732 8194 ERROR murano.common.engine   File 
"/usr/lib/python2.7/dist-packages/keystoneclient/v3/contrib/trusts.py", line 
75, in create
  2016-06-07 15:08:05.732 8194 ERROR murano.common.engine **kwargs)
  2016-06-07 15:08:05.732 8194 ERROR murano.common.engine   File 
"/usr/lib/python2.7/dist-packages/keystoneclient/base.py", line 75, in func
  2016-06-07 15:08:05.732 8194 ERROR murano.common.engine return f(*args, 
**new_kwargs)
  2016-06-07 15:08:05.732 8194 ERROR murano.common.engine   File 
"/usr/lib/python2.7/dist-packages/keystoneclient/base.py", line 339, in create
  2016-06-07 15:08:05.732 8194 ERROR murano.common.engine self.key)
  2016-06-07 15:08:05.732 8194 ERROR murano.common.engine   File 
"/usr/lib/python2.7/dist-packages/keystoneclient/base.py", line 171, in _post
  2016-06-07 15:08:05.732 8194 ERROR murano.common.engine resp, body = 
self.client.post(url, body=body, **kwargs)
  2016-06-07 15:08:05.732 8194 ERROR murano.common.engine   File 
"/usr/lib/python2.7/dist-packages/keystoneauth1/adapter.py", line 179, in post
  2016-06-07 15:08:05.732 8194 ERROR murano.common.engine return 
self.request(url, 'POST', **kwargs)
  2016-06-07 15:08:05.732 8194 ERROR murano.common.engine   File 
"/usr/lib/python2.7/dist-packages/keystoneauth1/adapter.py", line 331, in 
request
  2016-06-07 15:08:05.732 8194 ERROR murano.common.engine resp = 
super(LegacyJsonAdapter, self).request(*args, **kwargs)
  2016-06-07 15:08:05.732 8194 ERROR murano.common.engine   File 
"/usr/lib/python2.7/dist-packages/keystoneauth1/adapter.py", line 98, in request
  2016-06-07 15:08:05.732 8194 ERROR murano.common.engine return 
self.session.request(url, method, **kwargs)
  2016-06-07 15:08:05.732 8194 ERROR murano.common.engine   File 
"/usr/lib/python2.7/dist-packages/positional/__init__.py", line 94, in inner
  2016-06-07 15:08:05.732 8194 ERROR murano.common.engine return 
func(*args, **kwargs)
  2016-06-07 15:08:05.732 8194 ERROR murano.common.engine   File 
"/usr/lib/python2.7/dist-packages/keystoneclient/session.py", line 420, in 
request
  2016-06-07 15:08:05.732 8194 ERROR murano.common.engine raise 
exceptions.from_response(resp, method, url)
  2016-06-07 15:08:05.732 8194 ERROR murano.common.engine NotFound: Could not 
find role: 9fe2ff9ee4384b1894a90878d3e92bab (HTTP 404) (Request-ID: 
req-760d033b-e456-4915-b197-e450d4c8a405)

  
  Something seems to be wrong with creating the trust.

To manage notifications about this bug go to:
https://bugs.launchpad.net/keystone/+bug/1589993/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1575661] Re: can not deploy a partition image to Ironic node

2016-06-08 Thread Pavlo Shchelokovskyy
Lucas,

I was using the agent_ipmitool driver in Ironic, and ubuntu images.


Anyway, I can no longer reproduce this bug on latest master, so please close as 
invalid. Feel free to reopen if it resurfaces.

** Changed in: ironic
   Status: Incomplete => Invalid

** Changed in: nova
   Status: Incomplete => Invalid

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1575661

Title:
  can not deploy a partition image to Ironic node

Status in Ironic:
  Invalid
Status in OpenStack Compute (nova):
  Invalid

Bug description:
  Using fresh master of DevStack, I can not deploy partition images to
  Ironic nodes via Nova.

  I have two images in Glance - kernel image and partition image with
  kernel_id property set.

  I have configured Ironic nodes and nova flavor with capabilities:
  "boot_option: local" as described in [0].

  When I try to boot nova instance with the partition image and the
  configured flavor, instance goes to error:

  $openstack server list
  +--+++--+
  | ID   | Name   | Status | Networks |
  +--+++--+
  | 6cde85d2-47ad-446b-9a1f-960dbcca5199 | parted | ERROR  |  |
  +--+++--+

  Instance is assigned to Ironic node but node is not moved to deploying
  state

  $openstack baremetal list
  
+--++--+-++-+
  | UUID | Name   | Instance UUID   
 | Power State | Provisioning State | Maintenance |
  
+--++--+-++-+
  | 95d3353f-61a6-44ba-8485-2881d1138ce1 | node-0 | None
 | power off   | available  | False   |
  | 48112a56-8f8b-42fc-b143-742cf4856e78 | node-1 | 
6cde85d2-47ad-446b-9a1f-960dbcca5199 | power off   | available  | False 
  |
  | c66a1035-5edf-434b-9d09-39ecc9069e02 | node-2 | None
 | power off   | available  | False   |
  
+--++--+-++-+

  In n-cpu.log I see the following errors:

  2016-04-27 15:26:13.190 ERROR ironicclient.common.http 
[req-077efca4-1776-443b-bd70-0769c09a0e54 demo demo] Error contacting Ironic 
server: Instance 6cde85d2-47ad-446b-9a1f-960dbcca5199 is already associated with
   a node, it cannot be associated with this other node 
c66a1035-5edf-434b-9d09-39ecc9069e02 (HTTP 409). Attempt 2 of 2
  2016-04-27 15:26:13.190 ERROR nova.compute.manager 
[req-077efca4-1776-443b-bd70-0769c09a0e54 demo demo] [instance: 
6cde85d2-47ad-446b-9a1f-960dbcca5199] Instance failed to spawn
  2016-04-27 15:26:13.190 TRACE nova.compute.manager [instance: 
6cde85d2-47ad-446b-9a1f-960dbcca5199] Traceback (most recent call last):
  2016-04-27 15:26:13.190 TRACE nova.compute.manager [instance: 
6cde85d2-47ad-446b-9a1f-960dbcca5199]   File 
"/opt/stack/nova/nova/compute/manager.py", line 2209, in _build_resources
  2016-04-27 15:26:13.190 TRACE nova.compute.manager [instance: 
6cde85d2-47ad-446b-9a1f-960dbcca5199] yield resources
  2016-04-27 15:26:13.190 TRACE nova.compute.manager [instance: 
6cde85d2-47ad-446b-9a1f-960dbcca5199]   File 
"/opt/stack/nova/nova/compute/manager.py", line 2055, in _build_and_run_instance
  2016-04-27 15:26:13.190 TRACE nova.compute.manager [instance: 
6cde85d2-47ad-446b-9a1f-960dbcca5199] block_device_info=block_device_info)
  2016-04-27 15:26:13.190 TRACE nova.compute.manager [instance: 
6cde85d2-47ad-446b-9a1f-960dbcca5199]   File 
"/opt/stack/nova/nova/virt/ironic/driver.py", line 698, in spawn
  2016-04-27 15:26:13.190 TRACE nova.compute.manager [instance: 
6cde85d2-47ad-446b-9a1f-960dbcca5199] self._add_driver_fields(node, 
instance, image_meta, flavor)
  2016-04-27 15:26:13.190 TRACE nova.compute.manager [instance: 
6cde85d2-47ad-446b-9a1f-960dbcca5199]   File 
"/opt/stack/nova/nova/virt/ironic/driver.py", line 366, in _add_driver_fields
  2016-04-27 15:26:13.190 TRACE nova.compute.manager [instance: 
6cde85d2-47ad-446b-9a1f-960dbcca5199] retry_on_conflict=False)
  2016-04-27 15:26:13.190 TRACE nova.compute.manager [instance: 
6cde85d2-47ad-446b-9a1f-960dbcca5199]   File 
"/opt/stack/nova/nova/virt/ironic/client_wrapper.py", line 139, in call
  2016-04-27 15:26:13.190 TRACE nova.compute.manager [instance: 
6cde85d2-47ad-446b-9a1f-960dbcca5199] return self._multi_getattr(client, 
method)(*args, **kwargs)
  2016-04-27 15:26:13.190 TRACE nova.compute.manager [instance: 

[Yahoo-eng-team] [Bug 1587070] Re: adminPass is missing from the parameter list of Change administrative password

2016-06-08 Thread Gergely Csatari
** Project changed: openstack-api-site => nova

** Changed in: nova
   Status: New => In Progress

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1587070

Title:
  adminPass is missing from the parameter list of Change administrative
  password

Status in OpenStack Compute (nova):
  In Progress

Bug description:
  According to the example, "Change administrative password" has a parameter
called "adminPass", which is quite logical when changing a password.
  This "adminPass" parameter is not listed in the parameter list of "Change
administrative password" in the web API reference [1] and is not mentioned in
Chapter 4.4.3 of the PDF API reference [2].
  [1]: http://developer.openstack.org/api-ref-compute-v2.1.html#changePassword
  [2]: http://api.openstack.org/api-ref-guides/bk-api-ref.pdf
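
  For reference, the example in question is an action on the server with a
  JSON body roughly like the sketch below (the server ID and password are
  placeholders):

      POST /v2.1/servers/{server_id}/action

      {
          "changePassword": {
              "adminPass": "new-secret-password"
          }
      }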

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1587070/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1587070] [NEW] adminPass is missing from the parameter list of Change administrative password

2016-06-08 Thread Launchpad Bug Tracker
You have been subscribed to a public bug:

According to the example, "Change administrative password" has a parameter called
"adminPass", which is quite logical when changing a password.
This "adminPass" parameter is not listed in the parameter list of "Change
administrative password" in the web API reference [1] and is not mentioned in
Chapter 4.4.3 of the PDF API reference [2].
[1]: http://developer.openstack.org/api-ref-compute-v2.1.html#changePassword
[2]: http://api.openstack.org/api-ref-guides/bk-api-ref.pdf

** Affects: nova
 Importance: Undecided
 Assignee: Gergely Csatari (gergely-csatari)
 Status: New

-- 
adminPass is missing from the parameter list of Change administrative password
https://bugs.launchpad.net/bugs/1587070
You received this bug notification because you are a member of Yahoo! 
Engineering Team, which is subscribed to OpenStack Compute (nova).

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1590426] [NEW] Keystone Federated Identity assertion name not included in token

2016-06-08 Thread Alessandro Pilotti
Public bug reported:

When using keystone Federated Identity, the user name, based on the
assertion mapping, is replaced in Keystone tokens by the autogenerated
ID, resulting in e.g. Horizon showing the user's ID instead of the name
(see attachment).

Running "openstack user list" shows the correct data:

+--+--+
| ID   | Name |
+--+--+
| 1835f12340674587b8e9b55ac1b43a3c | te...@acme.com   |
+--+--+

The issue is clearly visible in the logs:

016-05-26 10:08:02.809220 DEBUG:keystoneauth.identity.v3.base:{"token":
{"issued_at": "2016-05-26T10:08:02.804697Z", "user": {"OS-FEDERATION":
{"identity_provider": {"id": "idp_1"}, "protocol": {"id": "saml2"},
"groups": [{"id": "b07974d2891f4d939b91a288ea933b1e"}]}, "domain":
{"id": "Federated", "name": "Federated"}, "id":
"1835f12340674587b8e9b55ac1b43a3c", "name":
"1835f12340674587b8e9b55ac1b43a3c"}, "methods": ["token"], "expires_at":
"2016-05-26T11:08:02.804676Z", "audit_ids": ["4O86fwqsSd6LSge4123sdx"]}}

** Affects: keystone
 Importance: Undecided
 Status: New

** Attachment added: "Horizon showing the data coming"
   
https://bugs.launchpad.net/bugs/1590426/+attachment/4679819/+files/keystone_federated_horizon_issue.png

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Identity (keystone).
https://bugs.launchpad.net/bugs/1590426

Title:
  Keystone Federated Identity assertion name not included in token

Status in OpenStack Identity (keystone):
  New

Bug description:
  When using keystone Federated Identity, the user name, based on the
  assertion mapping, is replaced in Keystone tokens by the autogenerated
  ID, resulting in e.g. Horizon showing the user's ID instead of the
  name (see attachment).

  Running "openstack user list" shows the correct data:

  +--+--+
  | ID   | Name |
  +--+--+
  | 1835f12340674587b8e9b55ac1b43a3c | te...@acme.com   |
  +--+--+

  The issue is clearly visible in the logs:

  016-05-26 10:08:02.809220
  DEBUG:keystoneauth.identity.v3.base:{"token": {"issued_at":
  "2016-05-26T10:08:02.804697Z", "user": {"OS-FEDERATION":
  {"identity_provider": {"id": "idp_1"}, "protocol": {"id": "saml2"},
  "groups": [{"id": "b07974d2891f4d939b91a288ea933b1e"}]}, "domain":
  {"id": "Federated", "name": "Federated"}, "id":
  "1835f12340674587b8e9b55ac1b43a3c", "name":
  "1835f12340674587b8e9b55ac1b43a3c"}, "methods": ["token"],
  "expires_at": "2016-05-26T11:08:02.804676Z", "audit_ids":
  ["4O86fwqsSd6LSge4123sdx"]}}

To manage notifications about this bug go to:
https://bugs.launchpad.net/keystone/+bug/1590426/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1369465] Re: nova resize doesn't resize(extend) rbd disk files when using rbd disk backend

2016-06-08 Thread Edward Hope-Morley
** Also affects: cloud-archive
   Importance: Undecided
   Status: New

** Changed in: nova (Ubuntu)
 Assignee: Liang Chen (cbjchen) => Edward Hope-Morley (hopem)

** Changed in: nova (Ubuntu)
   Status: In Progress => New

** Changed in: nova (Ubuntu)
 Assignee: Edward Hope-Morley (hopem) => (unassigned)

** Changed in: nova (Ubuntu)
   Status: New => Fix Released

** Summary changed:

- nova resize doesn't resize(extend) rbd disk files when using rbd disk backend
+ [SRU] nova resize doesn't resize(extend) rbd disk files when using rbd disk 
backend

** Tags removed: cts

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1369465

Title:
  [SRU] nova resize doesn't resize(extend) rbd disk files when using rbd
  disk backend

Status in Ubuntu Cloud Archive:
  New
Status in OpenStack Compute (nova):
  Fix Released
Status in OpenStack Compute (nova) liberty series:
  Fix Committed
Status in nova package in Ubuntu:
  Fix Released

Bug description:
  [Impact]

   * Not able to resize rbd backed disk image.

  [Test Case]

  1 - boot an instance with rbd backed disk image
  2 - resize it
  3 - log into the VM
  4 - the disk is not enlarged without this patch

  [Regression Potential]

   * None

  
  tested with nova trunk commit eb860c2f219b79e4f4c5984415ee433145197570

  Configured Nova to use rbd disk backend

  nova.conf

  [libvirt]
  images_type=rbd

  Instances boot successfully and the instance disks are in rbd pools.
  When performing a nova resize on an existing instance, memory and CPU
  change to the new flavor's values, but the instance disk size doesn't
  change.
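
  For reference, the test case above maps to the standard client commands
  (server and flavor names are placeholders):

      nova resize <server> <larger-flavor>
      nova resize-confirm <server>
      # without the fix, the root disk inside the guest (e.g. as shown by
      # lsblk) still has the old size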

To manage notifications about this bug go to:
https://bugs.launchpad.net/cloud-archive/+bug/1369465/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1569404] Re: Remove threading before process forking

2016-06-08 Thread OpenStack Infra
Reviewed:  https://review.openstack.org/313277
Committed: 
https://git.openstack.org/cgit/openstack/neutron/commit/?id=483c5982c020ff21ceecf1d575c2d8fad2937d6e
Submitter: Jenkins
Branch:master

commit 483c5982c020ff21ceecf1d575c2d8fad2937d6e
Author: Dmitriy Ukhlov 
Date:   Fri May 6 08:41:07 2016 +

Revert "Revert "Remove threading before process forking""

This reverts commit b1cdba1696f5d4ec71d37a773501bd4f9e0cddb9

Original patch was reverted because it broke neutron plugin's
backward compatibility and needed more work.

This patch fixes that problems:
1) the original behaviour of the add_agent_status_check,
   start_periodic_l3_agent_status_check and
   start_periodic_dhcp_agent_status_check methods is deprecated but kept
   for use in third-party plugins for backward compatibility
2) new add_agent_status_check_worker, add_periodic_l3_agent_status_check
   and add_periodic_dhcp_agent_status_check methods are implemented
   instead and are used for implementing plugins in the neutron codebase

Closes-Bug: #1569404

Change-Id: I3a32a95489831f0d862930384309eefdc881d8f6


** Changed in: neutron
   Status: In Progress => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1569404

Title:
  Remove threading before process forking

Status in neutron:
  Fix Released

Bug description:
  Forking processes when a few threads are already running is a
  potentially unsafe operation and can cause a lot of problems, because
  only the current thread continues to run in the child process. Any
  resource locked by another thread will remain locked forever.

  We faced this problem during oslo.messaging development and added a
  workaround to hide it: https://review.openstack.org/#/c/274255/ I
  tried to fix the problem in oslo.service:
  https://review.openstack.org/#/c/270832/ but the oslo folks said that
  the fix is ugly and that it is the wrong way to add workarounds to
  common libraries because projects use them incorrectly. I think that
  is fair.
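
  A minimal sketch of the hazard (assuming POSIX and Python 3; this is an
  illustration, not code from any of the projects involved):

      import os
      import threading
      import time

      lock = threading.Lock()

      def holder():
          with lock:
              time.sleep(5)      # the lock is held while the parent forks

      threading.Thread(target=holder).start()
      time.sleep(0.5)            # make sure the lock has been taken

      pid = os.fork()
      if pid == 0:
          # child: the holder thread does not exist here, so nobody will
          # ever release the lock; without the timeout this would hang
          print('child acquired lock:', lock.acquire(timeout=2))  # False
          os._exit(0)
      os.waitpid(pid, 0)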

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1569404/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1551805] Re: add arp_responder flag to linuxbridge agent

2016-06-08 Thread KATO Tomoyuki
** Changed in: openstack-manuals
 Assignee: Anseela M M (anseela-m00) => (unassigned)

** Changed in: openstack-manuals
   Status: Confirmed => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1551805

Title:
  add arp_responder flag to linuxbridge agent

Status in neutron:
  Invalid
Status in openstack-manuals:
  Fix Released

Bug description:
  https://review.openstack.org/278597
  Dear bug triager. This bug was created since a commit was marked with 
DOCIMPACT.
  Your project "openstack/neutron" is set up so that we directly report the 
documentation bugs against it. If this needs changing, the docimpact-group 
option needs to be added for the project. You can ask the OpenStack infra team 
(#openstack-infra on freenode) for help if you need to.

  commit bbd881f3a970143e1954cb277e523526c5d0
  Author: Mark McClain 
  Date:   Wed Feb 10 13:28:21 2016 -0500

  add arp_responder flag to linuxbridge agent
  
  When the ARP responder is enabled, secondary IP addresses explicitly
  allowed by via the allowed-address-pairs extensions do not resolve.
  This change adds the ability to enable the local ARP responder similar
  to the feature in the OVS agent.  This change disables local ARP
  responses by default, so ARP traffic will be sent over the overlay.
  
  DocImpact
  UpgradeImpact
  
  Change-Id: I5da4afa44fc94032880ea59ec574df504470fb4a
  Closes-Bug: 1445089
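
  For documentation purposes, enabling the flag would look roughly like
  the following (assuming the option sits alongside the other VXLAN
  options in the Linux bridge agent configuration file):

      # linuxbridge_agent.ini
      [vxlan]
      arp_responder = true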

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1551805/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1590397] [NEW] MTU setting too low when mixing Vlan and Vxlan

2016-06-08 Thread Dr. Jens Rosenboom
Public bug reported:

When booting an instance on a network with encapsulation type vxlan (and
thus an MTU automatically set to 1450), this will also lower the MTU of
the integration bridge to that value:

$ ip link show br-int
6: br-int:  mtu 1450 qdisc noqueue state UNKNOWN mode 
DEFAULT group default 
link/ether 7e:39:32:87:27:44 brd ff:ff:ff:ff:ff:ff

In turn, all newly created ports for routers, DHCP agents and the like,
will also be created with MTU 1450, as they are forked off br-int.

However, this will be incorrect if there are other network types in use
at the same time. So if I boot an instance on a network with type vlan,
the instance interface will still have MTU 1500, but the interface for
its router will have MTU 1450, leading to possible errors.

** Affects: neutron
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1590397

Title:
  MTU setting too low when mixing Vlan and Vxlan

Status in neutron:
  New

Bug description:
  When booting an instance on a network with encapsulation type vxlan
  (and thus an MTU automatically set to 1450), this will also lower the
  MTU of the integration bridge to that value:

  $ ip link show br-int
  6: br-int:  mtu 1450 qdisc noqueue state UNKNOWN mode 
DEFAULT group default 
  link/ether 7e:39:32:87:27:44 brd ff:ff:ff:ff:ff:ff

  In turn, all newly created ports for routers, DHCP agents and the
  like, will also be created with MTU 1450, as they are forked off br-
  int.

  However, this will be incorrect if there are other network types in
  use at the same time. So if I boot an instance on a network with type
  vlan, the instance interface will still have MTU 1500, but the
  interface for its router will have MTU 1450, leading to possible
  errors.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1590397/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1557407] Re: macvtap: add devstack support for macvtap agent

2016-06-08 Thread OpenStack Infra
Reviewed:  https://review.openstack.org/303455
Committed: 
https://git.openstack.org/cgit/openstack/neutron/commit/?id=8e5623d624a9855fe842baca3cf9b11948113592
Submitter: Jenkins
Branch:master

commit 8e5623d624a9855fe842baca3cf9b11948113592
Author: Andreas Scheuring 
Date:   Fri Apr 8 13:25:52 2016 +0200

Devstack support for macvtap agent

Macvtap agent can now be configured via this devstack.
Note that it is only supported in multinode environments
as compute node. The controller node still needs to run
linuxbridge or ovs.

Documentation will be added in devstack via [1]

[1] https://review.openstack.org/292778

Example:

OVS Controller
--
Make sure that the controller
- loads the macvtap ml2 driver
- uses vlan or flat networking

Macvtap Compute Node local.conf
---
[[local|localrc]]
SERVICE_HOST=1.2.3.4
MYSQL_HOST=$SERVICE_HOST
RABBIT_HOST=$SERVICE_HOST
disable_all_services
enable_plugin neutron git://git.openstack.org/openstack/neutron
enable_service n-cpu
enable_service q-agt
Q_AGENT=macvtap
PHYSICAL_NETWORK=default
[[post-config|/$Q_PLUGIN_CONF_FILE]]
[macvtap]
physical_interface_mappings = $PHYSICAL_NETWORK:eth1

Closes-Bug: #1557407
Change-Id: I0dd4c0d34d5f1c35b397e5e392ce107fb984b0ba


** Changed in: neutron
   Status: In Progress => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1557407

Title:
  macvtap: add devstack support for macvtap agent

Status in neutron:
  Fix Released

Bug description:
  The Macvtap agent that was introduced in Mitaka [1] requires some devstack 
support. As only compute attachments are supported (at the moment), the 
devstack support will be restricted to
  - Single Nodes without l3 & dhcp agent
  - Multi Nodes running ovs or lb on the controller/network node

  
  [1] https://bugs.launchpad.net/neutron/+bug/1480979

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1557407/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1590091] Re: bug in handling of ISOLATE thread policy

2016-06-08 Thread Stephen Finucane
*** This bug is a duplicate of bug 1550317 ***
https://bugs.launchpad.net/bugs/1550317

** This bug has been marked a duplicate of bug 1550317
   'hw:cpu_thread_policy=isolate' does not schedule on non-HT hosts

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1590091

Title:
  bug in handling of ISOLATE thread policy

Status in OpenStack Compute (nova):
  New

Bug description:
  I'm running stable/mitaka in devstack.  I've got a small system with 2
  pCPUs, both marked as available for pinning.  They're two cores of a
  single processor, no threads.  "virsh capabilities" shows:


  
  


  It is my understanding that I should be able to boot up an instance
  with two dedicated CPUs and a thread policy of ISOLATE, since I have
  two physical cores and no threads.  (Is this correct?)

  Unfortunately, the NUMATopology filter fails my host.  The problem is
  in _pack_instance_onto_cores():

  if (instance_cell.cpu_thread_policy ==
  fields.CPUThreadAllocationPolicy.ISOLATE):
  # make sure we have at least one fully free core
  if threads_per_core not in sibling_sets:
  return

  pinning = _get_pinning(1,  # we only want to "use" one thread per core
 sibling_sets[threads_per_core],
 instance_cell.cpuset)

  
  Right before the call to _get_pinning() we have the following:

  (Pdb) instance_cell.cpu_thread_policy
  u'isolate'
  (Pdb) threads_per_core
  1
  (Pdb) sibling_sets 
  defaultdict(, {1: [CoercedSet([0, 1])], 2: [CoercedSet([0, 1])]})
  (Pdb) sibling_sets[threads_per_core]
  [CoercedSet([0, 1])]
  (Pdb) instance_cell.cpuset
  CoercedSet([0, 1])

  In this code snippet, _get_pinning() returns None, causing the filter
  to fail the host.  Tracing a bit further in, in _get_pinning() we have
  the following line:

  if threads_no * len(sibling_set) < len(instance_cores):
  return

  Coming into this line of code the variables look like this:

  (Pdb) threads_no
  1
  (Pdb) sibling_set
  [CoercedSet([0, 1])]
  (Pdb) len(sibling_set)
  1
  (Pdb) instance_cores
  CoercedSet([0, 1])
  (Pdb) len(instance_cores)
  2

  So the test evaluates to True, and we bail out.

  I don't think this is correct, we should be able to schedule on this
  host.
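
  For reference, a flavor that exercises this code path would typically
  be set up along these lines (the flavor name and sizes are just
  placeholders):

      openstack flavor create --vcpus 2 --ram 1024 --disk 10 pinned.small
      openstack flavor set pinned.small \
          --property hw:cpu_policy=dedicated \
          --property hw:cpu_thread_policy=isolate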

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1590091/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1585092] Re: video capture of failed integration tests

2016-06-08 Thread OpenStack Infra
Reviewed:  https://review.openstack.org/320004
Committed: 
https://git.openstack.org/cgit/openstack/horizon/commit/?id=d63f93a819e761a2be9c74112c9a55e4aa392e8e
Submitter: Jenkins
Branch:master

commit d63f93a819e761a2be9c74112c9a55e4aa392e8e
Author: Sergei Chipiga 
Date:   Mon May 23 17:05:00 2016 +0300

Implement video capture for failed tests

Example of video recording Idd218e09c0f8df8ec7740173d5f2d856b8baafa1

Change-Id: I350950095f840f63638175b09ed083109aada2da
Closes-Bug: #1585092


** Changed in: horizon
   Status: In Progress => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1585092

Title:
  video capture of failed integration tests

Status in OpenStack Dashboard (Horizon):
  Fix Released

Bug description:
  We need to capture video of failed integration tests to detect reasons
  of failures.

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1585092/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1590316] [NEW] exception handling of callback failures doesn't take into account retriable errors

2016-06-08 Thread Kevin Benton
Public bug reported:

Subscribers to callback events can perform DB operations that may
encounter deadlocks or other DB errors that should be retried after
restarting the entire transaction. However, in the cases where we catch
exceptions.CallbackFailure and then raise a different exception, the DB
retry wrapper cannot recognize that it is a retriable failure and will
make it fatal for the request. This can lead to a user getting a
SubnetInUse or something similar because of something completely
unrelated to the actual validation.
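
A self-contained sketch of the failure mode (all names are simplified
stand-ins, not actual neutron classes): the retry wrapper only retries
DeadlockError, but the handler replaces it with SubnetInUse, so the
request fails instead of being retried.

    class DeadlockError(Exception): pass     # a retriable DB error
    class CallbackFailure(Exception): pass   # wraps subscriber errors
    class SubnetInUse(Exception): pass       # user-visible, non-retriable

    def retry_on_deadlock(func):
        def wrapper(*args, **kwargs):
            for attempt in range(3):
                try:
                    return func(*args, **kwargs)
                except DeadlockError:
                    continue                 # restart the whole transaction
            raise DeadlockError()
        return wrapper

    @retry_on_deadlock
    def delete_subnet():
        try:
            raise CallbackFailure(DeadlockError())  # subscriber hit a deadlock
        except CallbackFailure:
            # the retriable inner error is swallowed here, so the wrapper
            # never sees it and the caller gets SubnetInUse instead of a retry
            raise SubnetInUse()

    # delete_subnet() now raises SubnetInUse rather than being retried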

** Affects: neutron
 Importance: Medium
 Assignee: Kevin Benton (kevinbenton)
 Status: New

** Changed in: neutron
   Importance: Undecided => Medium

** Changed in: neutron
 Assignee: (unassigned) => Kevin Benton (kevinbenton)

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1590316

Title:
  exception handling of callback failures doesn't take into account
  retriable errors

Status in neutron:
  New

Bug description:
  Subscribers to callback events can perform DB operations that may
  encounter deadlocks or other DB errors that should be retried after
  restarting the entire transaction. However, in the cases where we
  catch exceptions.CallbackFailure and then raise a different exception,
  the DB retry wrapper cannot recognize that it is a retriable failure and
  will make it fatal for the request. This can lead to a user getting a
  SubnetInUse or something similar because of something completely
  unrelated to the actual validation.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1590316/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1590310] [NEW] Reduce excessive LBaaSv2 logging

2016-06-08 Thread Harald Jensås
Public bug reported:

Description of problem:
When a tenant creates a v2 HAProxy load balancer without a listener, minimally

$ neutron lbaas-loadbalancer-create 

the following kind of message is then logged every 10 seconds in lbaas-
agent.log for each such load balancer of any tenant:

2016-04-07 17:34:18.763 25200 WARNING
neutron_lbaas.drivers.haproxy.namespace_driver [-] Stats socket not
found for loadbalancer c86c8dd7-74a1-435f-ad60-f47d2006cb06

At minimum, the level and frequency of these messages should be
adjusted.

** Affects: neutron
 Importance: Undecided
 Status: New


** Tags: lbaas

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1590310

Title:
  Reduce excessive LBaaSv2 logging

Status in neutron:
  New

Bug description:
  Description of problem:
  When a tenant creates a v2 HAProxy load balancer without a listener, minimally

  $ neutron lbaas-loadbalancer-create 

  the following kind of message is then logged every 10 seconds in lbaas-
  agent.log for each such load balancer of any tenant:

  2016-04-07 17:34:18.763 25200 WARNING
  neutron_lbaas.drivers.haproxy.namespace_driver [-] Stats socket not
  found for loadbalancer c86c8dd7-74a1-435f-ad60-f47d2006cb06

  At minimum, the level and frequency of these messages should be
  adjusted.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1590310/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1590298] [NEW] DB retry wrapper needs to look for savepoint errors

2016-06-08 Thread Kevin Benton
Public bug reported:

If mysql triggers a deadlock error while in a nested transaction, the
savepoint can be lost, which will cause a DBError from sqlalchemy that
looks like the following:


2016-06-08 05:15:22.609 18530 ERROR oslo_db.sqlalchemy.exc_filters 
[req-287a245f-f5da-4126-9625-148e889b3443 tempest-NetworksTestDHCPv6-2134889417 
-] DBAPIError exception wrapped from (pymysql.err.InternalError) (1305, 
u'SAVEPOINT sa_savepoint_1 does not exist') [SQL: u'ROLLBACK TO SAVEPOINT 
sa_savepoint_1']
2016-06-08 05:15:22.609 18530 ERROR oslo_db.sqlalchemy.exc_filters Traceback 
(most recent call last):
2016-06-08 05:15:22.609 18530 ERROR oslo_db.sqlalchemy.exc_filters   File 
"/usr/local/lib/python2.7/dist-packages/sqlalchemy/engine/base.py", line 1139, 
in _execute_context
2016-06-08 05:15:22.609 18530 ERROR oslo_db.sqlalchemy.exc_filters context)
2016-06-08 05:15:22.609 18530 ERROR oslo_db.sqlalchemy.exc_filters   File 
"/usr/local/lib/python2.7/dist-packages/sqlalchemy/engine/default.py", line 
450, in do_execute
2016-06-08 05:15:22.609 18530 ERROR oslo_db.sqlalchemy.exc_filters 
cursor.execute(statement, parameters)
2016-06-08 05:15:22.609 18530 ERROR oslo_db.sqlalchemy.exc_filters   File 
"/usr/local/lib/python2.7/dist-packages/pymysql/cursors.py", line 161, in 
execute
2016-06-08 05:15:22.609 18530 ERROR oslo_db.sqlalchemy.exc_filters result = 
self._query(query)
2016-06-08 05:15:22.609 18530 ERROR oslo_db.sqlalchemy.exc_filters   File 
"/usr/local/lib/python2.7/dist-packages/pymysql/cursors.py", line 317, in _query
2016-06-08 05:15:22.609 18530 ERROR oslo_db.sqlalchemy.exc_filters 
conn.query(q)
2016-06-08 05:15:22.609 18530 ERROR oslo_db.sqlalchemy.exc_filters   File 
"/usr/local/lib/python2.7/dist-packages/pymysql/connections.py", line 835, in 
query
2016-06-08 05:15:22.609 18530 ERROR oslo_db.sqlalchemy.exc_filters 
self._affected_rows = self._read_query_result(unbuffered=unbuffered)
2016-06-08 05:15:22.609 18530 ERROR oslo_db.sqlalchemy.exc_filters   File 
"/usr/local/lib/python2.7/dist-packages/pymysql/connections.py", line 1019, in 
_read_query_result
2016-06-08 05:15:22.609 18530 ERROR oslo_db.sqlalchemy.exc_filters 
result.read()
2016-06-08 05:15:22.609 18530 ERROR oslo_db.sqlalchemy.exc_filters   File 
"/usr/local/lib/python2.7/dist-packages/pymysql/connections.py", line 1302, in 
read
2016-06-08 05:15:22.609 18530 ERROR oslo_db.sqlalchemy.exc_filters 
first_packet = self.connection._read_packet()
2016-06-08 05:15:22.609 18530 ERROR oslo_db.sqlalchemy.exc_filters   File 
"/usr/local/lib/python2.7/dist-packages/pymysql/connections.py", line 981, in 
_read_packet
2016-06-08 05:15:22.609 18530 ERROR oslo_db.sqlalchemy.exc_filters 
packet.check_error()
2016-06-08 05:15:22.609 18530 ERROR oslo_db.sqlalchemy.exc_filters   File 
"/usr/local/lib/python2.7/dist-packages/pymysql/connections.py", line 393, in 
check_error
2016-06-08 05:15:22.609 18530 ERROR oslo_db.sqlalchemy.exc_filters 
err.raise_mysql_exception(self._data)
2016-06-08 05:15:22.609 18530 ERROR oslo_db.sqlalchemy.exc_filters   File 
"/usr/local/lib/python2.7/dist-packages/pymysql/err.py", line 120, in 
raise_mysql_exception
2016-06-08 05:15:22.609 18530 ERROR oslo_db.sqlalchemy.exc_filters 
_check_mysql_exception(errinfo)
2016-06-08 05:15:22.609 18530 ERROR oslo_db.sqlalchemy.exc_filters   File 
"/usr/local/lib/python2.7/dist-packages/pymysql/err.py", line 115, in 
_check_mysql_exception
2016-06-08 05:15:22.609 18530 ERROR oslo_db.sqlalchemy.exc_filters raise 
InternalError(errno, errorvalue)
2016-06-08 05:15:22.609 18530 ERROR oslo_db.sqlalchemy.exc_filters 
InternalError: (1305, u'SAVEPOINT sa_savepoint_1 does not exist')
2016-06-08 05:15:22.609 18530 ERROR oslo_db.sqlalchemy.exc_filters 
/usr/local/lib/python2.7/dist-packages/sqlalchemy/util/langhelpers.py:68: 
SAWarning: An exception has occurred during handling of a previous exception.  
The previous exception is:
  (pymysql.err.InternalError) (1213, 
u'Deadlock found when trying to get lock; try restarting transaction') [SQL: 
u'INSERT INTO ipallocations (port_id, ip_address, subnet_id, network_id) VALUES 
(%(port_id)s, %(ip_address)s, %(subnet_id)s, %(network_id)s)'] [parameters: 
{'network_id': u'bab3d364-8fff-43dd-ac4b-28baf51060e2', 'subnet_id': 
'0e4d20e8-9f7a-420c-947c-913e8fca99b1', 'port_id': 
u'856f7fe5-65b4-45f3-a3ea-e99bc2e9c1cb', 'ip_address': 
'2003::f816:3eff:fe5b:1efb'}]


This is a known issue with how mysql handles the savepoints on errors: 
https://bitbucket.org/zzzeek/sqlalchemy/issues/2696/misleading-exception-triggered-on


So we need to look for this particular exception in our retry wrapper since the 
real cause is likely a deadlock error.
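
A minimal sketch of the idea (not the actual neutron/oslo.db code): treat
the "SAVEPOINT ... does not exist" DBError as retriable, because on MySQL
it usually masks a deadlock that already destroyed the savepoint, and the
whole transaction should simply be restarted.

    from oslo_db import exception as db_exc

    def _is_lost_savepoint_error(exc):
        # matches errors like "(1305, u'SAVEPOINT sa_savepoint_1 does not exist')"
        return (isinstance(exc, db_exc.DBError)
                and 'SAVEPOINT' in str(exc)
                and 'does not exist' in str(exc))

    def is_retriable(exc):
        return isinstance(exc, db_exc.DBDeadlock) or _is_lost_savepoint_error(exc)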

** Affects: neutron
 Importance: Undecided
 Assignee: Kevin Benton (kevinbenton)
 Status: New

** Changed in: neutron
 Assignee: (unassigned) => Kevin Benton (kevinbenton)

-- 
You received this bug notification because you are a member of Yahoo!
Engineering 

[Yahoo-eng-team] [Bug 1590117] Re: Service plugin class' get_plugin_type should be a classmethod

2016-06-08 Thread YAMAMOTO Takashi
https://review.openstack.org/#/c/326867/

** Also affects: networking-midonet
   Importance: Undecided
   Status: New

** Changed in: networking-midonet
   Importance: Undecided => Low

** Changed in: networking-midonet
   Status: New => In Progress

** Changed in: networking-midonet
Milestone: None => 2.0.0

** Changed in: networking-midonet
 Assignee: (unassigned) => YAMAMOTO Takashi (yamamoto)

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1590117

Title:
  Service plugin class' get_plugin_type should be a classmethod

Status in networking-midonet:
  In Progress
Status in neutron:
  Fix Released

Bug description:
  There isn't any reason to have it as an instance method, as it is only
  returning a constant.

  $ git grep 'def get_plugin_type('
  neutron/extensions/metering.py:def get_plugin_type(self):
  neutron/extensions/qos.py:def get_plugin_type(self):
  neutron/extensions/segment.py:def get_plugin_type(self):
  neutron/extensions/tag.py:def get_plugin_type(self):
  neutron/services/auto_allocate/plugin.py:def get_plugin_type(self):
  neutron/services/flavors/flavors_plugin.py:def get_plugin_type(self):
  neutron/services/l3_router/l3_router_plugin.py:def get_plugin_type(self):
  neutron/services/network_ip_availability/plugin.py:def 
get_plugin_type(self):
  neutron/services/service_base.py:def get_plugin_type(self):
  neutron/services/timestamp/timestamp_plugin.py:def get_plugin_type(self):
  neutron/tests/functional/pecan_wsgi/utils.py:def get_plugin_type(self):
  neutron/tests/unit/api/test_extensions.py:def 
get_plugin_type(self):
  neutron/tests/unit/api/test_extensions.py:def 
get_plugin_type(self):
  neutron/tests/unit/dummy_plugin.py:def get_plugin_type(self):
  neutron/tests/unit/extensions/test_flavors.py:def get_plugin_type(self):
  neutron/tests/unit/extensions/test_l3.py:def get_plugin_type(self):
  neutron/tests/unit/extensions/test_router_availability_zone.py:def 
get_plugin_type(self):
  neutron/tests/unit/extensions/test_segment.py:def get_plugin_type(self):
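
  An illustrative sketch (hypothetical plugin and constant, not actual
  neutron code): with a classmethod the constant can be obtained without
  instantiating the plugin.

      DUMMY_SERVICE_TYPE = 'dummy-service'

      class DummyServicePlugin(object):

          @classmethod
          def get_plugin_type(cls):
              return DUMMY_SERVICE_TYPE

      assert DummyServicePlugin.get_plugin_type() == 'dummy-service'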

To manage notifications about this bug go to:
https://bugs.launchpad.net/networking-midonet/+bug/1590117/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1590117] Re: Service plugin class' get_plugin_type should be a classmethod

2016-06-08 Thread OpenStack Infra
Reviewed:  https://review.openstack.org/326716
Committed: 
https://git.openstack.org/cgit/openstack/neutron/commit/?id=c40403eee0e4775f7ed089c91d4e2e075818a179
Submitter: Jenkins
Branch:master

commit c40403eee0e4775f7ed089c91d4e2e075818a179
Author: Brandon Logan 
Date:   Tue Jun 7 14:29:36 2016 -0500

Make service plugins' get_plugin_type classmethods

Any service plugin that implements the get_plugin_type method should
make it a classmethod.  It should not need to be instantiated to
retrieve the simple constant it returns in every case.

Change-Id: Ia3a1237a5e07169ebc9378b1cd4188085e20d71c
Closes-Bug: #1590117


** Changed in: neutron
   Status: In Progress => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1590117

Title:
  Service plugin class' get_plugin_type should be a classmethod

Status in networking-midonet:
  New
Status in neutron:
  Fix Released

Bug description:
  There isn't any reason to have it as an instance method, as it is only
  returning a constant.

  $ git grep 'def get_plugin_type('
  neutron/extensions/metering.py:def get_plugin_type(self):
  neutron/extensions/qos.py:def get_plugin_type(self):
  neutron/extensions/segment.py:def get_plugin_type(self):
  neutron/extensions/tag.py:def get_plugin_type(self):
  neutron/services/auto_allocate/plugin.py:def get_plugin_type(self):
  neutron/services/flavors/flavors_plugin.py:def get_plugin_type(self):
  neutron/services/l3_router/l3_router_plugin.py:def get_plugin_type(self):
  neutron/services/network_ip_availability/plugin.py:def 
get_plugin_type(self):
  neutron/services/service_base.py:def get_plugin_type(self):
  neutron/services/timestamp/timestamp_plugin.py:def get_plugin_type(self):
  neutron/tests/functional/pecan_wsgi/utils.py:def get_plugin_type(self):
  neutron/tests/unit/api/test_extensions.py:def 
get_plugin_type(self):
  neutron/tests/unit/api/test_extensions.py:def 
get_plugin_type(self):
  neutron/tests/unit/dummy_plugin.py:def get_plugin_type(self):
  neutron/tests/unit/extensions/test_flavors.py:def get_plugin_type(self):
  neutron/tests/unit/extensions/test_l3.py:def get_plugin_type(self):
  neutron/tests/unit/extensions/test_router_availability_zone.py:def 
get_plugin_type(self):
  neutron/tests/unit/extensions/test_segment.py:def get_plugin_type(self):

To manage notifications about this bug go to:
https://bugs.launchpad.net/networking-midonet/+bug/1590117/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp