[Yahoo-eng-team] [Bug 1518200] [NEW] instance not destroyed after evacuate

2015-11-19 Thread lvdongbing
Public bug reported:

After an instance is successfully evacuated to a new host and the old host's
nova-compute is then started, the old instance is not destroyed as expected.
See the following code:
https://github.com/openstack/nova/blob/stable/liberty/nova/compute/manager.py#L817
nova-compute reads the migration records from the database to find the evacuated
instances and then destroys them. It filters migrations by status 'accepted'.
https://github.com/openstack/nova/blob/stable/liberty/nova/compute/manager.py#L2715
After an instance is successfully evacuated, the status of the migration changes
from 'accepted' to 'done'.
So I think we should change the filter from 'accepted' to 'done' when querying
the migration records.
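
A minimal, self-contained sketch of the proposed change (plain dicts stand in
for Migration records and the helper name is made up for illustration; this is
not nova code):

    def evacuated_migrations(migrations, host):
        # Select migrations that left this host and have already completed,
        # i.e. status 'done' rather than 'accepted'.
        return [m for m in migrations
                if m['source_compute'] == host and m['status'] == 'done']

    migrations = [
        {'source_compute': 'host1', 'status': 'done'},
        {'source_compute': 'host1', 'status': 'accepted'},
    ]
    print(evacuated_migrations(migrations, 'host1'))  # only the 'done' record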

** Affects: nova
 Importance: Undecided
 Assignee: lvdongbing (dbcocle)
 Status: New

** Changed in: nova
 Assignee: (unassigned) => lvdongbing (dbcocle)

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1518200

Title:
  instance not destroyed after evacuate

Status in OpenStack Compute (nova):
  New

Bug description:
  After an instance is successfully evacuated to a new host and the old host's
  nova-compute is then started, the old instance is not destroyed as expected.
  See the following code:
  https://github.com/openstack/nova/blob/stable/liberty/nova/compute/manager.py#L817
  nova-compute reads the migration records from the database to find the
  evacuated instances and then destroys them. It filters migrations by status
  'accepted'.
  https://github.com/openstack/nova/blob/stable/liberty/nova/compute/manager.py#L2715
  After an instance is successfully evacuated, the status of the migration
  changes from 'accepted' to 'done'.
  So I think we should change the filter from 'accepted' to 'done' when
  querying the migration records.

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1518200/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1447322] Re: Attempting to reactivate a queued image returns a 403

2015-11-19 Thread Abhishek Kekane
As per discussion in glance meeting [1] this bug should be marked as
Invalid.

[1]
http://eavesdrop.openstack.org/meetings/glance/2015/glance.2015-11-19-14.01.log.html

** Changed in: glance
   Status: New => Invalid

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to Glance.
https://bugs.launchpad.net/bugs/1447322

Title:
  Attempting to reactivate a queued image returns a 403

Status in Glance:
  Invalid

Bug description:
  Overview:
  Attempting to reactivate a queued image (one without an image file) returns
  a "403 Forbidden - Not allowed to reactivate image in status 'queued'".

  Steps to reproduce:
  1) Register a new image as user
  2) Without uploading an image file, reactivate the image as admin via:
  POST /images//actions/reactivate
  3) Notice that a "403 Forbidden - Not allowed to reactivate image in status
  'queued'" is returned

  Expected:
  Per the spec, a 400 response with the same message should currently be
  returned, but this is expected to change to a 409

  Actual:
  A 403 response is returned

To manage notifications about this bug go to:
https://bugs.launchpad.net/glance/+bug/1447322/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1518178] [NEW] Replace neutron-specific LengthStrOpt with oslo.config StrOpt max_length option

2015-11-19 Thread Akihiro Motoki
Public bug reported:

max_length option was added to StrOpt in oslo.config 2.7.0.
Neutron-specific LengthStrOpt in neutron/agent/common/config.py can be replaced 
with StrOpt now.
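
For illustration, a minimal example of what the replacement could look like
(the option name here is hypothetical; the real options live in
neutron/agent/common/config.py, and oslo.config >= 2.7.0 is assumed):

    from oslo_config import cfg

    # Hypothetical option; demonstrates StrOpt's built-in max_length
    # validation, which makes the neutron-specific LengthStrOpt unnecessary.
    example_opts = [
        cfg.StrOpt('example_name', max_length=255,
                   help='A string option limited to 255 characters.'),
    ]

    CONF = cfg.CONF
    CONF.register_opts(example_opts)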

** Affects: neutron
 Importance: Low
 Assignee: Akihiro Motoki (amotoki)
 Status: New

** Changed in: neutron
   Importance: Medium => Low

** Summary changed:

- Replace neutron-specific LengthStrOpt with oslo.config StrOpt max_length 
optionn
+ Replace neutron-specific LengthStrOpt with oslo.config StrOpt max_length 
option

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1518178

Title:
  Replace neutron-specific LengthStrOpt with oslo.config StrOpt
  max_length option

Status in neutron:
  New

Bug description:
  max_length option was added to StrOpt in oslo.config 2.7.0.
  Neutron-specific LengthStrOpt in neutron/agent/common/config.py can be 
replaced with StrOpt now.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1518178/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1241027] Re: Intermittent Selenium unit test timeout error

2015-11-19 Thread Launchpad Bug Tracker
[Expired for OpenStack Dashboard (Horizon) because there has been no
activity for 60 days.]

** Changed in: horizon
   Status: Incomplete => Expired

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1241027

Title:
  Intermittent Selenium unit test timeout error

Status in OpenStack Dashboard (Horizon):
  Expired

Bug description:
  I have the following error *SOMETIMES* (eg: sometimes it does work,
  sometimes it doesn't):

  This is surprising, because the python-selenium, which is non-free,
  isn't installed in my environment, and we were supposed to have a
  patch to not use it if it was detected it wasn't there.

  Since there's a 2-second timeout, it probably happens when my server
  is busy. I would suggest first trying to increase this timeout to
  something like 5 seconds.

  ERROR: test suite for 
  --
  Traceback (most recent call last):
File "/usr/lib/python2.7/dist-packages/nose/suite.py", line 227, in run
  self.tearDown()
File "/usr/lib/python2.7/dist-packages/nose/suite.py", line 350, in
  tearDown
  self.teardownContext(ancestor)
File "/usr/lib/python2.7/dist-packages/nose/suite.py", line 366, in
  teardownContext
  try_run(context, names)
File "/usr/lib/python2.7/dist-packages/nose/util.py", line 469, in try_run
  return func()
File
  
"/home/zigo/sources/openstack/havana/horizon/build-area/horizon-2013.2~rc3/horizon/test/helpers.py",
  line 179, in tearDownClass
  super(SeleniumTestCase, cls).tearDownClass()
File "/usr/lib/python2.7/dist-packages/django/test/testcases.py", line
  1170, in tearDownClass
  cls.server_thread.join()
File "/usr/lib/python2.7/dist-packages/django/test/testcases.py", line
  1094, in join
  self.httpd.shutdown()
File "/usr/lib/python2.7/dist-packages/django/test/testcases.py", line
  984, in shutdown
  "Failed to shutdown the live test server in 2 seconds. The "
  RuntimeError: Failed to shutdown the live test server in 2 seconds. The
  server might be stuck or generating a slow response.

  In the same way, there's this one, which must be related (or rather,
  caused by the previous error?):

  ERROR: test suite for 
  --
  Traceback (most recent call last):
File "/usr/lib/python2.7/dist-packages/nose/suite.py", line 208, in run
  self.setUp()
File "/usr/lib/python2.7/dist-packages/nose/suite.py", line 291, in setUp
  self.setupContext(ancestor)
File "/usr/lib/python2.7/dist-packages/nose/suite.py", line 314, in
  setupContext
  try_run(context, names)
File "/usr/lib/python2.7/dist-packages/nose/util.py", line 469, in try_run
  return func()
File
  
"/home/zigo/sources/openstack/havana/horizon/build-area/horizon-2013.2~rc3/horizon/test/helpers.py",
  line 173, in setUpClass
  super(SeleniumTestCase, cls).setUpClass()
File "/usr/lib/python2.7/dist-packages/django/test/testcases.py", line
  1160, in setUpClass
  raise cls.server_thread.error
  WSGIServerException: [Errno 98] Address already in use

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1241027/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1476264] Re: Cannot delete resources in remote services once project is deleted

2015-11-19 Thread Adam Young
This is not a problem with the current policy/approach. The approach to
fixing bug 968696 will also ensure this continues to work.

** Changed in: keystone
   Status: In Progress => Invalid

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Identity (keystone).
https://bugs.launchpad.net/bugs/1476264

Title:
  Cannot delete resources in remote services once project is deleted

Status in OpenStack Identity (keystone):
  Invalid

Bug description:
  Steps to reproduce:

  Create project

  Assign non-admin role to user

  As the non-admin user, go to Glance and create an image

  As admin, delete the project

  As the non-admin user, attempt to delete the image: it cannot be deleted

  If the policy requires a scoped token, even admin cannot delete the
  image.

  This has the effect of forcing "admin somewhere is admin everywhere"

To manage notifications about this bug go to:
https://bugs.launchpad.net/keystone/+bug/1476264/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1282676] Re: Error 500 when trying to set empty description with LDAP

2015-11-19 Thread Adam Young
We have deprecated the LDAP project back end.  Even for identity, we are
focusing on read-only, not read-write, support.  Please reopen if this is
still an issue.

** Changed in: keystone
   Status: Incomplete => Won't Fix

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Identity (keystone).
https://bugs.launchpad.net/bugs/1282676

Title:
  Error 500 when trying to set empty description with LDAP

Status in OpenStack Identity (keystone):
  Won't Fix

Bug description:
  When trying to update the project description with an empty string,
  Keystone answers with an error 500. I'm using Devstack set up with the
  LDAP backend (including assignment) and unfortunately, I'm not
  familiar enough with LDAP to determine if the problem might be in the
  configuration elsewhere.

  The issue is particularly noticeable when using Horizon because when
  trying to e.g. assign a user to a project, all the project-related
  fields are also updated.

  How to reproduce:
  1. Get a valid token: openstack --os-identity-api-version 3 token-create
  2. Try to update an existing project by setting the description to "":

   curl -i -X PATCH 
http://192.168.100.219:35357/v3/projects/2b3f7fa5eadb4ee2bef569fee399efe4 -H 
"X-Auth-Token: $TOKEN" -H "Content-Type: application/json" -d '{"project": 
{"description": ""}}'
  HTTP/1.1 500 Internal Server Error
  Vary: X-Auth-Token
  Content-Type: application/json
  Content-Length: 222
  Date: Thu, 20 Feb 2014 16:11:25 GMT

  {"error": {"message": "An unexpected error prevented the server from
  fulfilling your request. {'info': 'description: value #0 invalid per
  syntax', 'desc': 'Invalid syntax'}", "code": 500, "title": "Internal
  Server Error"}}

  
  Keystone logs:

  2014-02-20 15:55:05.121 DEBUG keystone.common.ldap.core [-] LDAP bind: 
dn=cn=Manager,dc=openstack,dc=org from (pid=9341) simple_bind_s 
/opt/stack/keystone/keystone/common/ld
  ap/core.py:555
  2014-02-20 15:55:05.125 DEBUG keystone.common.ldap.core [-] LDAP modify: 
dn=cn=2b3f7fa5eadb4ee2bef569fee399efe4,ou=Projects,dc=openstack,dc=org, 
modlist=[(0, 'description', 
  [''])] from (pid=9341) modify_s 
/opt/stack/keystone/keystone/common/ldap/core.py:650
  2014-02-20 15:55:05.126 DEBUG keystone.common.ldap.core [-] LDAP unbind from 
(pid=9341) unbind_s /opt/stack/keystone/keystone/common/ldap/core.py:559
  2014-02-20 15:55:05.126 DEBUG keystone.common.ldap.core [-] LDAP unbind from 
(pid=9341) unbind_s /opt/stack/keystone/keystone/common/ldap/core.py:559
  2014-02-20 15:55:05.126 ERROR keystone.common.wsgi [-] {'info': 'description: 
value #0 invalid per syntax', 'desc': 'Invalid syntax'}
  2014-02-20 15:55:05.126 TRACE keystone.common.wsgi Traceback (most recent 
call last):
  2014-02-20 15:55:05.126 TRACE keystone.common.wsgi   File 
"/opt/stack/keystone/keystone/common/wsgi.py", line 211, in __call__
  2014-02-20 15:55:05.126 TRACE keystone.common.wsgi result = 
method(context, **params)
  2014-02-20 15:55:05.126 TRACE keystone.common.wsgi   File 
"/opt/stack/keystone/keystone/common/controller.py", line 131, in inner
  2014-02-20 15:55:05.126 TRACE keystone.common.wsgi return f(self, 
context, *args, **kwargs)
  2014-02-20 15:55:05.126 TRACE keystone.common.wsgi   File 
"/opt/stack/keystone/keystone/assignment/controllers.py", line 414, in 
update_project
  2014-02-20 15:55:05.126 TRACE keystone.common.wsgi ref = 
self.assignment_api.update_project(project_id, project)
  2014-02-20 15:55:05.126 TRACE keystone.common.wsgi   File 
"/opt/stack/keystone/keystone/notifications.py", line 73, in wrapper
  2014-02-20 15:55:05.126 TRACE keystone.common.wsgi result = f(*args, 
**kwargs)
  2014-02-20 15:55:05.126 TRACE keystone.common.wsgi   File 
"/opt/stack/keystone/keystone/assignment/core.py", line 97, in update_project
  2014-02-20 15:55:05.126 TRACE keystone.common.wsgi ret = 
self.driver.update_project(tenant_id, tenant)
  2014-02-20 15:55:05.126 TRACE keystone.common.wsgi   File 
"/opt/stack/keystone/keystone/assignment/backends/ldap.py", line 83, in 
update_project
  2014-02-20 15:55:05.126 TRACE keystone.common.wsgi return 
self._set_default_domain(self.project.update(tenant_id, tenant))
  2014-02-20 15:55:05.126 TRACE keystone.common.wsgi   File 
"/opt/stack/keystone/keystone/assignment/backends/ldap.py", line 488, in update
  2014-02-20 15:55:05.126 TRACE keystone.common.wsgi return 
super(ProjectApi, self).update(project_id, values, old_obj)
  2014-02-20 15:55:05.126 TRACE keystone.common.wsgi   File 
"/opt/stack/keystone/keystone/common/ldap/core.py", line 784, in update
  2014-02-20 15:55:05.126 TRACE keystone.common.wsgi object_id, values, 
old_obj)
  2014-02-20 15:55:05.126 TRACE keystone.common.wsgi   File 
"/opt/stack/keystone/keystone/common/ldap/core.py", line 453, in update
  2014-02-20 15:55:05.126 TRACE keystone.common.wsgi 
conn.modify_s(self._id_to_dn(object_id), modlist)
  2014-02-20 15:55:

[Yahoo-eng-team] [Bug 1425174] Re: explicit unscoped token request does not match spec

2015-11-19 Thread Adam Young
Was fixed in commit

98732367e384b89c9ff9dd632be870e774083b94

** Changed in: keystone
   Status: In Progress => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Identity (keystone).
https://bugs.launchpad.net/bugs/1425174

Title:
  explicit unscoped token request does not match spec

Status in OpenStack Identity (keystone):
  Fix Released

Bug description:
  Spec states:

  http://git.openstack.org/cgit/openstack/keystone-specs/tree/api/v3/identity-api-v3.rst#n1779

  A user may explicitly request an unscoped token by setting
  the "scope" value of the token request to the string "unscoped."

  
  However the code actually tests:

  scope_data['unscoped'] = {}

  which generates a dictionary, not a string.

  In this case, the spec should change to match the code.
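
  For illustration, the two shapes side by side (a sketch of the request
  payload only, not keystone code):

      # What the spec text describes: scope is the literal string "unscoped".
      spec_style = {'auth': {'scope': 'unscoped'}}

      # What the code actually accepts and tests for: an 'unscoped' key
      # whose value is an empty dict.
      code_style = {'auth': {'scope': {'unscoped': {}}}}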

To manage notifications about this bug go to:
https://bugs.launchpad.net/keystone/+bug/1425174/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1517146] Re: RFE: Linux Bridge vhost-user support via fast path detection

2015-11-19 Thread Armando Migliaccio
The way I understand this is that you'd want the linuxbridge agent to
support a custom environment where you run proprietary technology. I can
see the appeal: right now you are only making small changes to the
upstream LB agent to allow your solution to work with the upstream tools
(without forking). But this strategy is a slippery slope and it gives you
the least flexibility, because later on you might realize that more is
needed and more changes have to go upstream... when is this going to end,
and where do we draw the line?

For this reason, I think it is dangerous to contemplate the idea that an
open source tool designed with certain assumptions in mind (stock
upstream vanilla components) can work with proprietary (accelerated)
components.

Right now, I believe this is a distraction from all the other initiatives
that revolve around LB and OVS, and for this reason I am against it.


** Changed in: neutron
   Status: In Progress => Won't Fix

** Changed in: neutron
 Assignee: Maxime Leroy (maxime-leroy) => (unassigned)

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1517146

Title:
  RFE: Linux Bridge vhost-user support via fast path detection

Status in networking-6wind:
  New
Status in neutron:
  Won't Fix

Bug description:
  Fast path technology is a user-space stack for high-performance packet
  processing that offloads the Linux networking functions: IPv4/IPv6 routing,
  Linux bridge, iptables, conntrack...

  To benefit from the offloading, a VM needs to use vhost-user instead
  of vhost-net backend for virtio interfaces.

  As a consequence, we have:

  - a forked ML2 linux bridge mechanism driver: 
https://github.com/openstack/networking-6wind/blob/master/networking_6wind/ml2_drivers/linuxbridge/mech_driver/mech_lb_fp.py
  - a forked linux agent: 
https://github.com/openstack/networking-6wind/blob/master/networking_6wind/ml2_drivers/linuxbridge/agent/lb_fp_neutron_agent.py.

  Problem Description
  ===

  We need to maintain a forked version of the ML2 Linux bridge mechanism
  driver and also one of the Linux bridge agent, which is not a proper
  design.

  From the user's point of view, having to install a specific mechanism
  driver and an agent to benefit from the fast path offloading is an extra
  operation. We should avoid adding extra operations, for ease of use.

  Proposed Change
  ===

  LinuxBridge Agent
  -

  The linux bridge agent shall detect whether a fast path offload is
  enabled, and it shall report it to the mechanism driver with the use
  of agent_state.

  For this capability, a new field will be added in
  agent_state.configuration: fp_offload.

  ML2 LinuxBridge
  ---

  The mechanism driver LinuxBridge in the try_bind_segment_for_agent
  will:

  - set the vif_type to LINUXBRIDGE if agent['configuration']['fp_offload'] is 
False
  - set the vif_type to VHOSTUSER if agent['configuration']['fp_offload'] is 
True

  Specific vif_details needs to be added for vhost-user:

  - vhost_user_socket (i.e '/tmp/usv-')
  - vhost_user_fp_plug should be set to True to create a tap netdevice with a
  vhostuser socket

  Note: The vhost_user_fp_plug is a modification under review in Nova.
  See: https://review.openstack.org/#/c/245369/
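
  A rough standalone sketch of the binding choice described above (the
  vif_type constants and the agent dict layout are simplified for
  illustration, and the 'configuration' key name follows the wording of this
  report; this is not the actual neutron driver code):

      VIF_TYPE_LINUXBRIDGE = 'bridge'
      VIF_TYPE_VHOST_USER = 'vhostuser'

      def pick_vif_type(agent):
          # The agent reports fp_offload via agent_state; the mechanism
          # driver uses it to choose between the two vif_types.
          if agent.get('configuration', {}).get('fp_offload'):
              return VIF_TYPE_VHOST_USER
          return VIF_TYPE_LINUXBRIDGE

      print(pick_vif_type({'configuration': {'fp_offload': True}}))   # vhostuser
      print(pick_vif_type({'configuration': {'fp_offload': False}}))  # bridge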

To manage notifications about this bug go to:
https://bugs.launchpad.net/networking-6wind/+bug/1517146/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1518141] [NEW] Some special characters used in display_name cause issues.

2015-11-19 Thread Zbynek Nop
Public bug reported:

One of our Asian users created a new instance with the Roman numeral two in
its name. It looks similar to "II" but it is a single character.
This caused us issues with the CLI: trying to list instances with "nova list"
etc. did not work. It is just a small issue, nothing major; we renamed the
instance and everything is working fine.

** Affects: cinder
 Importance: Undecided
 Status: New

** Affects: nova
 Importance: Undecided
 Status: New

** Attachment added: "numeral.png"
   
https://bugs.launchpad.net/bugs/1518141/+attachment/4522333/+files/numeral.png

** Also affects: nova
   Importance: Undecided
   Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1518141

Title:
  Some special characters used in display_name cause issues.

Status in Cinder:
  New
Status in OpenStack Compute (nova):
  New

Bug description:
  One of our Asian users created a new instance with the Roman numeral two in
  its name. It looks similar to "II" but it is a single character.
  This caused us issues with the CLI: trying to list instances with "nova
  list" etc. did not work. It is just a small issue, nothing major; we renamed
  the instance and everything is working fine.

To manage notifications about this bug go to:
https://bugs.launchpad.net/cinder/+bug/1518141/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1518134] [NEW] Allow reassociate for associated FIPs

2015-11-19 Thread Brad Pokorny
Public bug reported:

The Compute -> Access & Security -> Floating IPs tab should provide the
Associate button for FIPs that are already associated.  The nova CLI
allows a user to associate an already associated FIP, with the behavior
being that the FIP is transferred from one VM to another.

$ openstack --debug ip floating add [Already associated IP] [VM to reassign the 
FIP to]

REQ: curl -g -i -X POST https://nova-api.example.com/v2/[project 
ID]/servers/[Instance ID]/action -H "User-Agent: python-novaclient" -H 
"Content-Type: application/json" -H "Accept: application/json" -H 
"X-Auth-Token: {SHA1} [token hash]" -d '{"addFloatingIp": {"address": "[IP 
address]"}}'
"POST /v2/[project ID]/servers/[Instance ID]/action HTTP/1.1" 202 0



The single API call to nova associates the IP with the new VM and disassociates
it from the old VM.  The CLI behavior is especially useful for service IPs,
where switching an IP from one VM to another quickly is important.

** Affects: horizon
 Importance: Undecided
 Assignee: Brad Pokorny (bpokorny)
 Status: New

** Changed in: horizon
 Assignee: (unassigned) => Brad Pokorny (bpokorny)

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1518134

Title:
  Allow reassociate for associated FIPs

Status in OpenStack Dashboard (Horizon):
  New

Bug description:
  The Compute -> Access & Security -> Floating IPs tab should provide
  the Associate button for FIPs that are already associated.  The nova
  CLI allows a user to associate an already associated FIP, with the
  behavior being that the FIP is transferred from one VM to another.

  $ openstack --debug ip floating add [Already associated IP] [VM to reassign 
the FIP to]
  
  REQ: curl -g -i -X POST https://nova-api.example.com/v2/[project 
ID]/servers/[Instance ID]/action -H "User-Agent: python-novaclient" -H 
"Content-Type: application/json" -H "Accept: application/json" -H 
"X-Auth-Token: {SHA1} [token hash]" -d '{"addFloatingIp": {"address": "[IP 
address]"}}'
  "POST /v2/[project ID]/servers/[Instance ID]/action HTTP/1.1" 202 0
  

  
  The single API call to nova associates the IP with the new VM and
  disassociates it from the old VM.  The CLI behavior is especially useful for
  service IPs, where switching an IP from one VM to another quickly is
  important.

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1518134/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1518133] [NEW] Horizon page-header Margin needs to be smaller

2015-11-19 Thread Diana Whitten
Public bug reported:

The Bootstrap page-header has a giant top margin, which we remove for
the 'default' theme.  However, when you use a different theme, it's
GIANT!  We should make that style global instead of specific to
'default'.

** Affects: horizon
 Importance: Undecided
 Assignee: Diana Whitten (hurgleburgler)
 Status: In Progress

** Changed in: horizon
 Assignee: (unassigned) => Diana Whitten (hurgleburgler)

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1518133

Title:
  Horizon page-header Margin needs to be smaller

Status in OpenStack Dashboard (Horizon):
  In Progress

Bug description:
  The Bootstrap page-header has a giant top margin, which we remove for
  the 'default' theme.  However, when you use a different theme, it's
  GIANT!  We should make that style global instead of specific to
  'default'.

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1518133/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1518123] [NEW] Horizon navbar-brand should not contain top and bottom padding

2015-11-19 Thread Diana Whitten
Public bug reported:

When using a theme other than 'default', a custom logo that is too big
won't be sized to fit inside the header exactly.  The 'default' theme
handles this.  The style just needs to be moved outside of the 'default'
theme to be global.

** Affects: horizon
 Importance: Undecided
 Assignee: Diana Whitten (hurgleburgler)
 Status: In Progress

** Changed in: horizon
 Assignee: (unassigned) => Diana Whitten (hurgleburgler)

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1518123

Title:
  Horizon navbar-brand should not contain top and bottom padding

Status in OpenStack Dashboard (Horizon):
  In Progress

Bug description:
  When using a theme other than 'default', a custom logo that is too big
  won't be sized to fit inside the header exactly.  The 'default' theme
  handles this.  The style just needs to be moved outside of the 'default'
  theme to be global.

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1518123/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1240163] Re: Can't store a PKI token with a large catalog

2015-11-19 Thread Adam Young
Due to a security issue with PKI tokens, we are going to stop supporting
PKI and we will move people on to Fernet as a replacement.  Thus, no new
features will be implemented for PKI tokens.

** Changed in: keystone
   Importance: High => Wishlist

** Changed in: keystone
   Status: Triaged => Won't Fix

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Identity (keystone).
https://bugs.launchpad.net/bugs/1240163

Title:
  Can't store a PKI token with a large catalog

Status in OpenStack Identity (keystone):
  Won't Fix
Status in python-keystoneclient:
  In Progress

Bug description:
  It seems that when you have a sufficiently large catalog, hashing of
  the v3 token ID fails, so the token cannot be stored to the DB:

  Basically when the catalog gets sufficiently large, the assumption
  here about impractically large tokens proves bad:

  https://github.com/openstack/keystone/blob/master/keystone/common/cms.py#L108

  So token[:3] != PKI_ANS1_PREFIX, which means we don't hash the ID and
  just return the unhashed token ID; in my case I'm seeing token[:3] ==
  'MIJ', not 'MII', which is assumed to be the prefix of the token.

  https://github.com/openstack/keystone/blob/master/keystone/common/cms.py#L174

  This results in an error like this, and a failure to store the token,
  even though it was created OK.

  2013-10-15 18:24:45.671 29796 WARNING keystone.common.wsgi [-] String
  length exceeded.The length of string '' exceeded
  the limit of column id(CHAR(64)).

  From:
  
https://github.com/openstack/keystone/blob/master/keystone/common/sql/core.py#L87

  I hit this issue because I had some duplicate endpoints in my
  environment, but it seems to be a more general problem, which could
  happen anytime you have a sufficiently large number of catalog
  entries.
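
  A small standalone sketch of the prefix check described above (the constant
  name follows the report; md5 is used as a stand-in hash purely for
  illustration, this is not the keystone code):

      import hashlib

      PKI_ANS1_PREFIX = 'MII'  # prefix the code assumes every PKI token has

      def hash_if_pki(token_id):
          # With a sufficiently large catalog the token starts with 'MIJ'
          # instead of 'MII', so this check fails, the id is returned
          # unhashed, and the oversized string later overflows the CHAR(64)
          # id column.
          if token_id[:3] == PKI_ANS1_PREFIX:
              return hashlib.md5(token_id.encode('utf-8')).hexdigest()
          return token_id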

To manage notifications about this bug go to:
https://bugs.launchpad.net/keystone/+bug/1240163/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1453274] Re: libvirt: resume instance with utf-8 name results in UnicodeDecodeError

2015-11-19 Thread Alan Pevec
** Changed in: nova/juno
   Status: Fix Committed => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1453274

Title:
  libvirt: resume instance with utf-8 name results in UnicodeDecodeError

Status in OpenStack Compute (nova):
  Fix Released
Status in OpenStack Compute (nova) juno series:
  Fix Released
Status in OpenStack Compute (nova) kilo series:
  Fix Released

Bug description:
  This bug is very similar to
  https://bugs.launchpad.net/nova/+bug/1388386.

  Resuming a server that has a unicode name after suspending it results
  in:

  2015-05-08 15:22:30.148 4370 INFO nova.compute.manager 
[req-ac919325-aa2d-422c-b679-5f05ecca5d42 
0688b01e6439ca32d698d20789d52169126fb41fb1a4ddafcebb97d854e836c9 
6dfced8dd0df4d4d98e4a0db60526c8d - - -] [instance: 
12371aa8-889d-4333-8fab-61a13f87a547] Resuming
  2015-05-08 15:22:31.651 4370 ERROR nova.compute.manager 
[req-ac919325-aa2d-422c-b679-5f05ecca5d42 
0688b01e6439ca32d698d20789d52169126fb41fb1a4ddafcebb97d854e836c9 
6dfced8dd0df4d4d98e4a0db60526c8d - - -] [instance: 
12371aa8-889d-4333-8fab-61a13f87a547] Setting instance vm_state to ERROR
  2015-05-08 15:22:31.651 4370 TRACE nova.compute.manager [instance: 
12371aa8-889d-4333-8fab-61a13f87a547] Traceback (most recent call last):
  2015-05-08 15:22:31.651 4370 TRACE nova.compute.manager [instance: 
12371aa8-889d-4333-8fab-61a13f87a547]   File 
"/usr/lib/python2.7/site-packages/nova/compute/manager.py", line 6427, in 
_error_out_instance_on_exception
  2015-05-08 15:22:31.651 4370 TRACE nova.compute.manager [instance: 
12371aa8-889d-4333-8fab-61a13f87a547] yield
  2015-05-08 15:22:31.651 4370 TRACE nova.compute.manager [instance: 
12371aa8-889d-4333-8fab-61a13f87a547]   File 
"/usr/lib/python2.7/site-packages/nova/compute/manager.py", line 4371, in 
resume_instance
  2015-05-08 15:22:31.651 4370 TRACE nova.compute.manager [instance: 
12371aa8-889d-4333-8fab-61a13f87a547] block_device_info)
  2015-05-08 15:22:31.651 4370 TRACE nova.compute.manager [instance: 
12371aa8-889d-4333-8fab-61a13f87a547]   File 
"/usr/lib/python2.7/site-packages/nova/virt/libvirt/driver.py", line 2234, in 
resume
  2015-05-08 15:22:31.651 4370 TRACE nova.compute.manager [instance: 
12371aa8-889d-4333-8fab-61a13f87a547] vifs_already_plugged=True)
  2015-05-08 15:22:31.651 4370 TRACE nova.compute.manager [instance: 
12371aa8-889d-4333-8fab-61a13f87a547]   File 
"/usr/lib/python2.7/site-packages/powervc_nova/virt/powerkvm/driver.py", line 
2061, in _create_domain_and_network
  2015-05-08 15:22:31.651 4370 TRACE nova.compute.manager [instance: 
12371aa8-889d-4333-8fab-61a13f87a547] disk_info=disk_info)
  2015-05-08 15:22:31.651 4370 TRACE nova.compute.manager [instance: 
12371aa8-889d-4333-8fab-61a13f87a547]   File 
"/usr/lib/python2.7/site-packages/nova/virt/libvirt/driver.py", line 4391, in 
_create_domain_and_network
  2015-05-08 15:22:31.651 4370 TRACE nova.compute.manager [instance: 
12371aa8-889d-4333-8fab-61a13f87a547] power_on=power_on)
  2015-05-08 15:22:31.651 4370 TRACE nova.compute.manager [instance: 
12371aa8-889d-4333-8fab-61a13f87a547]   File 
"/usr/lib/python2.7/site-packages/nova/virt/libvirt/driver.py", line 4322, in 
_create_domain
  2015-05-08 15:22:31.651 4370 TRACE nova.compute.manager [instance: 
12371aa8-889d-4333-8fab-61a13f87a547] LOG.error(err)
  2015-05-08 15:22:31.651 4370 TRACE nova.compute.manager [instance: 
12371aa8-889d-4333-8fab-61a13f87a547]   File 
"/usr/lib/python2.7/site-packages/oslo_utils/excutils.py", line 85, in __exit__
  2015-05-08 15:22:31.651 4370 TRACE nova.compute.manager [instance: 
12371aa8-889d-4333-8fab-61a13f87a547] six.reraise(self.type_, self.value, 
self.tb)
  2015-05-08 15:22:31.651 4370 TRACE nova.compute.manager [instance: 
12371aa8-889d-4333-8fab-61a13f87a547]   File 
"/usr/lib/python2.7/site-packages/nova/virt/libvirt/driver.py", line 4305, in 
_create_domain
  2015-05-08 15:22:31.651 4370 TRACE nova.compute.manager [instance: 
12371aa8-889d-4333-8fab-61a13f87a547] err = _LE('Error defining a domain 
with XML: %s') % xml
  2015-05-08 15:22:31.651 4370 TRACE nova.compute.manager [instance: 
12371aa8-889d-4333-8fab-61a13f87a547] UnicodeDecodeError: 'ascii' codec can't 
decode byte 0xc3 in position 297: ordinal not in range(128)
  2015-05-08 15:22:31.651 4370 TRACE nova.compute.manager [instance: 
12371aa8-889d-4333-8fab-61a13f87a547]

  The _create_domain() method has the following line:

  err = _LE('Error defining a domain with XML: %s') % xml

  which fails with the UnicodeDecodeError because the xml object has
  utf-8 encoding.  The fix is to wrap the xml object in
  oslo.utils.encodeutils.safe_decode for the error message.
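
  A minimal sketch of the suggested fix (assumes oslo.utils is installed; this
  is an illustration, not the exact nova patch):

      from oslo_utils import encodeutils

      def format_define_error(xml):
          # Decode the (possibly UTF-8 encoded byte string) domain XML before
          # interpolating it, so building the message cannot raise
          # UnicodeDecodeError.
          return u'Error defining a domain with XML: %s' % \
              encodeutils.safe_decode(xml, errors='replace')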

  I'm seeing the issue on Kilo, but it is likely an issue on Juno as
  well.

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1453274/+subscript

[Yahoo-eng-team] [Bug 1518110] [NEW] Launch Instance Wizard - Security Groups Available table count not working

2015-11-19 Thread Cindy Lu
Public bug reported:

Angular Launch Instance Wizard > Security Group Step:

The Available table is acting strangely.  Please take a look at the
Available table in the attached screenshot.

The default security group is selected by default, but it still shows up
in the Available table, along with a 'No available items' row, so there
are two rows.

Also, if I have more than one security group, the Available item count
is incorrect.  If I try to allocate multiple groups, they don't show up in
the Allocated table.  Opening the browser console shows these errors:

Duplicates in a repeater are not allowed. Use 'track by' expression to
specify unique keys. Repeater: row in ctrl.tableData.displayedAllocated
track by row.id, Duplicate key: 1, Duplicate value:
{"description":"default","id":1,"name":"default","rules":[],"tenant_id":"485eee44635643f0a60fe38d4e0f9044","security_group_rules":[null]}

** Affects: horizon
 Importance: Undecided
 Status: New

** Attachment added: "Screen Shot 2015-11-19 at 1.52.44 PM.png"
   
https://bugs.launchpad.net/bugs/1518110/+attachment/4522318/+files/Screen%20Shot%202015-11-19%20at%201.52.44%20PM.png

** Summary changed:

- Launch Instance Wizard - Security Groups No Available msg
+ Launch Instance Wizard - Security Groups Available table count not working

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1518110

Title:
  Launch Instance Wizard - Security Groups Available table count not
  working

Status in OpenStack Dashboard (Horizon):
  New

Bug description:
  Angular Launch Instance Wizard > Security Group Step:

  The Available table is acting strangely.  Please take a look at the
  Available table in the attached screenshot.

  The default security group is selected by default, but it still shows up
  in the Available table, along with a 'No available items' row, so there
  are two rows.

  Also, if I have more than one security group, the Available item count
  is incorrect.  If I try to allocate multiple groups, they don't show up
  in the Allocated table.  Opening the browser console shows these
  errors:

  Duplicates in a repeater are not allowed. Use 'track by' expression to
  specify unique keys. Repeater: row in
  ctrl.tableData.displayedAllocated track by row.id, Duplicate key: 1,
  Duplicate value:
  
{"description":"default","id":1,"name":"default","rules":[],"tenant_id":"485eee44635643f0a60fe38d4e0f9044","security_group_rules":[null]}

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1518110/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1293480] Re: Rebooting a host didn't restart instances because a libvirt lifecycle event changed the instance's power_state to shutdown

2015-11-19 Thread Alan Pevec
** Changed in: nova/juno
   Status: Fix Committed => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1293480

Title:
  Rebooting a host didn't restart instances because a libvirt lifecycle
  event changed the instance's power_state to shutdown

Status in OpenStack Compute (nova):
  Fix Released
Status in OpenStack Compute (nova) juno series:
  Fix Released

Bug description:
  1. The libvirt driver can receive libvirt lifecycle events (registered in
  https://github.com/openstack/nova/blob/master/nova/compute/manager.py#L1004)
  and handles them in
  https://github.com/openstack/nova/blob/master/nova/compute/manager.py#L969.
  That means shutting down a domain sends out a shutdown lifecycle event and
  nova-compute will try to sync the instance's power_state.

  2. When the compute service is rebooted, it tries to restart the instances
  that were running before the reboot:
  https://github.com/openstack/nova/blob/master/nova/compute/manager.py#L911.
  The compute service only checks the power_state in the database, and that
  value can be changed by the sequence described in 3. This means that after
  a host reboot, some instances that were running before the reboot can't be
  restarted.

  3. When the host is rebooted, the code path is roughly: 1) libvirt-guests
  shuts down all the domains, 2) libvirt sends out the lifecycle events,
  3) nova-compute receives them, 4) saves power_state 'shutoff' in the db,
  and 5) then tries to stop the instance. The compute service may be killed
  at any step. In my test environment, with two running instances, only one
  instance was restarted successfully; the other had power_state set to
  'shutoff' and task_state set to 'power off' in step 4), so it can't pass
  the check in
  https://github.com/openstack/nova/blob/master/nova/compute/manager.py#L911
  and won't be restarted.


  Not sure whether this is a bug; I wonder if there is a solution for this.

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1293480/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1332917] Re: Deadlock when deleting from ipavailabilityranges

2015-11-19 Thread Alan Pevec
** Changed in: neutron/juno
   Status: Fix Committed => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1332917

Title:
  Deadlock when deleting from ipavailabilityranges

Status in neutron:
  Fix Released
Status in neutron juno series:
  Fix Released

Bug description:
  Traceback:
   TRACE neutron.api.v2.resource Traceback (most recent call last):
   TRACE neutron.api.v2.resource   File 
"/opt/stack/new/neutron/neutron/api/v2/resource.py", line 87, in resource
   TRACE neutron.api.v2.resource result = method(request=request, **args)
   TRACE neutron.api.v2.resource   File 
"/opt/stack/new/neutron/neutron/api/v2/base.py", line 477, in delete
   TRACE neutron.api.v2.resource obj_deleter(request.context, id, **kwargs)
   TRACE neutron.api.v2.resource   File 
"/opt/stack/new/neutron/neutron/plugins/ml2/plugin.py", line 608, in 
delete_subnet
   TRACE neutron.api.v2.resource break
   TRACE neutron.api.v2.resource   File 
"/usr/local/lib/python2.7/dist-packages/sqlalchemy/orm/session.py", line 463, 
in __exit__
   TRACE neutron.api.v2.resource self.rollback()
   TRACE neutron.api.v2.resource   File 
"/usr/local/lib/python2.7/dist-packages/sqlalchemy/util/langhelpers.py", line 
57, in __exit__
   TRACE neutron.api.v2.resource compat.reraise(exc_type, exc_value, exc_tb)
   TRACE neutron.api.v2.resource   File 
"/usr/local/lib/python2.7/dist-packages/sqlalchemy/orm/session.py", line 460, 
in __exit__
   TRACE neutron.api.v2.resource self.commit()
   TRACE neutron.api.v2.resource   File 
"/usr/local/lib/python2.7/dist-packages/sqlalchemy/orm/session.py", line 370, 
in commit
   TRACE neutron.api.v2.resource self._prepare_impl()
   TRACE neutron.api.v2.resource   File 
"/usr/local/lib/python2.7/dist-packages/sqlalchemy/orm/session.py", line 350, 
in _prepare_impl
   TRACE neutron.api.v2.resource self.session.flush()
   TRACE neutron.api.v2.resource   File 
"/opt/stack/new/neutron/neutron/openstack/common/db/sqlalchemy/session.py", 
line 444, in _wrap
   TRACE neutron.api.v2.resource _raise_if_deadlock_error(e, 
self.bind.dialect.name)
   TRACE neutron.api.v2.resource   File 
"/opt/stack/new/neutron/neutron/openstack/common/db/sqlalchemy/session.py", 
line 427, in _raise_if_deadlock_error
   TRACE neutron.api.v2.resource raise 
exception.DBDeadlock(operational_error)
   TRACE neutron.api.v2.resource DBDeadlock: (OperationalError) (1213, 
'Deadlock found when trying to get lock; try restarting transaction') 'DELETE 
FROM ipavailabilityranges WHERE ipavailabilityranges.allocation_pool_id = %s 
AND ipavailabilityranges.first_ip = %s AND ipavailabilityranges.last_ip = %s' 
('b19b08b6-90f2-43d6-bfe1-9cbe6e0e1d93', '10.100.0.2', '10.100.0.14')

  http://logs.openstack.org/21/76021/12/check/check-tempest-dsvm-
  neutron-
  full/7577c27/logs/screen-q-svc.txt.gz?level=TRACE#_2014-06-21_18_39_47_122

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1332917/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1305897] Re: Hyper-V driver failing with dynamic memory due to virtual NUMA

2015-11-19 Thread Alan Pevec
** Changed in: nova/juno
   Status: Fix Committed => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1305897

Title:
  Hyper-V driver failing with dynamic memory due to virtual NUMA

Status in OpenStack Compute (nova):
  Fix Released
Status in OpenStack Compute (nova) juno series:
  Fix Released

Bug description:
  Starting with Windows Server 2012, Hyper-V provides the Virtual NUMA
  functionality. This option is enabled by default in the VMs depending
  on the underlying hardware.

  However, it's not compatible with dynamic memory. The Hyper-V driver
  is not aware of this constraint and it's not possible to boot new VMs
  if the nova.conf parameter 'dynamic_memory_ratio' > 1.

  The error in the logs looks like the following:
  2014-04-09 16:33:43.615 18600 TRACE nova.virt.hyperv.vmops HyperVException: 
WMI job failed with status 10. Error details: Failed to modify device 'Memory'.
  2014-04-09 16:33:43.615 18600 TRACE nova.virt.hyperv.vmops
  2014-04-09 16:33:43.615 18600 TRACE nova.virt.hyperv.vmops Dynamic memory and 
virtual NUMA cannot be enabled on the same virtual machine. - 
'instance-0001c90c' failed to modify device 'Memory'. (Virtual machine ID 
F4CB4E4D-CA06-4149-9FA3-CAD2E0C6CEDA)
  2014-04-09 16:33:43.615 18600 TRACE nova.virt.hyperv.vmops
  2014-04-09 16:33:43.615 18600 TRACE nova.virt.hyperv.vmops Dynamic memory and 
virtual NUMA cannot be enabled on the virtual machine 'instance-0001c90c' 
because the features are mutually exclusive. (Virtual machine ID 
F4CB4E4D-CA06-4149-9FA3-CAD2E0C6CEDA) - Error code: 32773

  In order to solve this problem, it's required to change the field
  'VirtualNumaEnabled' in 'Msvm_VirtualSystemSettingData' (option
  available only in v2 namespace) while creating the VM when dynamic
  memory is used.

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1305897/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1327218] Re: Volume detach failure because of invalid bdm.connection_info

2015-11-19 Thread Alan Pevec
** Changed in: nova/juno
   Status: Fix Committed => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1327218

Title:
  Volume detach failure because of invalid bdm.connection_info

Status in OpenStack Compute (nova):
  Fix Released
Status in OpenStack Compute (nova) juno series:
  Fix Released
Status in nova package in Ubuntu:
  Fix Released
Status in nova source package in Trusty:
  Confirmed

Bug description:
  Example of this here:

  http://logs.openstack.org/33/97233/1/check/check-grenade-
  dsvm/f7b8a11/logs/old/screen-n-cpu.txt.gz?level=TRACE#_2014-06-02_14_13_51_125

     File "/opt/stack/old/nova/nova/compute/manager.py", line 4153, in 
_detach_volume
   connection_info = jsonutils.loads(bdm.connection_info)
     File "/opt/stack/old/nova/nova/openstack/common/jsonutils.py", line 164, 
in loads
   return json.loads(s)
     File "/usr/lib/python2.7/json/__init__.py", line 326, in loads
   return _default_decoder.decode(s)
     File "/usr/lib/python2.7/json/decoder.py", line 366, in decode
   obj, end = self.raw_decode(s, idx=_w(s, 0).end())
   TypeError: expected string or buffer

  and this was in grenade with stable/icehouse nova commit 7431cb9

  There's nothing unusual about the test which triggers this - it simply
  attaches a volume to an instance, waits for it to show up in the
  instance, and then tries to detach it.

  logstash query for this:

    message:"Exception during message handling" AND message:"expected
  string or buffer" AND message:"connection_info =
  jsonutils.loads(bdm.connection_info)" AND tags:"screen-n-cpu.txt"

  but it seems to be very rare
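
  One defensive way to avoid the TypeError would be to guard the decode,
  sketched here with the stdlib json module (an illustration only, not the
  upstream fix):

      import json

      def load_connection_info(raw):
          # bdm.connection_info can be unset for a mapping that never
          # finished attaching; json.loads(None) is what raises "expected
          # string or buffer" in the trace above.
          if not raw:
              return {}
          return json.loads(raw)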

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1327218/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1313573] Re: nova backup fails to backup an instance with attached volume (libvirt, LVM backed)

2015-11-19 Thread Alan Pevec
** Changed in: nova/juno
   Status: Fix Committed => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1313573

Title:
  nova backup fails to backup an instance with attached volume (libvirt,
  LVM backed)

Status in OpenStack Compute (nova):
  Fix Released
Status in OpenStack Compute (nova) juno series:
  Fix Released

Bug description:
  Description of problem:
  An instance has an attached volume. After running the command:
  # nova backup   snapshot  
  an image is created (type backup), but its status is stuck in 'queued'.

  Version-Release number of selected component (if applicable):
  openstack-nova-compute-2013.2.3-6.el6ost.noarch
  openstack-nova-conductor-2013.2.3-6.el6ost.noarch
  openstack-nova-novncproxy-2013.2.3-6.el6ost.noarch
  openstack-nova-scheduler-2013.2.3-6.el6ost.noarch
  openstack-nova-api-2013.2.3-6.el6ost.noarch
  openstack-nova-cert-2013.2.3-6.el6ost.noarch

  python-glance-2013.2.3-2.el6ost.noarch
  python-glanceclient-0.12.0-2.el6ost.noarch
  openstack-glance-2013.2.3-2.el6ost.noarch

  How reproducible:
  100%

  Steps to Reproduce:
  1. launch an instance from a volume.
  2. backup the instance.

  
  Actual results:
  The backup is stuck in queued state.

  Expected results:
  the backup should be available as an image in Glance.

  Additional info:
  The nova-compute error & the glance logs are attached.

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1313573/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1187102] Re: quantum-ns-metadata-proxy listens on external interfaces too

2015-11-19 Thread Alan Pevec
** Changed in: neutron/juno
   Status: Fix Committed => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1187102

Title:
  quantum-ns-metadata-proxy listens on external interfaces too

Status in neutron:
  Fix Released
Status in neutron juno series:
  Fix Released
Status in OpenStack Security Advisory:
  Invalid

Bug description:
  Running Grizzly 2013.1 on Ubuntu 12.04. Three nodes: controller,
  network and compute.

  netnode# ip netns exec qrouter-7a44de32-3ac0-4f3e-92cc-1a37d8211db8 netstat 
-anp
  Active Internet connections (servers and established)
  Proto Recv-Q Send-Q Local Address   Foreign Address State 
  PID/Program name
  tcp0  0 0.0.0.0:96970.0.0.0:*   LISTEN
  18462/python

  So this router is uplinked to an external network:

  netnode# ip netns exec qrouter-7a44de32-3ac0-4f3e-92cc-1a37d8211db8 ip -4 a
  14: lo:  mtu 16436 qdisc noqueue state UNKNOWN
  inet 127.0.0.1/8 scope host lo
  23: qr-123f9b7f-43:  mtu 1500 qdisc 
noqueue state UNKNOWN
  inet 172.17.17.1/24 brd 172.17.17.255 scope global qr-123f9b7f-43
  24: qg-c8a6a6cd-6d:  mtu 1500 qdisc 
noqueue state UNKNOWN
  inet 192.168.101.2/24 brd 192.168.101.255 scope global qg-c8a6a6cd-6d

  Now from outside can do:

  $ nmap 192.168.101.2 -p 9697
  Starting Nmap 6.00 ( http://nmap.org ) at 2013-06-03 13:45 IST
  Nmap scan report for 192.168.101.2
  Host is up (0.0018s latency).
  PORT STATE SERVICE
  9697/tcp open  unknown

  As a test I tried changing namespace_proxy.py so it would not bind to
  0.0.0.0

  proxy.start(handler, self.port, host='127.0.0.1')

  but the metadata stopped working. In iptables this rule is being hit:

    -A quantum-l3-agent-PREROUTING -d 169.254.169.254/32 -p tcp -m tcp
  --dport 80 -j REDIRECT --to-ports 9697

  I'm guessing the intention of that rule is to also change the destination
  address to 127.0.0.1, as there is this:

    -A quantum-l3-agent-INPUT -d 127.0.0.1/32 -p tcp -m tcp --dport 9697
  -j ACCEPT

  but the counters show that this rule is not being hit. Anyway the
  default policy for INPUT is ACCEPT.

  From the iptables man page:
    REDIRECT
     "... It redirects the packet to the machine itself by changing the 
destination IP to the primary address  of  the  incoming  interface
     (locally-generated packets are mapped to the 127.0.0.1 address).  ..."

  so the primary address of the incoming interface is 172.17.17.1, not
  127.0.0.1.

  So I manually deleted the "-j REDIRECT --to-ports 9697" and added "-j
  DNAT --to-destination 127.0.0.1:9697", but that didn't work - it seems
  it is not possible:
  http://serverfault.com/questions/351816/dnat-to-127-0-0-1-with-iptables-destination-access-control-for-transparent-soc

  So I tried changing the ns proxy to listen on 172.17.17.1. I think
  this is the one and only address it should bind to anyway.

  proxy.start(handler, self.port, host='172.17.17.1') # hardwire as
  a test

  Stopped the l3-agent, killed the quantum-ns-metadata-proxy and
  restarted the l3-agent. But the ns proxy gave an error:

  Stderr: 'cat: /proc/10850/cmdline: No such file or directory\n'
  2013-06-03 15:05:18ERROR [quantum.wsgi] Unable to listen on 
172.17.17.1:9697
  Traceback (most recent call last):
    File "/usr/lib/python2.7/dist-packages/quantum/wsgi.py", line 72, in start
  backlog=backlog)
    File "/usr/lib/python2.7/dist-packages/eventlet/convenience.py", line 38, 
in listen
  sock.bind(addr)
    File "/usr/lib/python2.7/socket.py", line 224, in meth
  return getattr(self._sock,name)(*args)
  error: [Errno 99] Cannot assign requested address

  The l3-agent.log shows the agent deleted the port qr-123f9b7f-43 at
  15:05:10 and did not recreate it until 15:05:19 - ie a second too late
  for the ns proxy. From looking at the code it seems the l3-agent
  spawns the ns proxy just before it plugs its ports. I was able to
  start the ns proxy manually with the command line from the l3-agent
  log, and the metadata worked and was not reachable from outside.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1187102/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1296414] Re: quotas not updated when periodic tasks or startup finish deletes

2015-11-19 Thread Alan Pevec
** Changed in: nova/juno
   Status: Fix Committed => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1296414

Title:
  quotas not updated when periodic tasks or startup finish deletes

Status in OpenStack Compute (nova):
  In Progress
Status in OpenStack Compute (nova) juno series:
  Fix Released
Status in OpenStack Compute (nova) kilo series:
  Fix Released

Bug description:
  There are a couple of cases in the compute manager where we don't pass
  reservations to _delete_instance().  For example, one of them is
  cleaning up when we see a delete that is stuck in DELETING.

  The only place we ever update quotas as part of delete should be when
  the instance DB record is removed. If something is stuck in DELETING,
  it means that the quota was not updated.  We should make sure we're
  always updating the quota when the instance DB record is removed.

  Soft delete kinda throws a wrench in this, though, because I think you
  want soft deleted instances to not count against quotas -- yet their
  DB records will still exist. In this case, it seems we may have a race
  condition in _delete_instance() -> _complete_deletion() where if the
  instance somehow was SOFT_DELETED, quotas would have updated twice
  (once in soft_delete and once in _complete_deletion).

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1296414/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1288039] Re: live-migration cinder boot volume target_lun id incorrect

2015-11-19 Thread Alan Pevec
** Changed in: nova/juno
   Status: Fix Committed => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1288039

Title:
  live-migration cinder boot volume target_lun id incorrect

Status in OpenStack Compute (nova):
  Fix Released
Status in OpenStack Compute (nova) juno series:
  Fix Released
Status in OpenStack Compute (nova) kilo series:
  Fix Released

Bug description:
  When nova goes to cleanup _post_live_migration on the source host, the
  block_device_mapping has incorrect data.

  I can reproduce this 100% of the time with a cinder iSCSI backend,
  such as 3PAR.

  This is a Fresh install on 2 new servers with no attached storage from Cinder 
and no VMs.
  I create a cinder volume from an image. 
  I create a VM booted from that Cinder volume.  That vm shows up on host1 with 
a LUN id of 0.
  I live migrate that vm.   The vm moves to host 2 and has a LUN id of 0.   The 
LUN on host1 is now gone.

  I create another cinder volume from image.
  I create another VM booted from the 2nd cinder volume.  The vm shows up on 
host1 with a LUN id of 0.  
  I live migrate that vm.  The VM moves to host 2 and has a LUN id of 1.  
  _post_live_migrate is called on host1 to clean up, and gets failures, because 
it's asking cinder to delete the volume
  on host1 with a target_lun id of 1, which doesn't exist.  It's supposed to be 
asking cinder to detach LUN 0.

  First migrate
  HOST2
  2014-03-04 19:02:07.870 WARNING nova.compute.manager 
[req-24521cb1-8719-4bc5-b488-73a4980d7110 admin admin] pre_live_migrate: 
{'block_device_mapping': [{'guest_format': None, 'boot_index': 0, 
'mount_device': u'vda', 'connection_info': {u'd
  river_volume_type': u'iscsi', 'serial': 
u'83fb6f13-905e-45f8-a465-508cb343b721', u'data': {u'target_discovered': True, 
u'qos_specs': None, u'target_iqn': 
u'iqn.2000-05.com.3pardata:20810002ac00383d', u'target_portal': 
u'10.10.120.253:3260'
  , u'target_lun': 0, u'access_mode': u'rw'}}, 'disk_bus': u'virtio', 
'device_type': u'disk', 'delete_on_termination': False}]}
  HOST1
  2014-03-04 19:02:16.775 WARNING nova.compute.manager [-] 
_post_live_migration: block_device_info {'block_device_mapping': 
[{'guest_format': None, 'boot_index': 0, 'mount_device': u'vda', 
'connection_info': {u'driver_volume_type': u'iscsi',
   u'serial': u'83fb6f13-905e-45f8-a465-508cb343b721', u'data': 
{u'target_discovered': True, u'qos_specs': None, u'target_iqn': 
u'iqn.2000-05.com.3pardata:20810002ac00383d', u'target_portal': 
u'10.10.120.253:3260', u'target_lun': 0, u'access_mode': u'rw'}}, 'disk_bus': 
u'virtio', 'device_type': u'disk', 'delete_on_termination': False}]}



  Second Migration
  This is in _post_live_migration on the host1.  It calls libvirt's driver.py 
post_live_migration with the volume information returned from the new volume on 
host2, hence the target_lun = 1.   It should be calling libvirt's driver.py to 
clean up the original volume on the source host, which has a target_lun = 0.
  2014-03-04 19:24:51.626 WARNING nova.compute.manager [-] 
_post_live_migration: block_device_info {'block_device_mapping': 
[{'guest_format': None, 'boot_index': 0, 'mount_device': u'vda', 
'connection_info': {u'driver_volume_type': u'iscsi', u'serial': 
u'f0087595-804d-4bdb-9bad-0da2166313ea', u'data': {u'target_discovered': True, 
u'qos_specs': None, u'target_iqn': 
u'iqn.2000-05.com.3pardata:20810002ac00383d', u'target_portal': 
u'10.10.120.253:3260', u'target_lun': 1, u'access_mode': u'rw'}}, 'disk_bus': 
u'virtio', 'device_type': u'disk', 'delete_on_termination': False}]}

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1288039/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1361186] Re: nova service-delete fails for services on non-child (top) cell

2015-11-19 Thread Alan Pevec
** Changed in: nova/juno
   Status: Fix Committed => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1361186

Title:
  nova service-delete fails for services on non-child (top) cell

Status in OpenStack Compute (nova):
  Fix Released
Status in OpenStack Compute (nova) juno series:
  Fix Released

Bug description:
  Nova service-delete fails for services on non-child (top) cell.

  How to reproduce:

  $ nova --os-username admin service-list

  
++--+-+--+-+---++-+
  | Id | Binary   | Host| Zone | Status 
 | State | Updated_at | Disabled Reason |
  
++--+-+--+-+---++-+
  | region!child@1 | nova-conductor   | region!child@ubuntu | internal | 
enabled | up| 2014-08-18T06:06:56.00 | -   |
  | region!child@2 | nova-compute | region!child@ubuntu | nova | 
enabled | up| 2014-08-18T06:06:55.00 | -   |
  | region!child@3 | nova-cells   | region!child@ubuntu | internal | 
enabled | up| 2014-08-18T06:06:59.00 | -   |
  | region!child@4 | nova-scheduler   | region!child@ubuntu | internal | 
enabled | up| 2014-08-18T06:06:50.00 | -   |
  | region@1   | nova-cells   | region@ubuntu   | internal | 
enabled | up| 2014-08-18T06:06:59.00 | -   |
  | region@2   | nova-cert| region@ubuntu   | internal | 
enabled | up| 2014-08-18T06:06:58.00 | -   |
  | region@3   | nova-consoleauth | region@ubuntu   | internal | 
enabled | up| 2014-08-18T06:06:57.00 | -   |
  
++--+-+--+-+---++-+

  Stop one of the services on top cell (e.g. nova-cert).

  $ nova --os-username admin service-list

  
++--+-+--+-+---++-+
  | Id | Binary   | Host| Zone | Status 
 | State | Updated_at | Disabled Reason |
  
++--+-+--+-+---++-+
  | region!child@1 | nova-conductor   | region!child@ubuntu | internal | 
enabled | up| 2014-08-18T06:09:26.00 | -   |
  | region!child@2 | nova-compute | region!child@ubuntu | nova | 
enabled | up| 2014-08-18T06:09:25.00 | -   |
  | region!child@3 | nova-cells   | region!child@ubuntu | internal | 
enabled | up| 2014-08-18T06:09:19.00 | -   |
  | region!child@4 | nova-scheduler   | region!child@ubuntu | internal | 
enabled | up| 2014-08-18T06:09:20.00 | -   |
  | region@1   | nova-cells   | region@ubuntu   | internal | 
enabled | up| 2014-08-18T06:09:19.00 | -   |
  | region@2   | nova-cert| region@ubuntu   | internal | 
enabled | down  | 2014-08-18T06:08:28.00 | -   |
  | region@3   | nova-consoleauth | region@ubuntu   | internal | 
enabled | up| 2014-08-18T06:09:27.00 | -   |
  
++--+-+--+-+---++-+

  Nova service-delete:
  $ nova --os-username admin service-delete 'region@2'

  Check the request id from nova-api.log:

  2014-08-18 15:10:23.491 INFO nova.osapi_compute.wsgi.server [req-
  e134d915-ad66-41ba-a6f8-33ec51b7daee admin demo] 192.168.101.31
  "DELETE /v2/d66804d2e78549cd8f5efcedd0abecb2/os-services/region@2
  HTTP/1.1" status: 204 len: 179 time: 0.1334069

  Error log in n-cell-region service:

  2014-08-18 15:10:23.464 ERROR nova.cells.messaging 
[req-e134d915-ad66-41ba-a6f8-33ec51b7daee admin demo] Error locating next hop 
for message: 'NoneType' object has no attribute 'count'
  2014-08-18 15:10:23.464 TRACE nova.cells.messaging Traceback (most recent 
call last):
  2014-08-18 15:10:23.464 TRACE nova.cells.messaging   File 
"/opt/stack/nova/nova/cells/messaging.py", line 406, in process
  2014-08-18 15:10:23.464 TRACE nova.cells.messaging next_hop = 
self._get_next_hop()
  2014-08-18 15:10:23.464 TRACE nova.cells.messaging   File 
"/opt/stack/nova/nova/cells/messaging.py", line 361, in _get_next_hop
  2014-08-18 15:10:23.464 TRACE nova.cells.messaging dest_hops = 
target_cell.count(_PATH_CELL_SEP)
  2014-08-18 15:10:23.464 TRACE nova.cells.messaging AttributeError: 'NoneType' 
object has no attribute 'count'

[Yahoo-eng-team] [Bug 1362676] Re: Hyper-V agent doesn't create stateful security group rules

2015-11-19 Thread Alan Pevec
** Changed in: neutron/juno
   Status: Fix Committed => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1362676

Title:
  Hyper-V agent doesn't create stateful security group rules

Status in networking-hyperv:
  Fix Released
Status in neutron:
  New
Status in neutron juno series:
  Fix Released

Bug description:
  Hyper-V agent does not create stateful security group rules (ACLs),
  meaning it doesn't allow any response traffic to pass through.

  For example, the following security group rule:
  {"direction": "ingress", "remote_ip_prefix": null, "protocol": "tcp", 
"port_range_max": 22,  "port_range_min": 22, "ethertype": "IPv4"}
  allows TCP inbound traffic through port 22, but since the Hyper-V agent does 
not add this rule as stateful, the reply traffic is never received unless an 
egress security group rule is specifically added as well.

To manage notifications about this bug go to:
https://bugs.launchpad.net/networking-hyperv/+bug/1362676/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1374473] Re: 500 error on router-gateway-set for DVR on second external network

2015-11-19 Thread Alan Pevec
** Changed in: neutron/juno
   Status: Fix Committed => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1374473

Title:
  500 error on router-gateway-set for DVR on second external network

Status in neutron:
  Fix Released
Status in neutron juno series:
  Fix Released

Bug description:
  Under some circumstances this operation may fail.

  Steps to reproduce:

  1) Run Devstack with DVR *on* (devstack by default creates an external 
network and sets the gateway to the router)
  2) Create an external network
  3) Create a router
  4) Set the gateway to the router
  5) Observe the Internal Server Error

  Expected outcome: the gateway is correctly set.

  This occurs with the latest Juno code. The underlying error is an
  attempted double binding of the router to the L3 agent.

  More details in:

  http://paste.openstack.org/show/115614/

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1374473/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1367189] Re: multipath not working with Storwize backend if CHAP enabled

2015-11-19 Thread Alan Pevec
** Changed in: nova/juno
   Status: Fix Committed => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1367189

Title:
  multipath not working with Storwize backend if CHAP enabled

Status in Cinder:
  Fix Released
Status in Cinder juno series:
  Fix Released
Status in OpenStack Compute (nova):
  Fix Released
Status in OpenStack Compute (nova) juno series:
  Fix Released
Status in os-brick:
  Fix Released

Bug description:
  If I try to attach a volume to a VM while multipath is enabled in
  nova and CHAP is enabled in the Storwize backend, it fails:

  2014-09-09 11:37:14.038 22944 ERROR nova.virt.block_device 
[req-f271874a-9720-4779-96a8-01575641a939 a315717e20174b10a39db36b722325d6 
76d25b1928e7407392a69735a894c7fc] [instance: 
108a81d0-eeb5-49a8-b3eb-e593f44bf897] Driver failed to attach volume 
c460f8b7-0f1d-4657-bdf7-e142ad34a132 at /dev/vdb
  2014-09-09 11:37:14.038 22944 TRACE nova.virt.block_device [instance: 
108a81d0-eeb5-49a8-b3eb-e593f44bf897] Traceback (most recent call last):
  2014-09-09 11:37:14.038 22944 TRACE nova.virt.block_device [instance: 
108a81d0-eeb5-49a8-b3eb-e593f44bf897]   File 
"/usr/lib/python2.7/dist-packages/nova/virt/block_device.py", line 239, in 
attach
  2014-09-09 11:37:14.038 22944 TRACE nova.virt.block_device [instance: 
108a81d0-eeb5-49a8-b3eb-e593f44bf897] device_type=self['device_type'], 
encryption=encryption)
  2014-09-09 11:37:14.038 22944 TRACE nova.virt.block_device [instance: 
108a81d0-eeb5-49a8-b3eb-e593f44bf897]   File 
"/usr/lib/python2.7/dist-packages/nova/virt/libvirt/driver.py", line 1235, in 
attach_volume
  2014-09-09 11:37:14.038 22944 TRACE nova.virt.block_device [instance: 
108a81d0-eeb5-49a8-b3eb-e593f44bf897] disk_info)
  2014-09-09 11:37:14.038 22944 TRACE nova.virt.block_device [instance: 
108a81d0-eeb5-49a8-b3eb-e593f44bf897]   File 
"/usr/lib/python2.7/dist-packages/nova/virt/libvirt/driver.py", line 1194, in 
volume_driver_method
  2014-09-09 11:37:14.038 22944 TRACE nova.virt.block_device [instance: 
108a81d0-eeb5-49a8-b3eb-e593f44bf897] return method(connection_info, *args, 
**kwargs)
  2014-09-09 11:37:14.038 22944 TRACE nova.virt.block_device [instance: 
108a81d0-eeb5-49a8-b3eb-e593f44bf897]   File 
"/usr/lib/python2.7/dist-packages/nova/openstack/common/lockutils.py", line 
249, in inner
  2014-09-09 11:37:14.038 22944 TRACE nova.virt.block_device [instance: 
108a81d0-eeb5-49a8-b3eb-e593f44bf897] return f(*args, **kwargs)
  2014-09-09 11:37:14.038 22944 TRACE nova.virt.block_device [instance: 
108a81d0-eeb5-49a8-b3eb-e593f44bf897]   File 
"/usr/lib/python2.7/dist-packages/nova/virt/libvirt/volume.py", line 280, in 
connect_volume
  2014-09-09 11:37:14.038 22944 TRACE nova.virt.block_device [instance: 
108a81d0-eeb5-49a8-b3eb-e593f44bf897] check_exit_code=[0, 255])[0] \
  2014-09-09 11:37:14.038 22944 TRACE nova.virt.block_device [instance: 
108a81d0-eeb5-49a8-b3eb-e593f44bf897]   File 
"/usr/lib/python2.7/dist-packages/nova/virt/libvirt/volume.py", line 579, in 
_run_iscsiadm_bare
  2014-09-09 11:37:14.038 22944 TRACE nova.virt.block_device [instance: 
108a81d0-eeb5-49a8-b3eb-e593f44bf897] check_exit_code=check_exit_code)
  2014-09-09 11:37:14.038 22944 TRACE nova.virt.block_device [instance: 
108a81d0-eeb5-49a8-b3eb-e593f44bf897]   File 
"/usr/lib/python2.7/dist-packages/nova/utils.py", line 165, in execute
  2014-09-09 11:37:14.038 22944 TRACE nova.virt.block_device [instance: 
108a81d0-eeb5-49a8-b3eb-e593f44bf897] return processutils.execute(*cmd, 
**kwargs)
  2014-09-09 11:37:14.038 22944 TRACE nova.virt.block_device [instance: 
108a81d0-eeb5-49a8-b3eb-e593f44bf897]   File 
"/usr/lib/python2.7/dist-packages/nova/openstack/common/processutils.py", line 
193, in execute
  2014-09-09 11:37:14.038 22944 TRACE nova.virt.block_device [instance: 
108a81d0-eeb5-49a8-b3eb-e593f44bf897] cmd=' '.join(cmd))
  2014-09-09 11:37:14.038 22944 TRACE nova.virt.block_device [instance: 
108a81d0-eeb5-49a8-b3eb-e593f44bf897] ProcessExecutionError: Unexpected error 
while running command.
  2014-09-09 11:37:14.038 22944 TRACE nova.virt.block_device [instance: 
108a81d0-eeb5-49a8-b3eb-e593f44bf897] Command: sudo nova-rootwrap 
/etc/nova/rootwrap.conf iscsiadm -m discovery -t sendtargets -p 
192.168.1.252:3260
  2014-09-09 11:37:14.038 22944 TRACE nova.virt.block_device [instance: 
108a81d0-eeb5-49a8-b3eb-e593f44bf897] Exit code: 5
  2014-09-09 11:37:14.038 22944 TRACE nova.virt.block_device [instance: 
108a81d0-eeb5-49a8-b3eb-e593f44bf897] Stdout: ''
  2014-09-09 11:37:14.038 22944 TRACE nova.virt.block_device [instance: 
108a81d0-eeb5-49a8-b3eb-e593f44bf897] Stderr: 'iscsiadm: Connection to 
Discovery Address 192.168.1.252 closed\niscsiadm: Login I/O error, failed to 
receive a PDU\niscsiadm: retrying discovery login to 192.168.1.252\niscsiadm: 
Connection to Discovery Address 192.

[Yahoo-eng-team] [Bug 1376586] Re: pre_live_migration is missing some disk information in case of block migration

2015-11-19 Thread Alan Pevec
** Changed in: nova/juno
   Status: Fix Committed => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1376586

Title:
  pre_live_migration is missing some disk information in case of block
  migration

Status in OpenStack Compute (nova):
  Fix Released
Status in OpenStack Compute (nova) juno series:
  Fix Released

Bug description:
  The pre_live_migration API is called with a disk retrieved by a call
  to driver.get_instance_disk_info when doing a block migration.
  Unfortunately block device information is not passed, so Nova is
  calling LibvirtDriver._create_images_and_backing with partial
  disk_info.

  As a result, for example when migrating an instance with an NFS volume
  attached, a useless file is created in the instance directory.

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1376586/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1385295] Re: use_syslog=True does not log to syslog via /dev/log anymore

2015-11-19 Thread Alan Pevec
** Changed in: nova/juno
   Status: Fix Committed => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1385295

Title:
  use_syslog=True does not log to syslog via /dev/log anymore

Status in OpenStack Compute (nova):
  Invalid
Status in OpenStack Compute (nova) juno series:
  Fix Released
Status in oslo.log:
  Fix Released
Status in cinder package in Ubuntu:
  Invalid
Status in python-oslo.log package in Ubuntu:
  In Progress

Bug description:
  python-oslo.log SRU:
  [Impact]

   * Nova services not able to write log to syslog

  [Test Case]

   * 1. Set use_syslog to True in nova.conf/cinder.conf
 2. stop rsyslog service
 3. restart nova/cinder services
 4. restart rsyslog service
 5. Log is not written to syslog after rsyslog is brought up.

  [Regression Potential]

   * none

  
  Reproduced on:
  https://github.com/openstack-dev/devstack 
514c82030cf04da742d16582a23cc64962fdbda1
  /opt/stack/keystone/keystone.egg-info/PKG-INFO:Version: 2015.1.dev95.g20173b1
  /opt/stack/heat/heat.egg-info/PKG-INFO:Version: 2015.1.dev213.g8354c98
  /opt/stack/glance/glance.egg-info/PKG-INFO:Version: 2015.1.dev88.g6bedcea
  /opt/stack/cinder/cinder.egg-info/PKG-INFO:Version: 2015.1.dev110.gc105259

  How to reproduce:
  Set
   use_syslog=True
   syslog_log_facility=LOG_SYSLOG
  for Openstack config files and restart processes inside their screens

  Expected:
  Openstack logs logged to syslog as well

  Actual:
  Nothing goes to syslog

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1385295/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1378132] Re: Hard-reboots ignore root_device_name

2015-11-19 Thread Alan Pevec
** Changed in: nova/juno
   Status: Fix Committed => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1378132

Title:
  Hard-reboots ignore root_device_name

Status in OpenStack Compute (nova):
  Fix Released
Status in OpenStack Compute (nova) juno series:
  Fix Released

Bug description:
  Hard-rebooting an instance causes the root_device_name to get
  ignored/reset, which can cause wailing and gnashing of teeth if the
  guest operating system is expecting it to not do that.

  Steps to reproduce:

  1. Stand up a devstack
  2. Load the openrc with admin credentials
  3. glance image-update --property root_device_name=sda SOME_CIRROS_IMAGE
  4. Spawn a cirros instance using the above image. The root filesystem should 
present as being mounted on /dev/sda1, and the libvirt.xml should show the disk 
with a target of "scsi"
  5. Hard-reboot the instance

  Expected Behaviour

  The instance comes back up with the same hardware configuration as it
  had when initially spawned, i.e., with its root filesystem attached to
  a SCSI bus

  Actual Behaviour

  The instance comes back with its root filesystem attached to an IDE
  bus.

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1378132/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1382064] Re: Failure to allocate tunnel id when creating networks concurrently

2015-11-19 Thread Alan Pevec
** Changed in: neutron/juno
   Status: Fix Committed => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1382064

Title:
  Failure to allocate tunnel id when creating networks concurrently

Status in neutron:
  Fix Released
Status in neutron juno series:
  Fix Released
Status in neutron kilo series:
  New

Bug description:
  When multiple networks are created concurrently, the following trace
  is observed:

  WARNING neutron.plugins.ml2.drivers.helpers 
[req-34103ce8-b6d0-459b-9707-a24e369cf9de None] Allocate gre segment from pool 
failed after 10 failed attempts
  DEBUG neutron.context [req-2995f877-e3e6-4b32-bdae-da6295e492a1 None] 
Arguments dropped when creating context: {u'project_name': None, u'tenant': 
None} __init__ /usr/lib/python2.7/dist-packages/neutron/context.py:83
  DEBUG neutron.plugins.ml2.drivers.helpers 
[req-3541998d-44df-468f-b65b-36504e893dfb None] Allocate gre segment from pool, 
attempt 1 failed with segment {'gre_id': 300L} 
allocate_partially_specified_segment 
/usr/lib/python2.7/dist-packages/neutron/plugins/ml2/drivers/helpers.py:138
  DEBUG neutron.context [req-6dcfb91d-2c5b-4e4f-9d81-55ba381ad232 None] 
Arguments dropped when creating context: {u'project_name': None, u'tenant': 
None} __init__ /usr/lib/python2.7/dist-packages/neutron/context.py:83
  ERROR neutron.api.v2.resource [req-34103ce8-b6d0-459b-9707-a24e369cf9de None] 
create failed
  TRACE neutron.api.v2.resource Traceback (most recent call last):
  TRACE neutron.api.v2.resource File 
"/usr/lib/python2.7/dist-packages/neutron/api/v2/resource.py", line 87, in 
resource
  TRACE neutron.api.v2.resource result = method(request=request, **args)
  TRACE neutron.api.v2.resource File 
"/usr/lib/python2.7/dist-packages/neutron/api/v2/base.py", line 448, in create
  TRACE neutron.api.v2.resource obj = obj_creator(request.context, **kwargs)
  TRACE neutron.api.v2.resource File 
"/usr/lib/python2.7/dist-packages/neutron/plugins/ml2/plugin.py", line 497, in 
create_network
  TRACE neutron.api.v2.resource tenant_id)
  TRACE neutron.api.v2.resource File 
"/usr/lib/python2.7/dist-packages/neutron/plugins/ml2/managers.py", line 160, 
in create_network_segments
  TRACE neutron.api.v2.resource segment = self.allocate_tenant_segment(session)
  TRACE neutron.api.v2.resource File 
"/usr/lib/python2.7/dist-packages/neutron/plugins/ml2/managers.py", line 189, 
in allocate_tenant_segment
  TRACE neutron.api.v2.resource segment = 
driver.obj.allocate_tenant_segment(session)
  TRACE neutron.api.v2.resource File 
"/usr/lib/python2.7/dist-packages/neutron/plugins/ml2/drivers/type_tunnel.py", 
line 115, in allocate_tenant_segment
  TRACE neutron.api.v2.resource alloc = 
self.allocate_partially_specified_segment(session)
  TRACE neutron.api.v2.resource File 
"/usr/lib/python2.7/dist-packages/neutron/plugins/ml2/drivers/helpers.py", line 
143, in allocate_partially_specified_segment
  TRACE neutron.api.v2.resource raise 
exc.NoNetworkFoundInMaximumAllowedAttempts()
  TRACE neutron.api.v2.resource NoNetworkFoundInMaximumAllowedAttempts: Unable 
to create the network. No available network found in maximum allowed attempts.
  TRACE neutron.api.v2.resource

  Additional conditions: multiserver deployment and mysql.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1382064/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1387543] Re: [OSSA 2015-015] Resize/delete combo allows to overload nova-compute (CVE-2015-3241)

2015-11-19 Thread Alan Pevec
** Changed in: nova/juno
   Status: Fix Committed => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1387543

Title:
  [OSSA 2015-015] Resize/delete combo allows to overload nova-compute
  (CVE-2015-3241)

Status in OpenStack Compute (nova):
  Fix Released
Status in OpenStack Compute (nova) juno series:
  Fix Released
Status in OpenStack Compute (nova) kilo series:
  Fix Released
Status in OpenStack Security Advisory:
  Fix Released

Bug description:
  If a user creates an instance, resizes it to a larger flavor and then
  deletes that instance, the migration process does not stop. This allows
  the user to repeat the operation many times, causing overload on the
  affected compute nodes beyond the user's quota.

  Affected installation: most drastic effect happens on 'raw-disk'
  instances without live migration. Whole raw disk (full size of the
  flavor) is copied during migration.

  If the user deletes the instance, the rsync/scp process is not terminated
  and keeps the disk backing file open regardless of its removal by
  nova-compute.

  Because rsync/scp of large disks is rather slow, it gives a malicious
  user enough time to repeat that operation a few hundred times, causing
  disk space depletion on the compute nodes, a huge impact on the management
  network, and so on.

  Proposed solution: abort the migration (kill rsync/scp) as soon as the
  instance is deleted.

  Affected installation: Havana, Icehouse, probably Juno (not tested).

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1387543/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1381468] Re: Type conflict in nova/nova/scheduler/filters/trusted_filter.py using attestation_port default value

2015-11-19 Thread Alan Pevec
** Changed in: nova/juno
   Status: Fix Committed => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1381468

Title:
  Type conflict in nova/nova/scheduler/filters/trusted_filter.py using
  attestation_port default value

Status in OpenStack Compute (nova):
  Fix Released
Status in OpenStack Compute (nova) icehouse series:
  Fix Released
Status in OpenStack Compute (nova) juno series:
  Fix Released

Bug description:
  When the trusted filter in the nova scheduler is running with the default
  value of attestation_port:

  cfg.StrOpt('attestation_port', default='8443', help='Attestation
  server port'),

  the _do_request() method in the AttestationService class has this line:

  action_url = "https://%s:%d%s/%s" % (self.host, self.port,
  self.api_url, action_url)

  The default type of attestation_port is therefore a string, but action_url
  formats self.port as an integer (%d), which leads to a type conflict.
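
  A minimal sketch of one possible fix (illustrative only, not the merged
  patch): either register the option as an integer, or cast before
  formatting, so that "%d" receives an int.

  # Illustrative sketch, assuming the option itself is changed:
  cfg.IntOpt('attestation_port', default=8443,
             help='Attestation server port'),

  # Or, alternatively, cast inside _do_request():
  action_url = "https://%s:%d%s/%s" % (self.host, int(self.port),
                                       self.api_url, action_url)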

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1381468/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1379212] Re: Attaching volume to iso instance is failure because of duplicate device name 'hda'.

2015-11-19 Thread Alan Pevec
** Changed in: nova/juno
   Status: Fix Committed => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1379212

Title:
  Attaching volume to iso instance is failure because of duplicate
  device name 'hda'.

Status in OpenStack Compute (nova):
  Fix Released
Status in OpenStack Compute (nova) juno series:
  Fix Released

Bug description:
  I try to attach a volume to iso instance, return code of volume-attach
  api is 200 ok, but the volume can't be attached to instance in fact,
  there are some error messages in nova-compute.log like this
  'libvirtError: Requested operation is not valid: target hda already
  exists'.

  The root device of iso instance is hda, nova-compute should not assign
  hda to cinder volume again.

  The following is reproduce steps:

  1. boot instance from iso image.
  2. create a cinder volume.
  3. try to attach the volume to iso instance.

  Attaching volume is failed, I can find libvirt error in nova-
  compute.log.

  http://paste.openstack.org/show/105144/

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1379212/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1392527] Re: [OSSA 2015-017] Deleting instance while resize instance is running leads to unuseable compute nodes (CVE-2015-3280)

2015-11-19 Thread Alan Pevec
** Changed in: nova/juno
   Status: Fix Committed => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1392527

Title:
  [OSSA 2015-017] Deleting instance while resize instance is running
  leads to unuseable compute nodes (CVE-2015-3280)

Status in OpenStack Compute (nova):
  Fix Released
Status in OpenStack Compute (nova) juno series:
  Fix Released
Status in OpenStack Compute (nova) kilo series:
  Fix Released
Status in OpenStack Security Advisory:
  Fix Released

Bug description:
  Steps to reproduce:
  1) Create a new instance, waiting until its status goes to the ACTIVE state
  2) Call resize API
  3) Delete the instance immediately after the task_state is “resize_migrated” 
or vm_state is “resized”
  4) Repeat 1 through 3 in a loop

  I have kept the attached program running for 4 hours; all instances
  created are deleted (nova list returns an empty list), but I noticed that
  instance directories with names ending in “_resize” are not deleted from
  the instance path of the compute nodes (mainly from the source compute
  nodes where the instance was running before the resize). If I keep this
  program running for a couple more hours (depending on the number of
  compute nodes), it completely uses up the disk of the compute nodes (based
  on the disk_allocation_ratio parameter value). Later, the nova scheduler
  doesn't select these compute nodes for launching new VMs and starts
  reporting the error "No valid hosts found".

  Note: Even the periodic tasks doesn't cleanup these orphan instance
  directories from the instance path.

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1392527/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1383345] Re: PCI-Passthrough : TypeError: pop() takes at most 1 argument (2 given

2015-11-19 Thread Alan Pevec
** Changed in: nova/juno
   Status: Fix Committed => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1383345

Title:
  PCI-Passthrough : TypeError: pop() takes at most 1 argument (2 given

Status in OpenStack Compute (nova):
  Fix Released
Status in OpenStack Compute (nova) juno series:
  Fix Released

Bug description:
  Setting the below causes nova to fail.

  # White list of PCI devices available to VMs. For example:
  # pci_passthrough_whitelist =  [{"vendor_id": "8086",
  # "product_id": "0443"}] (multi valued)
  #pci_passthrough_whitelist=
  pci_passthrough_whitelist=[{"vendor_id":"8086","product_id":"10fb"}]

  Fails with:
  CRITICAL nova [-] TypeError: pop() takes at most 1 argument (2 given) 
  2014-10-17 15:28:59.968 7153 CRITICAL nova [-] TypeError: pop() takes at most 
1 argument (2 given)
  2014-10-17 15:28:59.968 7153 TRACE nova Traceback (most recent call last):
  2014-10-17 15:28:59.968 7153 TRACE nova   File "/usr/bin/nova-compute", line 
10, in 
  2014-10-17 15:28:59.968 7153 TRACE nova sys.exit(main())
  2014-10-17 15:28:59.968 7153 TRACE nova   File 
"/usr/lib/python2.7/site-packages/nova/cmd/compute.py", line 72, in main
  2014-10-17 15:28:59.968 7153 TRACE nova 
db_allowed=CONF.conductor.use_local)
  2014-10-17 15:28:59.968 7153 TRACE nova   File 
"/usr/lib/python2.7/site-packages/nova/service.py", line 275, in create
  2014-10-17 15:28:59.968 7153 TRACE nova db_allowed=db_allowed)
  2014-10-17 15:28:59.968 7153 TRACE nova   File 
"/usr/lib/python2.7/site-packages/nova/service.py", line 148, in __init__
  2014-10-17 15:28:59.968 7153 TRACE nova self.manager = 
manager_class(host=self.host, *args, **kwargs)
  2014-10-17 15:28:59.968 7153 TRACE nova   File 
"/usr/lib/python2.7/site-packages/nova/compute/manager.py", line 631, in 
__init__
  2014-10-17 15:28:59.968 7153 TRACE nova self.driver = 
driver.load_compute_driver(self.virtapi, compute_driver)
  2014-10-17 15:28:59.968 7153 TRACE nova   File 
"/usr/lib/python2.7/site-packages/nova/virt/driver.py", line 1402, in 
load_compute_driver
  2014-10-17 15:28:59.968 7153 TRACE nova virtapi)
  2014-10-17 15:28:59.968 7153 TRACE nova   File 
"/usr/lib/python2.7/site-packages/nova/openstack/common/importutils.py", line 
50, in import_object_ns
  2014-10-17 15:28:59.968 7153 TRACE nova return 
import_class(import_value)(*args, **kwargs)
  2014-10-17 15:28:59.968 7153 TRACE nova   File 
"/usr/lib/python2.7/site-packages/nova/virt/libvirt/driver.py", line 405, in 
__init__
  2014-10-17 15:28:59.968 7153 TRACE nova self.dev_filter = 
pci_whitelist.get_pci_devices_filter()
  2014-10-17 15:28:59.968 7153 TRACE nova   File 
"/usr/lib/python2.7/site-packages/nova/pci/pci_whitelist.py", line 88, in 
get_pci_devices_filter
  2014-10-17 15:28:59.968 7153 TRACE nova return 
PciHostDevicesWhiteList(CONF.pci_passthrough_whitelist)
  2014-10-17 15:28:59.968 7153 TRACE nova   File 
"/usr/lib/python2.7/site-packages/nova/pci/pci_whitelist.py", line 68, in 
__init__
  2014-10-17 15:28:59.968 7153 TRACE nova self.specs = 
self._parse_white_list_from_config(whitelist_spec)
  2014-10-17 15:28:59.968 7153 TRACE nova   File 
"/usr/lib/python2.7/site-packages/nova/pci/pci_whitelist.py", line 49, in 
_parse_white_list_from_config
  2014-10-17 15:28:59.968 7153 TRACE nova spec = 
pci_devspec.PciDeviceSpec(jsonspec)
  2014-10-17 15:28:59.968 7153 TRACE nova   File 
"/usr/lib/python2.7/site-packages/nova/pci/pci_devspec.py", line 132, in 
__init__
  2014-10-17 15:28:59.968 7153 TRACE nova self._init_dev_details()
  2014-10-17 15:28:59.968 7153 TRACE nova   File 
"/usr/lib/python2.7/site-packages/nova/pci/pci_devspec.py", line 137, in 
_init_dev_details
  2014-10-17 15:28:59.968 7153 TRACE nova self.vendor_id = 
details.pop("vendor_id", ANY)

  Changing the config to:
  pci_passthrough_whitelist={"vendor_id":"8086","product_id":"10fb"}

  Fixes the above.

  In Icehouse, PCI passthrough worked when passing a list; in Juno it is
  broken.
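
  The error itself is consistent with the parsed whitelist entry being a list
  rather than a dict. A quick illustration in plain Python (not nova code):

  >>> [{"vendor_id": "8086", "product_id": "10fb"}].pop("vendor_id", None)
  TypeError: pop() takes at most 1 argument (2 given)
  >>> {"vendor_id": "8086", "product_id": "10fb"}.pop("vendor_id", None)
  '8086'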

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1383345/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1398999] Re: Block migrate with attached volumes copies volumes to themselves

2015-11-19 Thread Alan Pevec
** Changed in: nova/juno
   Status: Fix Committed => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1398999

Title:
  Block migrate with attached volumes copies volumes to themselves

Status in OpenStack Compute (nova):
  In Progress
Status in OpenStack Compute (nova) juno series:
  Fix Released
Status in libvirt package in Ubuntu:
  Fix Released
Status in nova package in Ubuntu:
  Triaged
Status in libvirt source package in Trusty:
  Confirmed
Status in nova source package in Trusty:
  Triaged
Status in libvirt source package in Utopic:
  Won't Fix
Status in nova source package in Utopic:
  Won't Fix
Status in libvirt source package in Vivid:
  Confirmed
Status in nova source package in Vivid:
  Triaged
Status in libvirt source package in Wily:
  Fix Released
Status in nova source package in Wily:
  Triaged

Bug description:
  When an instance with attached Cinder volumes is block migrated, the
  Cinder volumes are block migrated along with it. If they exist on
  shared storage, then they end up being copied, over the network, from
  themselves to themselves. At a minimum, this is horribly slow and de-
  sparses a sparse volume; at worst, this could cause massive data
  corruption.

  More details at http://lists.openstack.org/pipermail/openstack-
  dev/2014-June/038152.html

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1398999/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1406486] Re: Suspending an instance fails when using vnic_type=direct

2015-11-19 Thread Alan Pevec
** Changed in: nova/juno
   Status: Fix Committed => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1406486

Title:
  Suspending an instance fails when using vnic_type=direct

Status in OpenStack Compute (nova):
  Fix Released
Status in OpenStack Compute (nova) juno series:
  Fix Released
Status in python-glanceclient:
  New

Bug description:
  When launching an instance with a pre-created port with
  binding:vnic_type='direct', suspending the instance fails with the error
  'NoneType' object has no attribute 'encode'.

  Nova compute log:
  http://paste.openstack.org/show/155141/

  Version
  ==
  openstack-nova-common-2014.2.1-3.el7ost.noarch
  openstack-nova-compute-2014.2.1-3.el7ost.noarch
  python-novaclient-2.20.0-1.el7ost.noarch
  python-nova-2014.2.1-3.el7ost.noarch

  How to Reproduce
  ===
  # neutron port-create tenant1-net1 --binding:vnic-type direct
  # nova boot --flavor m1.small --image rhel7 --nic port-id= vm1
  # nova suspend 
  # nova show 

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1406486/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1399244] Re: rbd resize revert fails

2015-11-19 Thread Alan Pevec
** Changed in: nova/juno
   Status: Fix Committed => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1399244

Title:
  rbd resize revert fails

Status in OpenStack Compute (nova):
  Fix Released
Status in OpenStack Compute (nova) juno series:
  Fix Released

Bug description:
  In Ceph CI, the revert-resize server test is failing.  It appears that
  revert_resize() does not take shared storage into account and deletes
  the orignal volume, which causes the start of the original instance to
  fail.

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1399244/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1407664] Re: Race: instance nw_info cache is updated to empty list because of nova/neutron event mechanism

2015-11-19 Thread Alan Pevec
** Changed in: nova/juno
   Status: Fix Committed => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1407664

Title:
  Race: instance nw_info cache is updated to empty list because of
  nova/neutron event mechanism

Status in OpenStack Compute (nova):
  Fix Released
Status in OpenStack Compute (nova) juno series:
  Fix Released
Status in OpenStack Compute (nova) kilo series:
  Fix Released

Bug description:
  This applies only when the nova/neutron event reporting mechanism is
  enabled.

  Boot instance, like this:
  nova boot --image xxx --flavor xxx --nic port-id=xxx test_vm

  The instance boots successfully, but its nw_info cache is empty.
  This is a probabilistic problem and cannot always be reproduced.

  After analyzing the instance boot and the nova/neutron event mechanism
  workflow, I arrived at the following reproduction timeline:

  1. neutronv2.api.allocate_for_instance is called when booting the instance.
  2. neutronclient.update_port triggers a neutron network_change event.
  3. nova receives the port change event and starts to process it.
  4. instance.get_by_uuid is called in external_instance_event; at this time
  instance.nw_info_cache is empty, because the nw_info cache hadn't yet been
  saved to the db by the instance-boot thread.
  5. The instance-boot thread saves the instance nw_info cache into the db.
  6. The event-processing thread updates the instance nw_info cache to empty.

  I faced this issue in Juno.
  I added some breakpoints in order to reproduce this bug in my devstack.

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1407664/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1408480] Re: PciDevTracker passes context module instead of instance

2015-11-19 Thread Alan Pevec
** Changed in: nova/juno
   Status: Fix Committed => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1408480

Title:
  PciDevTracker passes context module instead of instance

Status in OpenStack Compute (nova):
  Fix Released
Status in OpenStack Compute (nova) juno series:
  Fix Released

Bug description:
  Currently, the code in the PciDevTracker.__init__() method of
  nova/pci/manager.py reads:

  ```
  def __init__(self, node_id=None):
      """Create a pci device tracker.

      If a node_id is passed in, it will fetch pci devices information
      from database, otherwise, it will create an empty devices list
      and the resource tracker will update the node_id information later.
      """

      super(PciDevTracker, self).__init__()
      self.stale = {}
      self.node_id = node_id
      self.stats = stats.PciDeviceStats()
      if node_id:
          self.pci_devs = list(
              objects.PciDeviceList.get_by_compute_node(context, node_id))
      else:
          self.pci_devs = []
      self._initial_instance_usage()
  ```

  The problem is that in the call to
  `objects.PciDeviceList.get_by_compute_node(context, node_id)`, there
  is no local value for the 'context' parameter, so as a result, the
  context module defined in the imports is what is passed.

  Instead, the parameter should be changed to
  `context.get_admin_context()`.
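
  A minimal sketch of that change, applied to the snippet above (illustrative
  only):

  if node_id:
      self.pci_devs = list(
          objects.PciDeviceList.get_by_compute_node(
              context.get_admin_context(), node_id))
  else:
      self.pci_devs = []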

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1408480/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1417745] Re: Cells connecting pool tracking

2015-11-19 Thread Alan Pevec
** Changed in: nova/juno
   Status: Fix Committed => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1417745

Title:
  Cells connecting pool tracking

Status in OpenStack Compute (nova):
  Fix Released
Status in OpenStack Compute (nova) juno series:
  Fix Released

Bug description:
  Cells has an RPC driver for inter-cell communication.  An
  oslo.messaging.Transport is created for each inter-cell message.

  In previous versions of oslo.messaging, connection pool references
  were maintained within the RabbitMQ driver abstraction in
  oslo.messaging.  As of oslo.messaging commit
  f3370da11a867bae287d7f549a671811e8b399ef, the application must
  maintain a single reference to Transport or references to the
  connection pool will be lost.

  The net effect of this is that cells constructs a new broker
  connection pool  (and a connection) on every message sent between
  cells.  This is leaking references to connections.
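
  A minimal illustration of the kind of caching the newer oslo.messaging
  behaviour expects from the application (the helper below is illustrative,
  not the actual cells rpc driver code):

  import oslo_messaging as messaging

  _TRANSPORTS = {}

  def _get_transport(conf, url):
      # Reuse one Transport (and therefore one connection pool) per URL
      # instead of creating a new one for every inter-cell message.
      if url not in _TRANSPORTS:
          _TRANSPORTS[url] = messaging.get_transport(conf, url=url)
      return _TRANSPORTS[url]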

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1417745/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1414065] Re: Nova can lose track of running VM if live migration raises an exception

2015-11-19 Thread Alan Pevec
** Changed in: nova/juno
   Status: Fix Committed => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1414065

Title:
  Nova can lose track of running VM if live migration raises an
  exception

Status in OpenStack Compute (nova):
  Fix Released
Status in OpenStack Compute (nova) juno series:
  Fix Released

Bug description:
  There is a fairly serious bug in VM state handling during live
  migration, with a result that if libvirt raises an error *after* the
  VM has successfully live migrated to the target host, Nova can end up
  thinking the VM is shutoff everywhere, despite it still being active.
  The consequences of this are quite dire as the user can then manually
  start the VM again and corrupt any data in shared volumes and the
  like.

  The fun starts in the _live_migration method in
  nova.virt.libvirt.driver, if the 'migrateToURI2' method fails *after*
  the guest has completed migration.

  At start of migration, we see an event received by Nova for the new
  QEMU process starting on target host

  2015-01-23 15:39:57.743 DEBUG nova.compute.manager [-] [instance:
  12bac45e-aca8-40d1-8f39-941bc6bb59f0] Synchronizing instance power
  state after lifecycle event "Started"; current vm_state: active,
  current task_state: migrating, current DB power_state: 1, VM
  power_state: 1 from (pid=19494) handle_lifecycle_event
  /home/berrange/src/cloud/nova/nova/compute/manager.py:1134

  
  Upon migration completion we see CPUs start running on the target host

  2015-01-23 15:40:02.794 DEBUG nova.compute.manager [-] [instance:
  12bac45e-aca8-40d1-8f39-941bc6bb59f0] Synchronizing instance power
  state after lifecycle event "Resumed"; current vm_state: active,
  current task_state: migrating, current DB power_state: 1, VM
  power_state: 1 from (pid=19494) handle_lifecycle_event
  /home/berrange/src/cloud/nova/nova/compute/manager.py:1134

  And finally an event saying that the QEMU on the source host has
  stopped

  2015-01-23 15:40:03.629 DEBUG nova.compute.manager [-] [instance:
  12bac45e-aca8-40d1-8f39-941bc6bb59f0] Synchronizing instance power
  state after lifecycle event "Stopped"; current vm_state: active,
  current task_state: migrating, current DB power_state: 1, VM
  power_state: 4 from (pid=23081) handle_lifecycle_event
  /home/berrange/src/cloud/nova/nova/compute/manager.py:1134

  
  It is the last event that causes the trouble.  It causes Nova to mark the VM 
as shutoff at this point.

  Normally the '_live_migrate' method would succeed and so Nova would
  then immediately & explicitly mark the guest as running on the target
  host.   If an exception occurs though, this explicit update of VM
  state doesn't happen so Nova considers the guest shutoff, even though
  it is still running :-(

  
  The lifecycle events from libvirt have an associated "reason", so we could 
see that the shutoff event from libvirt corresponds to a migration being 
completed, and so not mark the VM as shutoff in Nova.  We would also have to 
make sure the target host processes the 'resume' event upon migrate completion.
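
  A minimal sketch of that idea (illustrative only, using libvirt-python's
  event constants; not the actual fix):

  import libvirt

  def stopped_due_to_migration(event, detail):
      # Treat a "stopped because it migrated away" event differently from a
      # genuine shutdown, so Nova does not mark the VM as shutoff.
      return (event == libvirt.VIR_DOMAIN_EVENT_STOPPED
              and detail == libvirt.VIR_DOMAIN_EVENT_STOPPED_MIGRATED)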

  A safer approach, though, might be to just mark the VM as in an ERROR
  state if any exception occurs during migration.

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1414065/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1415768] Re: the pci deivce assigned to instance is inconsistent with DB record when restarting nova-compute

2015-11-19 Thread Alan Pevec
** Changed in: nova/juno
   Status: Fix Committed => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1415768

Title:
  the pci deivce assigned to instance is inconsistent with DB record
  when restarting nova-compute

Status in OpenStack Compute (nova):
  Fix Released
Status in OpenStack Compute (nova) juno series:
  Fix Released

Bug description:
  After restarting the nova-compute process, I found that the PCI device
  assigned to the instance in libvirt.xml was different from the record in
  the 'pci_devices' DB table.

  Every time nova-compute is restarted, pci_tracker.allocations is reset
  to an empty dict. It does not contain the PCI devices that had already
  been allocated to instances, so some PCI devices may be reallocated to
  the instances and recorded in the DB, possibly inconsistently with
  libvirt.xml.

  In other words, nova-compute may reallocate the PCI devices for instances
  with PCI requests when restarting.

  See details:
  
http://git.openstack.org/cgit/openstack/nova/tree/nova/compute/resource_tracker.py#n347
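
  A minimal sketch of the kind of fix implied above (illustrative only;
  attribute and status names are assumptions): rebuild the in-memory
  allocation map from the devices loaded from the DB instead of starting
  from an empty dict.

  for dev in self.pci_devs:
      # Devices already allocated to an instance should be tracked again
      # after a restart, so they are not handed out a second time.
      if dev.status == 'allocated' and dev.instance_uuid:
          self.allocations.setdefault(dev.instance_uuid, []).append(dev)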

  This is a probabilistic problem and cannot always be reproduced. If the
  instance has a lot of PCI devices, it happens more often.

  Faced this bug in kilo master.

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1415768/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1411383] Re: Arista ML2 plugin incorrectly syncs with EOS

2015-11-19 Thread Alan Pevec
** Changed in: neutron/juno
   Status: Fix Committed => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1411383

Title:
  Arista ML2 plugin incorrectly syncs with EOS

Status in neutron:
  In Progress
Status in neutron juno series:
  Fix Released

Bug description:
  The Arista ML2 plugin periodically compares the data in the Neutron DB
  with EOS to ensure that they are in sync. If EOS reboots, then the
  data might be out of sync and the plugin needs to push data from
  Neutron DB to EOS. As an optimization, the plugin gets and stores the
  time at which the data on EOS was modified. Just before a sync, the
  plugin compares the stored time with the timestamp on EOS and performs
  the sync only if the timestamps differ.

  Due to a bug, the timestamp is incorrectly stored in the plugin
  because of which the sync never takes place and the only way to force
  a sync is to restart the neutron server.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1411383/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1408176] Re: Nova instance not boot after host restart but still show as Running

2015-11-19 Thread Alan Pevec
** Changed in: nova/juno
   Status: Fix Committed => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1408176

Title:
  Nova instance not boot after host restart but still show as Running

Status in OpenStack Compute (nova):
  Confirmed
Status in OpenStack Compute (nova) juno series:
  Fix Released

Bug description:
  The nova host lost power. After it restarted, the previously running
  instance is still shown in the "Running" state but is actually not started:

  root@allinone-controller0-esenfmnxzcvk:~# nova list
  
+--++++-+---+
  | ID   | Name   | 
Status | Task State | Power State | Networks  |
  
+--++++-+---+
  | 13d9eead-191e-434e-8813-2d3bf8d3aae4 | alexcloud-controller0-rr5kdtqmv7qz | 
ACTIVE | -  | Running | default-net=172.16.0.15, 30.168.98.61 |
  
+--++++-+---+
  root@allinone-controller0-esenfmnxzcvk:~# ps -ef |grep -i qemu
  root  95513  90291  0 14:46 pts/000:00:00 grep --color=auto -i qemu

  
  Please note the resume_guests_state_on_host_boot flag is False. Log file is 
attached.

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1408176/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1416933] Re: Race condition in Ha router updating port status

2015-11-19 Thread Alan Pevec
** Changed in: neutron/juno
   Status: Fix Committed => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1416933

Title:
  Race condition in Ha router updating port status

Status in neutron:
  Fix Released
Status in neutron juno series:
  Fix Released
Status in neutron kilo series:
  Fix Released

Bug description:
  When the L2 agent calls 'get_devices_details_list', the ports on that L2
  agent are first updated to BUILD, and 'update_device_up' then updates them
  to ACTIVE; but for an HA router, which has two L3 agents, there is a race
  condition.
  Reproduction steps (it does not always happen, but it happens often):
  1.  'router-interface-add' to add a subnet to the HA router
  2.  'router-gateway-set' to set the router gateway
  The gateway port status sometimes stays in BUILD forever.

  In 'get_device_details' the port status is updated, but I think that if a
  port's status is ACTIVE and port['admin_state_up'] is True, the port should
  not be updated:

  def get_device_details(self, rpc_context, **kwargs):
      ..
      ..
      new_status = (q_const.PORT_STATUS_BUILD if port['admin_state_up']
                    else q_const.PORT_STATUS_DOWN)
      if port['status'] != new_status:
          plugin.update_port_status(rpc_context,
                                    port_id,
                                    new_status,
                                    host)
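
  A minimal sketch of the guard being suggested (illustrative only, not the
  merged fix):

  new_status = (q_const.PORT_STATUS_BUILD if port['admin_state_up']
                else q_const.PORT_STATUS_DOWN)
  # Do not knock an ACTIVE, administratively-up port back to BUILD just
  # because a second (HA) L3 agent asks for its details.
  already_active = (port['status'] == q_const.PORT_STATUS_ACTIVE
                    and port['admin_state_up'])
  if port['status'] != new_status and not already_active:
      plugin.update_port_status(rpc_context, port_id, new_status, host)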

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1416933/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1419785] Re: VMware: running a redundant nova compute deletes running instances

2015-11-19 Thread Alan Pevec
** Changed in: nova/juno
   Status: Fix Committed => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1419785

Title:
  VMware: running a redundant nova compute deletes running instances

Status in OpenStack Compute (nova):
  Fix Released
Status in OpenStack Compute (nova) juno series:
  Fix Released

Bug description:
  When running more than one nova compute configured for the same
  cluster, rebooting one of the computes will delete all running
  instances.

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1419785/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1420032] Re: remove_router_interface doesn't scale well with dvr routers

2015-11-19 Thread Alan Pevec
** Changed in: neutron/juno
   Status: Fix Committed => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1420032

Title:
  remove_router_interface doesn't scale well with dvr routers

Status in neutron:
  Fix Released
Status in neutron juno series:
  Fix Released

Bug description:
  With dvr enabled , neutron remove-router-interface significantly
  degrades in response time as the number of l3_agents and the number of
  routers increases.   A significant contributor to the poor performance
  is due to check_ports_exist_on_l3agent.  The call to
  get_subnet_ids_on_router returns an empty list since the port has
  already been deleted by this point.  The empty subnet list is then
  used as a filter to the subsequent call core_plugin.get_ports which
  unexpectedly returns all ports instead of an empty list of ports.
  Erroneously looping through the entire list of ports is the biggest
  contributor to the poor scalability.
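
  A minimal sketch of the failure mode described above (hypothetical helper
  and parameter names; it only assumes that an empty filter value is treated
  by get_ports as "no constraint"):

      def check_ports_exist_on_l3agent(core_plugin, context, subnet_ids, host):
          # subnet_ids comes from get_subnet_ids_on_router() and may be []
          # because the router interface port was already deleted.
          if not subnet_ids:
              # Filtering get_ports() on an empty list would match every
              # port, so short-circuit instead of walking the whole table.
              return False
          ports = core_plugin.get_ports(
              context, filters={'fixed_ips': {'subnet_id': subnet_ids}})
          return any(p.get('binding:host_id') == host for p in ports)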

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1420032/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1423427] Re: tempest baremetal client is creating node with wrong property keys

2015-11-19 Thread Alan Pevec
** Changed in: nova/juno
   Status: Fix Committed => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1423427

Title:
  tempest baremetal client is creating node with wrong property keys

Status in OpenStack Compute (nova):
  Invalid
Status in OpenStack Compute (nova) juno series:
  Fix Released
Status in tempest:
  Fix Released

Bug description:
  A new test has been added to tempest to stress the os-baremetal-nodes
  API extension.  The test periodically fails in the gate with traceback
  in n-api log:

  [req-01dcd35b-55f4-4688-ba18-7fe0c6defd52 
BaremetalNodesAdminTestJSON-1864409967 BaremetalNodesAdminTestJSON-1481542636] 
Caught error: 'cpus'
  2015-02-19 01:37:41.910 2521 TRACE nova.api.openstack Traceback (most recent 
call last):
  2015-02-19 01:37:41.910 2521 TRACE nova.api.openstack   File 
"/opt/stack/new/nova/nova/api/openstack/__init__.py", line 125, in __call__
  2015-02-19 01:37:41.910 2521 TRACE nova.api.openstack return 
req.get_response(self.application)
  2015-02-19 01:37:41.910 2521 TRACE nova.api.openstack   File 
"/usr/lib/python2.7/dist-packages/webob/request.py", line 1320, in send
  2015-02-19 01:37:41.910 2521 TRACE nova.api.openstack application, 
catch_exc_info=False)
  2015-02-19 01:37:41.910 2521 TRACE nova.api.openstack   File 
"/usr/lib/python2.7/dist-packages/webob/request.py", line 1284, in 
call_application
  2015-02-19 01:37:41.910 2521 TRACE nova.api.openstack app_iter = 
application(self.environ, start_response)
  2015-02-19 01:37:41.910 2521 TRACE nova.api.openstack   File 
"/usr/lib/python2.7/dist-packages/webob/dec.py", line 144, in __call__
  2015-02-19 01:37:41.910 2521 TRACE nova.api.openstack return 
resp(environ, start_response)
  2015-02-19 01:37:41.910 2521 TRACE nova.api.openstack   File 
"/usr/local/lib/python2.7/dist-packages/keystonemiddleware/auth_token.py", line 
977, in __call__
  2015-02-19 01:37:41.910 2521 TRACE nova.api.openstack return 
self._call_app(env, start_response)
  2015-02-19 01:37:41.910 2521 TRACE nova.api.openstack   File 
"/usr/local/lib/python2.7/dist-packages/keystonemiddleware/auth_token.py", line 
902, in _call_app
  2015-02-19 01:37:41.910 2521 TRACE nova.api.openstack return 
self._app(env, _fake_start_response)
  2015-02-19 01:37:41.910 2521 TRACE nova.api.openstack   File 
"/usr/local/lib/python2.7/dist-packages/routes/middleware.py", line 136, in 
__call__
  2015-02-19 01:37:41.910 2521 TRACE nova.api.openstack response = 
self.app(environ, start_response)
  2015-02-19 01:37:41.910 2521 TRACE nova.api.openstack   File 
"/usr/lib/python2.7/dist-packages/webob/dec.py", line 130, in __call__
  2015-02-19 01:37:41.910 2521 TRACE nova.api.openstack resp = 
self.call_func(req, *args, **self.kwargs)
  2015-02-19 01:37:41.910 2521 TRACE nova.api.openstack   File 
"/usr/lib/python2.7/dist-packages/webob/dec.py", line 195, in call_func
  2015-02-19 01:37:41.910 2521 TRACE nova.api.openstack return 
self.func(req, *args, **kwargs)
  2015-02-19 01:37:41.910 2521 TRACE nova.api.openstack   File 
"/opt/stack/new/nova/nova/api/openstack/wsgi.py", line 749, in __call__
  2015-02-19 01:37:41.910 2521 TRACE nova.api.openstack content_type, body, 
accept)
  2015-02-19 01:37:41.910 2521 TRACE nova.api.openstack   File 
"/opt/stack/new/nova/nova/api/openstack/wsgi.py", line 814, in _process_stack
  2015-02-19 01:37:41.910 2521 TRACE nova.api.openstack action_result = 
self.dispatch(meth, request, action_args)
  2015-02-19 01:37:41.910 2521 TRACE nova.api.openstack   File 
"/opt/stack/new/nova/nova/api/openstack/wsgi.py", line 904, in dispatch
  2015-02-19 01:37:41.910 2521 TRACE nova.api.openstack return 
method(req=request, **action_args)
  2015-02-19 01:37:41.910 2521 TRACE nova.api.openstack   File 
"/opt/stack/new/nova/nova/api/openstack/compute/contrib/baremetal_nodes.py", 
line 123, in index
  2015-02-19 01:37:41.910 2521 TRACE nova.api.openstack 'cpus': 
inode.properties['cpus'],
  2015-02-19 01:37:41.910 2521 TRACE nova.api.openstack KeyError: 'cpus'
  2015-02-19 01:37:41.910 2521 TRACE nova.api.openstack

  This hits only periodically and only when another tempest baremetal
  test is running in parallel to the new test.  The other tests
  (tempest.api.baremetal.*) create some nodes in Ironic with node
  properties that are not the standard resource properties the
  nova->ironic proxy expects (from
  nova/api/openstack/compute/contrib/baremetal_nodes.py:201):

    for inode in ironic_nodes:
        node = {'id': inode.uuid,
                'interfaces': [],
                'host': 'IRONIC MANAGED',
                'task_state': inode.provision_state,
                'cpus': inode.properties['cpus'],
                'memory_mb': inode.properties['memory_mb'],

[Yahoo-eng-team] [Bug 1423772] Re: During live-migration Nova expects identical IQN from attached volume(s)

2015-11-19 Thread Alan Pevec
** Changed in: nova/juno
   Status: Fix Committed => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1423772

Title:
  During live-migration Nova expects identical IQN from attached
  volume(s)

Status in OpenStack Compute (nova):
  Fix Released
Status in OpenStack Compute (nova) juno series:
  Fix Released
Status in OpenStack Compute (nova) kilo series:
  Fix Released

Bug description:
  When attempting to do a live-migration on an instance with one or more
  attached volumes, Nova expects that the IQN will be exactly the same
  as it's attaching the volume(s) to the new host. This conflicts with
  the Cinder settings such as "hp3par_iscsi_ips" which allows for
  multiple IPs for the purpose of load balancing.

  Example:
  An instance on Host A has a volume attached at "/dev/disk/by-path/ip-10.10.220.244:3260-iscsi-iqn.2000-05.com.3pardata:22210002ac002a13-lun-2"
  An attempt is made to migrate the instance to Host B.
  Cinder sends the request to attach the volume to the new host.
  Cinder gives the new host "/dev/disk/by-path/ip-10.10.120.244:3260-iscsi-iqn.2000-05.com.3pardata:22210002ac002a13-lun-2"
  Nova looks for the volume on the new host at the old location "/dev/disk/by-path/ip-10.10.220.244:3260-iscsi-iqn.2000-05.com.3pardata:22210002ac002a13-lun-2"
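
  A small illustration of why the two hosts disagree: the by-path device name
  embeds the iSCSI portal IP, so a backend that hands out different portal IPs
  (e.g. via hp3par_iscsi_ips) produces a different path for the same LUN
  (sketch only; the format is taken from the example above):

      def by_path_device(portal_ip, iqn, lun):
          # Mirrors the /dev/disk/by-path naming shown in the example above.
          return ('/dev/disk/by-path/ip-%s:3260-iscsi-%s-lun-%d'
                  % (portal_ip, iqn, lun))

      iqn = 'iqn.2000-05.com.3pardata:22210002ac002a13'
      old_path = by_path_device('10.10.220.244', iqn, 2)  # path on Host A
      new_path = by_path_device('10.10.120.244', iqn, 2)  # path given to Host B
      assert old_path != new_path  # Nova still looks for old_path on Host B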

  The following error appears in n-cpu in this case:

  2015-02-19 17:09:05.574 ERROR nova.virt.libvirt.driver [-] [instance: 
b6fa616f-4e78-42b1-a747-9d081a4701df] Live Migration failure: Failed to open 
file 
'/dev/disk/by-path/ip-10.10.220.244:3260-iscsi-iqn.2000-05.com.3pardata:22210002ac002a13-lun-2':
 No such file or directory
  Traceback (most recent call last):
File "/usr/local/lib/python2.7/dist-packages/eventlet/hubs/poll.py", line 
115, in wait
  listener.cb(fileno)
File "/usr/local/lib/python2.7/dist-packages/eventlet/greenthread.py", line 
212, in main
  result = function(*args, **kwargs)
File "/opt/stack/nova/nova/virt/libvirt/driver.py", line 5426, in 
_live_migration
  recover_method(context, instance, dest, block_migration)
File "/opt/stack/nova/nova/openstack/common/excutils.py", line 82, in 
__exit__
  six.reraise(self.type_, self.value, self.tb)
File "/opt/stack/nova/nova/virt/libvirt/driver.py", line 5393, in 
_live_migration
  CONF.libvirt.live_migration_bandwidth)
File "/usr/local/lib/python2.7/dist-packages/eventlet/tpool.py", line 183, 
in doit
  result = proxy_call(self._autowrap, f, *args, **kwargs)
File "/usr/local/lib/python2.7/dist-packages/eventlet/tpool.py", line 141, 
in proxy_call
  rv = execute(f, *args, **kwargs)
File "/usr/local/lib/python2.7/dist-packages/eventlet/tpool.py", line 122, 
in execute
  six.reraise(c, e, tb)
File "/usr/local/lib/python2.7/dist-packages/eventlet/tpool.py", line 80, 
in tworker
  rv = meth(*args, **kwargs)
File "/usr/lib/python2.7/dist-packages/libvirt.py", line 1582, in 
migrateToURI2
  if ret == -1: raise libvirtError ('virDomainMigrateToURI2() failed', 
dom=self)
  libvirtError: Failed to open file 
'/dev/disk/by-path/ip-10.10.220.244:3260-iscsi-iqn.2000-05.com.3pardata:22210002ac002a13-lun-2':
 No such file or directory
  Removing descriptor: 3

  
  When looking at the nova DB, this is the state of block_device_mapping prior 
to the migration attempt:

  mysql> select * from block_device_mapping where instance_uuid='b6fa616f-4e78-42b1-a747-9d081a4701df' and deleted=0;
  (table output truncated in the original report; the columns shown were created_at, updated_at, deleted_at, id, device_name, delete_on_termination, snapshot_id, volume_id, volume_size, no_device, connection_info, ...)

[Yahoo-eng-team] [Bug 1433049] Re: libvirt-xen: Instance status in nova may be different than real status

2015-11-19 Thread Alan Pevec
** Changed in: nova/juno
   Status: Fix Committed => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1433049

Title:
  libvirt-xen: Instance status in nova may be different than real status

Status in OpenStack Compute (nova):
  Fix Released
Status in OpenStack Compute (nova) juno series:
  Fix Released

Bug description:
  The Tempest test
  ServerActionsTestJSON:test_resize_server_confirm_from_stopped and other
  similar *_from_stopped tests may fail with the libvirt-xen driver because
  the test times out waiting for the instance to reach SHUTOFF while nova
  keeps reporting the instance as ACTIVE.

  Traceback (most recent call last):
File 
"/opt/stack/tempest/tempest/api/compute/servers/test_server_actions.py", line 
230, in test_resize_server_confirm_from_stopped
  self._test_resize_server_confirm(stop=True)
File 
"/opt/stack/tempest/tempest/api/compute/servers/test_server_actions.py", line 
209, in _test_resize_server_confirm
  self.client.wait_for_server_status(self.server_id, expected_status)
File "/opt/stack/tempest/tempest/services/compute/json/servers_client.py", 
line 183, in wait_for_server_status
  ready_wait=ready_wait)
File "/opt/stack/tempest/tempest/common/waiters.py", line 93, in 
wait_for_server_status
  raise exceptions.TimeoutException(message)
  tempest.exceptions.TimeoutException: Request timed out
  Details: (ServerActionsTestJSON:test_resize_server_confirm_from_stopped) 
Server a0f07187-4e08-4664-ad48-a03cffb87873 failed to reach SHUTOFF status and 
task state "None" within the required time (196 s). Current status: ACTIVE. 
Current task state: None.

  
  From the nova log, I could see "VM Started (Lifecycle Event)" being reported
  while the instance was shut down and being resized.

  After tracking down this bug, the issue may come from Change-Id
  I690d3d700ab4d057554350da143ff77d78b509c6, "Delay STOPPED lifecycle
  event for Xen domains".

  A way to reproduce would be to run this script on a Xen machine using a small 
Cirros instance:
  nova boot --image 'cirros-0.3.2-x86_64-uec' --flavor 42 instance
  nova stop instance
  # wait sometime (around 20s) so we start with SHUTDOWN state
  nova start instance
  nova stop instance
  nova resize instance 84
  nova resize-confirm instance
  # check new state, should be shutoff.

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1433049/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1427343] Re: missing entry point for cisco apic topology agent

2015-11-19 Thread Alan Pevec
** Changed in: neutron/juno
   Status: Fix Committed => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1427343

Title:
  missing entry point for cisco apic topology agent

Status in neutron:
  Fix Released
Status in neutron juno series:
  Fix Released

Bug description:
  Cisco APIC topology agent [0] is missing the entry point.

  
  [0] neutron.plugins.ml2.drivers.cisco.apic.apic_topology.ApicTopologyService

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1427343/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1424096] Re: DVR routers attached to shared networks aren't being unscheduled from a compute node after deleting the VMs using the shared net

2015-11-19 Thread Alan Pevec
** Changed in: neutron/juno
   Status: Fix Committed => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1424096

Title:
  DVR routers attached to shared networks aren't being unscheduled from
  a compute node after deleting the VMs using the shared net

Status in neutron:
  Fix Released
Status in neutron juno series:
  Fix Released
Status in neutron kilo series:
  New

Bug description:
  As the administrator, a DVR router is created and attached to a shared
  network. The administrator also created the shared network.

  As a non-admin tenant, a VM is created with the port using the shared
  network.  The only VM using the shared network is scheduled to a
  compute node.  When the VM is deleted, it is expected the qrouter
  namespace of the DVR router is removed.  But it is not.  This doesn't
  happen with routers attached to networks that are not shared.

  The environment consists of 1 controller node and 1 compute node.

  Routers having the problem are created by the administrator attached
  to shared networks that are also owned by the admin:

  As the administrator, do the following commands on a setup having 1
  compute node and 1 controller node:

  1. neutron net-create shared-net -- --shared True
 Shared net's uuid is f9ccf1f9-aea9-4f72-accc-8a03170fa242.

  2. neutron subnet-create --name shared-subnet shared-net 10.0.0.0/16

  3. neutron router-create shared-router
  Router's UUID is ab78428a-9653-4a7b-98ec-22e1f956f44f.

  4. neutron router-interface-add shared-router shared-subnet
  5. neutron router-gateway-set  shared-router public

  
  As a non-admin tenant (tenant-id: 95cd5d9c61cf45c7bdd4e9ee52659d13), boot a 
VM using the shared-net network:

  1. neutron net-show shared-net
  +-+--+
  | Field   | Value|
  +-+--+
  | admin_state_up  | True |
  | id  | f9ccf1f9-aea9-4f72-accc-8a03170fa242 |
  | name| shared-net   |
  | router:external | False|
  | shared  | True |
  | status  | ACTIVE   |
  | subnets | c4fd4279-81a7-40d6-a80b-01e8238c1c2d |
  | tenant_id   | 2a54d6758fab47f4a2508b06284b5104 |
  +-+--+

  At this point, there are no VMs using the shared-net network running
  in the environment.

  2. Boot a VM that uses the shared-net network: nova boot ... --nic 
net-id=f9ccf1f9-aea9-4f72-accc-8a03170fa242 ... vm_sharednet
  3. Assign a floating IP to the VM "vm_sharednet"
  4. Delete "vm_sharednet". On the compute node, the qrouter namespace of the 
shared router (qrouter-ab78428a-9653-4a7b-98ec-22e1f956f44f) is left behind

  stack@DVR-CN2:~/DEVSTACK/manage$ ip netns
  qrouter-ab78428a-9653-4a7b-98ec-22e1f956f44f
   ...

  
  This is consistent with the output of "neutron l3-agent-list-hosting-router" 
command.  It shows the router is still being hosted on the compute node.

  
  $ neutron l3-agent-list-hosting-router ab78428a-9653-4a7b-98ec-22e1f956f44f
  +--------------------------------------+----------------+----------------+-------+
  | id                                   | host           | admin_state_up | alive |
  +--------------------------------------+----------------+----------------+-------+
  | 42f12eb0-51bc-4861-928a-48de51ba7ae1 | DVR-Controller | True           | :-)   |
  | ff869dc5-d39c-464d-86f3-112b55ec1c08 | DVR-CN2        | True           | :-)   |
  +--------------------------------------+----------------+----------------+-------+

  Running the "neutron l3-agent-router-remove" command removes the
  qrouter namespace from the compute node:

  $ neutron l3-agent-router-remove ff869dc5-d39c-464d-86f3-112b55ec1c08 
ab78428a-9653-4a7b-98ec-22e1f956f44f
  Removed router ab78428a-9653-4a7b-98ec-22e1f956f44f from L3 agent

  stack@DVR-CN2:~/DEVSTACK/manage$ ip netns
  stack@DVR-CN2:~/DEVSTACK/manage$

  This is a workaround to get the qrouter namespace deleted from the
  compute node. The L3-agent scheduler should have removed the router
  from the compute node when the VM is deleted.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1424096/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1438040] Re: fdb entries can't be removed when a VM is migrated

2015-11-19 Thread Alan Pevec
** Changed in: neutron/juno
   Status: Fix Committed => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1438040

Title:
  fdb entries can't be removed when a VM is migrated

Status in neutron:
  Fix Released
Status in neutron juno series:
  Fix Released
Status in neutron kilo series:
  Fix Released

Bug description:
  This problem can be reproduced as below:
  1. VM A is on computeA, VM B is on computeB, l2pop is enabled;
  2. vmB continuously pings vmA
  3. live-migrate vmA to computeB
  4. when the live migration finishes, vmB can no longer ping vmA

  The reason is as follows: in the l2pop driver, when vmA migrates to computeB
  the port status changes from BUILD to ACTIVE. The driver adds the port to
  self.migrated_ports when the port status is ACTIVE, but 'remove_fdb_entries'
  is only sent when the port status is BUILD:
  def update_port_postcommit(self, context):
  ...
  ...
  elif (context.host != context.original_host
  and context.status == const.PORT_STATUS_ACTIVE
  and not self.migrated_ports.get(orig['id'])):
  # The port has been migrated. We have to store the original
  # binding to send appropriate fdb once the port will be set
  # on the destination host
  self.migrated_ports[orig['id']] = (
  (orig, context.original_host))
  elif context.status != context.original_status:
  if context.status == const.PORT_STATUS_ACTIVE:
  self._update_port_up(context)
  elif context.status == const.PORT_STATUS_DOWN:
  fdb_entries = self._update_port_down(
  context, port, context.host)
  self.L2populationAgentNotify.remove_fdb_entries(
  self.rpc_ctx, fdb_entries)
  elif context.status == const.PORT_STATUS_BUILD:
  orig = self.migrated_ports.pop(port['id'], None)
  if orig:
  original_port = orig[0]
  original_host = orig[1]
  # this port has been migrated: remove its entries from fdb
  fdb_entries = self._update_port_down(
  context, original_port, original_host)
  self.L2populationAgentNotify.remove_fdb_entries(
  self.rpc_ctx, fdb_entries)

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1438040/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1429093] Re: nova allows to boot images with virtual size > root_gb specified in flavor

2015-11-19 Thread Alan Pevec
** Changed in: nova/juno
   Status: Fix Committed => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1429093

Title:
  nova allows to boot images with virtual size > root_gb specified in
  flavor

Status in OpenStack Compute (nova):
  Fix Released
Status in OpenStack Compute (nova) juno series:
  Fix Released
Status in OpenStack Compute (nova) kilo series:
  Fix Released
Status in OpenStack Security Advisory:
  Won't Fix

Bug description:
  It's currently possible to boot an instance from a QCOW2 image, which
  has the virtual size larger than root_gb size specified in the given
  flavor.

  Steps to reproduce:

  1. Download a QCOW2 image (e.g. Cirros -
  https://launchpad.net/cirros/trunk/0.3.0/+download/cirros-0.3.0-i386-disk.img)

  2. Resize the image to a reasonable size:

  qemu-img resize cirros-0.3.0-i386-disk.img +9G

  3. Upload the image to Glance:

  glance image-create --file cirros-0.3.0-i386-disk.img --name cirros-10GB --is-public True --progress --container-format bare --disk-format qcow2

  4. Boot the first VM using a 'correct' flavor (root_gb > virtual size
  of the Cirros image), e.g. m1.small (root_gb = 20)

  nova boot --image cirros-10GB --flavor m1.small demo-ok

  5. Wait until the VM boots.

  6. Boot the second VM using an 'incorrect' flavor (root_gb < virtual
  size of the Cirros image), e.g. m1.tiny (root_gb = 1):

  nova boot --image cirros-10GB --flavor m1.tiny demo-should-fail

  7. Wait until the VM boots.

  Expected result:

  demo-ok is in ACTIVE state
  demo-should-fail is in ERROR state (failed with FlavorDiskTooSmall)

  Actual result:

  demo-ok is in ACTIVE state
  demo-should-fail is in ACTIVE state
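
  A minimal sketch of the kind of check that would catch this before boot,
  assuming `qemu-img info --output=json` is available (it reports the virtual
  size in bytes); this is an illustration, not the actual Nova code path:

      import json
      import subprocess

      def check_virtual_size_fits_flavor(image_path, flavor_root_gb):
          # Ask qemu-img for the image's virtual size and compare it with
          # the root disk size allowed by the flavor.
          out = subprocess.check_output(
              ['qemu-img', 'info', '--output=json', image_path])
          virtual_size = json.loads(out.decode('utf-8'))['virtual-size']
          if virtual_size > flavor_root_gb * 1024 ** 3:
              raise ValueError(
                  'image virtual size %d bytes does not fit in a %d GB '
                  'root disk (FlavorDiskTooSmall)'
                  % (virtual_size, flavor_root_gb))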

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1429093/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1438638] Re: Hyper-V: Compute Driver doesn't start if there are instances with no VM Notes

2015-11-19 Thread Alan Pevec
** Changed in: nova/juno
   Status: Fix Committed => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1438638

Title:
  Hyper-V: Compute Driver doesn't start if there are instances with no
  VM Notes

Status in OpenStack Compute (nova):
  Fix Released
Status in OpenStack Compute (nova) juno series:
  Fix Released
Status in OpenStack Compute (nova) kilo series:
  Fix Released

Bug description:
  The Nova Hyper-V Compute Driver cannot start if there are instances
  with Notes = None. This can happen when users manually alter the VM
  Notes, or when there are VMs that were created by the users themselves.

  Logs: http://paste.openstack.org/show/197681/

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1438638/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1431404] Re: Don't trace when @reverts_task_state fails on InstanceNotFound

2015-11-19 Thread Alan Pevec
** Changed in: nova/juno
   Status: Fix Committed => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1431404

Title:
  Don't trace when @reverts_task_state fails on InstanceNotFound

Status in OpenStack Compute (nova):
  Fix Released
Status in OpenStack Compute (nova) juno series:
  Fix Released

Bug description:
  This change https://review.openstack.org/#/c/163515/ added a warning
  when the @reverts_task_state decorator in the compute manager fails
  rather than just pass, because we were getting KeyErrors and never
  noticing them which broke the decorator.

  However, now we're tracing on InstanceNotFound which is a normal case
  if we're deleting the instance after a failure (tempest will delete
  the instance immediately after failures when tearing down a test):

  
http://logstash.openstack.org/#eyJzZWFyY2giOiJtZXNzYWdlOlwiRmFpbGVkIHRvIHJldmVydCB0YXNrIHN0YXRlIGZvciBpbnN0YW5jZVwiIiwiZmllbGRzIjpbXSwib2Zmc2V0IjowLCJ0aW1lZnJhbWUiOiI4NjQwMCIsImdyYXBobW9kZSI6ImNvdW50IiwidGltZSI6eyJ1c2VyX2ludGVydmFsIjowfSwic3RhbXAiOjE0MjYxNzA3MDE2OTV9

  http://logs.openstack.org/98/163798/1/check/check-tempest-dsvm-
  postgres-
  full/6eff665/logs/screen-n-cpu.txt.gz#_2015-03-12_13_11_36_304

  2015-03-12 13:11:36.304 WARNING nova.compute.manager 
[req-a5f3b37e-19e9-4e1d-9be7-bbb9a8e7f4c1 DeleteServersTestJSON-706956764 
DeleteServersTestJSON-535578435] [instance: 
6de2ad51-3155-4538-830d-f02de39b4be3] Failed to revert task state for instance. 
Error: Instance 6de2ad51-3155-4538-830d-f02de39b4be3 could not be found.
  Traceback (most recent call last):

File "/usr/local/lib/python2.7/dist-packages/oslo_messaging/rpc/server.py", 
line 142, in inner
  return func(*args, **kwargs)

File "/opt/stack/new/nova/nova/conductor/manager.py", line 134, in 
instance_update
  columns_to_join=['system_metadata'])

File "/opt/stack/new/nova/nova/db/api.py", line 774, in 
instance_update_and_get_original
  columns_to_join=columns_to_join)

File "/opt/stack/new/nova/nova/db/sqlalchemy/api.py", line 143, in wrapper
  return f(*args, **kwargs)

File "/opt/stack/new/nova/nova/db/sqlalchemy/api.py", line 2395, in 
instance_update_and_get_original
  columns_to_join=columns_to_join)

File "/opt/stack/new/nova/nova/db/sqlalchemy/api.py", line 181, in wrapped
  return f(*args, **kwargs)

File "/opt/stack/new/nova/nova/db/sqlalchemy/api.py", line 2434, in 
_instance_update
  columns_to_join=columns_to_join)

File "/opt/stack/new/nova/nova/db/sqlalchemy/api.py", line 1670, in 
_instance_get_by_uuid
  raise exception.InstanceNotFound(instance_id=uuid)

  InstanceNotFound: Instance 6de2ad51-3155-4538-830d-f02de39b4be3 could
  not be found.
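
  A minimal sketch of the behaviour being asked for: catch the "instance is
  already gone" case inside the revert decorator and log it quietly instead
  of emitting a warning with a traceback (simplified stand-in code, not the
  actual Nova change):

      import functools
      import logging

      LOG = logging.getLogger(__name__)

      class InstanceNotFound(Exception):
          """Stand-in for nova.exception.InstanceNotFound in this sketch."""

      def reverts_task_state(fn):
          @functools.wraps(fn)
          def wrapper(self, context, *args, **kwargs):
              try:
                  return fn(self, context, *args, **kwargs)
              except Exception:
                  try:
                      self._instance_update(context, kwargs['instance'],
                                            task_state=None)
                  except InstanceNotFound:
                      # The instance was deleted while the operation failed
                      # (tempest deletes servers in teardown): expected, so
                      # don't log a full traceback.
                      LOG.debug('Instance already deleted; skipping task '
                                'state revert.')
                  except Exception as exc:
                      LOG.warning('Failed to revert task state: %s', exc)
                  raise
          return wrapper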

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1431404/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1430239] Re: Hyper-V: *DataRoot paths are not set for instances

2015-11-19 Thread Alan Pevec
** Changed in: nova/juno
   Status: Fix Committed => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1430239

Title:
  Hyper-V: *DataRoot paths are not set for instances

Status in OpenStack Compute (nova):
  Fix Released
Status in OpenStack Compute (nova) juno series:
  Fix Released

Bug description:
  The Nova Hyper-V Driver does not set the Data Root path locations for
  newly created instances to the same location as the instances. By
  default, Hyper-V sets the location on C:\. This can cause issues for
  small C:\ partitions, as some of these files can be large.

  The path locations that needs to be set are: ConfigurationDataRoot,
  LogDataRoot, SnapshotDataRoot, SuspendDataRoot, SwapFileDataRoot.

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1430239/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1439302] Re: "FixedIpNotFoundForAddress: Fixed ip not found for address None." traces in gate runs

2015-11-19 Thread Alan Pevec
** Changed in: nova/juno
   Status: Fix Committed => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1439302

Title:
  "FixedIpNotFoundForAddress: Fixed ip not found for address None."
  traces in gate runs

Status in OpenStack Compute (nova):
  Fix Released
Status in OpenStack Compute (nova) juno series:
  Fix Released

Bug description:
  Seeing this quite a bit in normal gate runs:

  http://logs.openstack.org/53/169753/2/check/check-tempest-dsvm-full-
  ceph/07dcae0/logs/screen-n-cpu.txt.gz?level=TRACE#_2015-04-01_14_34_37_110

  
http://logstash.openstack.org/#eyJzZWFyY2giOiJtZXNzYWdlOlwiRml4ZWRJcE5vdEZvdW5kRm9yQWRkcmVzczogRml4ZWQgaXAgbm90IGZvdW5kIGZvciBhZGRyZXNzIE5vbmUuXCIgQU5EIHRhZ3M6XCJzY3JlZW4tbi1jcHUudHh0XCIgQU5EIHRhZ3M6XCJtdWx0aWxpbmVcIiIsImZpZWxkcyI6W10sIm9mZnNldCI6MCwidGltZWZyYW1lIjoiNjA0ODAwIiwiZ3JhcGhtb2RlIjoiY291bnQiLCJ0aW1lIjp7InVzZXJfaW50ZXJ2YWwiOjB9LCJzdGFtcCI6MTQyNzkwMjQ0NTg4OSwibW9kZSI6IiIsImFuYWx5emVfZmllbGQiOiIifQ==

  2015-04-01 14:34:37.110 674 TRACE nova.compute.manager [instance: 
2fc5caf4-8ff7-45bc-940f-c13d696a1d9d] FixedIpNotFoundForAddress: Fixed ip not 
found for address None.
  2015-04-01 14:34:37.110 674 TRACE nova.compute.manager [instance: 
2fc5caf4-8ff7-45bc-940f-c13d696a1d9d]
  2015-04-01 14:34:37.110 674 TRACE nova.compute.manager [instance: 
2fc5caf4-8ff7-45bc-940f-c13d696a1d9d] Traceback (most recent call last):
  2015-04-01 14:34:37.110 674 TRACE nova.compute.manager [instance: 
2fc5caf4-8ff7-45bc-940f-c13d696a1d9d]
  2015-04-01 14:34:37.110 674 TRACE nova.compute.manager [instance: 
2fc5caf4-8ff7-45bc-940f-c13d696a1d9d]   File 
"/usr/local/lib/python2.7/dist-packages/oslo_messaging/rpc/dispatcher.py", line 
142, in _dispatch_and_reply
  2015-04-01 14:34:37.110 674 TRACE nova.compute.manager [instance: 
2fc5caf4-8ff7-45bc-940f-c13d696a1d9d] executor_callback))
  2015-04-01 14:34:37.110 674 TRACE nova.compute.manager [instance: 
2fc5caf4-8ff7-45bc-940f-c13d696a1d9d]
  2015-04-01 14:34:37.110 674 TRACE nova.compute.manager [instance: 
2fc5caf4-8ff7-45bc-940f-c13d696a1d9d]   File 
"/usr/local/lib/python2.7/dist-packages/oslo_messaging/rpc/dispatcher.py", line 
186, in _dispatch
  2015-04-01 14:34:37.110 674 TRACE nova.compute.manager [instance: 
2fc5caf4-8ff7-45bc-940f-c13d696a1d9d] executor_callback)
  2015-04-01 14:34:37.110 674 TRACE nova.compute.manager [instance: 
2fc5caf4-8ff7-45bc-940f-c13d696a1d9d]
  2015-04-01 14:34:37.110 674 TRACE nova.compute.manager [instance: 
2fc5caf4-8ff7-45bc-940f-c13d696a1d9d]   File 
"/usr/local/lib/python2.7/dist-packages/oslo_messaging/rpc/dispatcher.py", line 
130, in _do_dispatch
  2015-04-01 14:34:37.110 674 TRACE nova.compute.manager [instance: 
2fc5caf4-8ff7-45bc-940f-c13d696a1d9d] result = func(ctxt, **new_args)
  2015-04-01 14:34:37.110 674 TRACE nova.compute.manager [instance: 
2fc5caf4-8ff7-45bc-940f-c13d696a1d9d]
  2015-04-01 14:34:37.110 674 TRACE nova.compute.manager [instance: 
2fc5caf4-8ff7-45bc-940f-c13d696a1d9d]   File 
"/opt/stack/new/nova/nova/network/floating_ips.py", line 186, in 
deallocate_for_instance
  2015-04-01 14:34:37.110 674 TRACE nova.compute.manager [instance: 
2fc5caf4-8ff7-45bc-940f-c13d696a1d9d] super(FloatingIP, 
self).deallocate_for_instance(context, **kwargs)
  2015-04-01 14:34:37.110 674 TRACE nova.compute.manager [instance: 
2fc5caf4-8ff7-45bc-940f-c13d696a1d9d]
  2015-04-01 14:34:37.110 674 TRACE nova.compute.manager [instance: 
2fc5caf4-8ff7-45bc-940f-c13d696a1d9d]   File 
"/opt/stack/new/nova/nova/network/manager.py", line 558, in 
deallocate_for_instance
  2015-04-01 14:34:37.110 674 TRACE nova.compute.manager [instance: 
2fc5caf4-8ff7-45bc-940f-c13d696a1d9d] instance=instance)
  2015-04-01 14:34:37.110 674 TRACE nova.compute.manager [instance: 
2fc5caf4-8ff7-45bc-940f-c13d696a1d9d]
  2015-04-01 14:34:37.110 674 TRACE nova.compute.manager [instance: 
2fc5caf4-8ff7-45bc-940f-c13d696a1d9d]   File 
"/opt/stack/new/nova/nova/network/manager.py", line 214, in deallocate_fixed_ip
  2015-04-01 14:34:37.110 674 TRACE nova.compute.manager [instance: 
2fc5caf4-8ff7-45bc-940f-c13d696a1d9d] context, address, 
expected_attrs=['network'])
  2015-04-01 14:34:37.110 674 TRACE nova.compute.manager [instance: 
2fc5caf4-8ff7-45bc-940f-c13d696a1d9d]
  2015-04-01 14:34:37.110 674 TRACE nova.compute.manager [instance: 
2fc5caf4-8ff7-45bc-940f-c13d696a1d9d]   File 
"/opt/stack/new/nova/nova/objects/base.py", line 161, in wrapper
  2015-04-01 14:34:37.110 674 TRACE nova.compute.manager [instance: 
2fc5caf4-8ff7-45bc-940f-c13d696a1d9d] args, kwargs)
  2015-04-01 14:34:37.110 674 TRACE nova.compute.manager [instance: 
2fc5caf4-8ff7-45bc-940f-c13d696a1d9d]
  2015-04-01 14:34:37.110 674 TRACE nova.compute.manager [instance: 
2fc5caf4-8ff7-45bc-940f-c13d696a1d9d]   File 
"/opt/stack/new/nova/nova/conductor/rpcapi.py", line 329, in object_class_action

[Yahoo-eng-team] [Bug 1439817] Re: IP set full error in kernel log

2015-11-19 Thread Alan Pevec
** Changed in: neutron/juno
   Status: Fix Committed => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1439817

Title:
  IP set full error in kernel log

Status in neutron:
  Fix Released
Status in neutron juno series:
  Fix Released

Bug description:
  This is appearing in some logs upstream:
  http://logs.openstack.org/73/170073/1/experimental/check-tempest-dsvm-
  neutron-full-non-
  isolated/ac882e3/logs/kern_log.txt.gz#_Apr__2_13_03_06

  And it has also been reported by andreaf in IRC as having been
  observed downstream.

  Logstash is not very helpful as this manifests only with a job currently
  in the experimental queue.
  As said job runs in non-isolated mode, the accrual of elements in the
  IPset until it reaches saturation is one thing that might need to be
  investigated.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1439817/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1439857] Re: live-migration failure leave the port to BUILD state

2015-11-19 Thread Alan Pevec
** Changed in: neutron/juno
   Status: Fix Committed => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1439857

Title:
  live-migration failure leave the port to BUILD state

Status in neutron:
  Fix Released
Status in neutron juno series:
  Fix Released
Status in neutron kilo series:
  Fix Released

Bug description:
  I've set up a lab where live migration can occur in block mode

  It seems that if I leave the default config, block live-migration
  fails;

  I can see that the port is left in BUILD state after the failure, but
  the VM is still running on the source host.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1439857/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1439223] Re: misleading power state logging in _sync_instance_power_state

2015-11-19 Thread Alan Pevec
** Changed in: nova/juno
   Status: Fix Committed => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1439223

Title:
  misleading power state logging in _sync_instance_power_state

Status in OpenStack Compute (nova):
  Fix Released
Status in OpenStack Compute (nova) juno series:
  Fix Released

Bug description:
  Commit aa1792eb4c1d10e9a192142ce7e20d37871d916a added more verbose
  logging of the various database and hypervisor states when
  _sync_instance_power_state is called (which can be called from
  handle_lifecycle_event - triggered by the libvirt driver, or from the
  _sync_power_states periodic task).

  The current instance power_state from the DB's POV and the power state
  from the hypervisor's POV (via handle_lifecycle_event) can be
  different and if they are different, the database is updated with the
  power_state from the hypervisor and the local db_power_state variable
  is updated to be the same as the vm_power_state (from the hypervisor).

  Then later, the db_power_state value is used to log the different
  states when we have conditions like the database says an instance is
  running / active but the hypervisor says it's stopped, so we call
  compute_api.stop().

  We should be logging the original database power state and the
  power_state from the hypervisor to more accurately debug when we're
  out of sync.
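
  A minimal sketch of the logging change being described (assumed variable
  names; the real change is the review linked below):

      import logging

      LOG = logging.getLogger(__name__)

      def sync_instance_power_state(db_instance, vm_power_state):
          # Remember the DB's view *before* overwriting it, purely so the
          # log line shows what was actually out of sync.
          orig_db_power_state = db_instance.power_state
          if orig_db_power_state != vm_power_state:
              db_instance.power_state = vm_power_state
              db_instance.save()
          LOG.info('Power state sync: DB had %s, hypervisor reports %s',
                   orig_db_power_state, vm_power_state)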

  This is already fixed on master:
  https://review.openstack.org/#/c/159263/

  I'm reporting the bug so that it can be backported to stable/juno.

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1439223/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1438331] Re: Nova fails to delete rbd image, puts guest in to ERROR state

2015-11-19 Thread Alan Pevec
** Changed in: nova/juno
   Status: Fix Committed => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1438331

Title:
  Nova fails to delete rbd image, puts guest in to ERROR state

Status in OpenStack Compute (nova):
  Fix Released
Status in OpenStack Compute (nova) juno series:
  Fix Released

Bug description:
  When removing guests that have been booted on Ceph, Nova will
  occasionally put guests into ERROR state with the following ...

  Reported to the controller:

  | fault| {"message": "error removing image", 
"code": 500, "details": "  File 
\"/usr/lib/python2.7/site-packages/nova/compute/manager.py\", line 314, in 
decorated_function |
  |  | return function(self, context, 
*args, **kwargs)
   |
  |  |   File 
\"/usr/lib/python2.7/site-packages/nova/compute/manager.py\", line 2525, in 
terminate_instance |
  |  | do_terminate_instance(instance, 
bdms)   
  |
  |  |   File 
\"/usr/lib/python2.7/site-packages/nova/openstack/common/lockutils.py\", line 
272, in inner|
  |  | return f(*args, **kwargs)

 |
  |  |   File 
\"/usr/lib/python2.7/site-packages/nova/compute/manager.py\", line 2523, in 
do_terminate_instance  |
  |  | 
self._set_instance_error_state(context, instance)   
  |
  |  |   File 
\"/usr/lib/python2.7/site-packages/nova/openstack/common/excutils.py\", line 
82, in __exit__   |
  |  | six.reraise(self.type_, 
self.value, self.tb)
  |
  |  |   File 
\"/usr/lib/python2.7/site-packages/nova/compute/manager.py\", line 2513, in 
do_terminate_instance  |
  |  | self._delete_instance(context, 
instance, bdms, quotas) 
   |
  |  |   File 
\"/usr/lib/python2.7/site-packages/nova/hooks.py\", line 131, in inner  
   |
  |  | rv = f(*args, **kwargs)  

 |
  |  |   File 
\"/usr/lib/python2.7/site-packages/nova/compute/manager.py\", line 2482, in 
_delete_instance   |
  |  | quotas.rollback()

 |
  |  |   File 
\"/usr/lib/python2.7/site-packages/nova/openstack/common/excutils.py\", line 
82, in __exit__   |
  |  | six.reraise(self.type_, 
self.value, self.tb)
  |
  |  |   File 
\"/usr/lib/python2.7/site-packages/nova/compute/manager.py\", line 2459, in 
_delete_instance   |
  |  | self._shutdown_instance(context, 
instance, bdms) 
 |
  |  |   File 
\"/usr/lib/python2.7/site-packages/nova/compute/manager.py\", line 2389, in 
_shutdown_instance

[Yahoo-eng-team] [Bug 1442494] Re: test_add_list_remove_router_on_l3_agent race-y for dvr

2015-11-19 Thread Alan Pevec
** Changed in: neutron/juno
   Status: Fix Committed => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1442494

Title:
  test_add_list_remove_router_on_l3_agent race-y for dvr

Status in neutron:
  Fix Released
Status in neutron juno series:
  Fix Released
Status in neutron kilo series:
  Fix Released

Bug description:
  Logstash:

  message:"in test_add_list_remove_router_on_l3_agent" AND build_name
  :"check-tempest-dsvm-neutron-dvr"

  
http://logstash.openstack.org/#eyJzZWFyY2giOiJtZXNzYWdlOlwiaW4gdGVzdF9hZGRfbGlzdF9yZW1vdmVfcm91dGVyX29uX2wzX2FnZW50XCIgQU5EIGJ1aWxkX25hbWU6XCJjaGVjay10ZW1wZXN0LWRzdm0tbmV1dHJvbi1kdnJcIiIsImZpZWxkcyI6W10sIm9mZnNldCI6MCwidGltZWZyYW1lIjoiNjA0ODAwIiwiZ3JhcGhtb2RlIjoiY291bnQiLCJ0aW1lIjp7InVzZXJfaW50ZXJ2YWwiOjB9LCJzdGFtcCI6MTQyODY0OTgxNDY3MSwibW9kZSI6IiIsImFuYWx5emVfZmllbGQiOiIifQ==

  Change [1], enabled by [2], exposed an intermittent failure when
  determining whether an agent is eligible for binding or not.

  [1] https://review.openstack.org/#/c/154289/
  [2] https://review.openstack.org/#/c/165246/

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1442494/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1440762] Re: Rebuild an instance with attached volume fails

2015-11-19 Thread Alan Pevec
** Changed in: nova/juno
   Status: Fix Committed => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1440762

Title:
  Rebuild an instance with attached volume fails

Status in OpenStack Compute (nova):
  Fix Released
Status in OpenStack Compute (nova) juno series:
  Fix Released
Status in OpenStack Compute (nova) kilo series:
  Fix Released

Bug description:
  When trying to rebuild an instance with attached volume, it fails with
  the errors:

  2015-02-04 08:41:27.477 22000 TRACE oslo.messaging.rpc.dispatcher 
libvirtError: Failed to terminate process 22913 with SIGKILL: Device or 
resource busy
  2015-02-04 08:41:27.477 22000 TRACE oslo.messaging.rpc.dispatcher
  <180>Feb 4 08:43:12 node-2 nova-compute Periodic task is updating the host 
stats, it is trying to get disk info for instance-0003, but the backing 
volume block device was removed by concurrent operations such as resize. Error: 
No volume Block Device Mapping at path: 
/dev/disk/by-path/ip-192.168.0.4:3260-iscsi-iqn.2010-10.org.openstack:volume-82ba5653-3e07-4f0f-b44d-a946f4dedde9-lun-1
  <182>Feb 4 08:43:13 node-2 nova-compute VM Stopped (Lifecycle Event)

  The full log of rebuild process is here:
  http://paste.openstack.org/show/166892/

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1440762/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1444630] Re: nova-compute should stop handling virt lifecycle events when it's shutting down

2015-11-19 Thread Alan Pevec
** Changed in: nova/juno
   Status: Fix Committed => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1444630

Title:
  nova-compute should stop handling virt lifecycle events when it's
  shutting down

Status in OpenStack Compute (nova):
  Fix Released
Status in OpenStack Compute (nova) juno series:
  Fix Released
Status in OpenStack Compute (nova) kilo series:
  Fix Released

Bug description:
  This is a follow on to bug 1293480 and related to bug 1408176 and bug
  1443186.

  There can be a race when rebooting a compute host where libvirt is
  shutting down guest VMs and sending STOPPED lifecycle events up to
  nova compute which then tries to stop them via the stop API, which
  sometimes works and sometimes doesn't - the compute service can go
  down with a vm_state of ACTIVE and task_state of powering-off which
  isn't resolve on host reboot.

  Sometimes the stop API completes and the instance is stopped with
  power_state=4 (shutdown) in the nova database.  When the host comes
  back up and libvirt restarts, it starts up the guest VMs which sends
  the STARTED lifecycle event and nova handles that but because the
  vm_state in the nova database is STOPPED and the power_state is 1
  (running) from the hypervisor, nova things it started up unexpectedly
  and stops it:

  
http://git.openstack.org/cgit/openstack/nova/tree/nova/compute/manager.py?id=2015.1.0rc1#n6145

  So nova shuts the running guest down.

  Actually the block in:

  
http://git.openstack.org/cgit/openstack/nova/tree/nova/compute/manager.py?id=2015.1.0rc1#n6145

  conflicts with the statement in power_state.py:

  
http://git.openstack.org/cgit/openstack/nova/tree/nova/compute/power_state.py?id=2015.1.0rc1#n19

  "The hypervisor is always considered the authority on the status
  of a particular VM, and the power_state in the DB should be viewed as a
  snapshot of the VMs's state in the (recent) past."

  Anyway, that's a different issue but the point is when nova-compute is
  shutting down it should stop accepting lifecycle events from the
  hypervisor (virt driver code) since it can't really reliably act on
  them anyway - we can leave any sync up that needs to happen in
  init_host() in the compute manager.
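
  A minimal sketch of the proposed behaviour (hypothetical attribute and
  method names; the real patch lives in the compute manager):

      import logging

      LOG = logging.getLogger(__name__)

      class LifecycleEventHandlerSketch(object):
          def __init__(self):
              self._shutting_down = False

          def cleanup_host(self):
              # Called while the service stops: from here on, lifecycle
              # events can no longer be acted on reliably.
              self._shutting_down = True

          def handle_lifecycle_event(self, event):
              if self._shutting_down:
                  LOG.debug('Ignoring lifecycle event %s: service is '
                            'shutting down; init_host() will resync on '
                            'the next start.', event)
                  return
              self._sync_instance_power_state(event)

          def _sync_instance_power_state(self, event):
              LOG.info('Syncing power state for event %s', event)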

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1444630/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1444397] Re: single allowed address pair rule can exhaust entire ipset space

2015-11-19 Thread Alan Pevec
** Changed in: neutron/juno
   Status: Fix Committed => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1444397

Title:
  single allowed address pair rule can exhaust entire ipset space

Status in neutron:
  Fix Released
Status in neutron juno series:
  Fix Released
Status in neutron kilo series:
  Fix Released

Bug description:
  The hash type used by the ipsets is 'ip' which explodes a CIDR into
  every member address (i.e. 10.100.0.0/16 becomes 65k entries). The
  allowed address pairs extension allows CIDRs so a single allowed
  address pair set can exhaust the entire IPset and break the security
  group rules for a tenant.
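
  A quick way to see the scale of the problem with the standard library
  (hash:ip stores every member address, so a single /16 pair already fills
  a default-sized ipset, whereas a network-style entry would stay a single
  element):

      import ipaddress

      # The allowed-address-pair CIDR, expanded the way a hash:ip set
      # stores it:
      pair_cidr = ipaddress.ip_network(u'10.100.0.0/16')
      print(pair_cidr.num_addresses)   # 65536 individual entries

      # Stored as a network (hash:net style) the same pair would be one
      # entry, leaving room for the tenant's other security group members.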

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1444397/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1443186] Re: rebooted instances are shutdown by libvirt lifecycle event handling

2015-11-19 Thread Alan Pevec
** Changed in: nova/juno
   Status: Fix Committed => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1443186

Title:
  rebooted instances are shutdown by libvirt lifecycle event handling

Status in OpenStack Compute (nova):
  Fix Released
Status in OpenStack Compute (nova) juno series:
  Fix Released
Status in OpenStack Compute (nova) kilo series:
  Fix Released

Bug description:
  This is a continuation of bug 1293480 (which created bug 1433049).
  Those were reported against xen domains with the libvirt driver but we
  have a recreate with CONF.libvirt.virt_type=kvm, see the attached logs
  and reference the instance with uuid
  9ad8f6c5-a5dc-4820-9ea5-fa081e74ec78.

  In this case, we're running a stress test of soft rebooting 30 active
  instances at once.  Because of a delay in the libvirt lifecycle event
  handling, they are all shut down after the reboot operation is complete
  and the instances go from ACTIVE to SHUTDOWN.

  This was reported to me against Icehouse code but the recreate is
  against Juno code with patch:

  https://review.openstack.org/#/c/169782/

  For better logging.

  Snippets from the log:

  2015-04-10 21:02:38.234 11195 AUDIT nova.compute.manager [req-
  b24d4f8d-4a10-44c8-81d7-f79f27e3a3e7 None] [instance:
  9ad8f6c5-a5dc-4820-9ea5-fa081e74ec78] Rebooting instance

  2015-04-10 21:03:47.703 11195 DEBUG nova.compute.manager [req-
  8219e6cf-dce8-44e7-a5c1-bf1879e155b2 None] [instance:
  9ad8f6c5-a5dc-4820-9ea5-fa081e74ec78] Received event network-vif-
  unplugged-0b2c7633-a5bc-4150-86b2-c8ba58ffa785 external_instance_event
  /usr/lib/python2.6/site-packages/nova/compute/manager.py:6285

  2015-04-10 21:03:49.299 11195 INFO nova.virt.libvirt.driver [req-
  b24d4f8d-4a10-44c8-81d7-f79f27e3a3e7 None] [instance:
  9ad8f6c5-a5dc-4820-9ea5-fa081e74ec78] Instance shutdown successfully.

  2015-04-10 21:03:53.251 11195 DEBUG nova.compute.manager [req-
  521a6bdb-172f-4c0c-9bef-855087d7dff0 None] [instance:
  9ad8f6c5-a5dc-4820-9ea5-fa081e74ec78] Received event network-vif-
  plugged-0b2c7633-a5bc-4150-86b2-c8ba58ffa785 external_instance_event
  /usr/lib/python2.6/site-packages/nova/compute/manager.py:6285

  2015-04-10 21:03:53.259 11195 INFO nova.virt.libvirt.driver [-]
  [instance: 9ad8f6c5-a5dc-4820-9ea5-fa081e74ec78] Instance running
  successfully.

  2015-04-10 21:03:53.261 11195 INFO nova.virt.libvirt.driver [req-
  b24d4f8d-4a10-44c8-81d7-f79f27e3a3e7 None] [instance:
  9ad8f6c5-a5dc-4820-9ea5-fa081e74ec78] Instance soft rebooted
  successfully.

  **
  At this point we have successfully soft rebooted the instance
  **

  now we get a lifecycle event from libvirt that the instance is
  stopped, since we're no longer running a task we assume the hypervisor
  is correct and we call the stop API

  2015-04-10 21:04:01.133 11195 DEBUG nova.virt.driver [-] Emitting event 
 
Stopped> emit_event /usr/lib/python2.6/site-packages/nova/virt/driver.py:1298
  2015-04-10 21:04:01.134 11195 INFO nova.compute.manager [-] [instance: 
9ad8f6c5-a5dc-4820-9ea5-fa081e74ec78] VM Stopped (Lifecycle Event)
  2015-04-10 21:04:01.245 11195 INFO nova.compute.manager [-] [instance: 
9ad8f6c5-a5dc-4820-9ea5-fa081e74ec78] Synchronizing instance power state after 
lifecycle event "Stopped"; current vm_state: active, current task_state: None, 
current DB power_state: 1, VM power_state: 4
  2015-04-10 21:04:01.334 11195 INFO nova.compute.manager [-] [instance: 
9ad8f6c5-a5dc-4820-9ea5-fa081e74ec78] During _sync_instance_power_state the DB 
power_state (1) does not match the vm_power_state from the hypervisor (4). 
Updating power_state in the DB to match the hypervisor.
  2015-04-10 21:04:01.463 11195 WARNING nova.compute.manager [-] [instance: 
9ad8f6c5-a5dc-4820-9ea5-fa081e74ec78] Instance shutdown by itself. Calling the 
stop API. Current vm_state: active, current task_state: None, original DB 
power_state: 1, current VM power_state: 4

  **
  now we get a lifecycle event from libvirt that the instance is started, but 
since the instance already has a task_state of 'powering-off' because of the 
previous stop API call from _sync_instance_power_state, we ignore it.
  **

  
  2015-04-10 21:04:02.085 11195 DEBUG nova.virt.driver [-] Emitting event 
 
Started> emit_event /usr/lib/python2.6/site-packages/nova/virt/driver.py:1298
  2015-04-10 21:04:02.086 11195 INFO nova.compute.manager [-] [instance: 
9ad8f6c5-a5dc-4820-9ea5-fa081e74ec78] VM Started (Lifecycle Event)
  2015-04-10 21:04:02.190 11195 INFO nova.compute.manager [-] [instance: 
9ad8f6c5-a5dc-4820-9ea5-fa081e74ec78] Synchronizing instance power state after 
lifecycle event "Started"; current vm_state: active, current task_state: 
powering-off, current DB power_state: 4, VM power_state: 1
  2015-04-10 21:04:02.414 11195 INFO nova.compute.manager [-] [instance: 
9ad8f6c5-a5dc-4820-9ea5-fa081e74ec78] During sy

[Yahoo-eng-team] [Bug 1445412] Re: performance of plugin_rpc.get_routers is bad

2015-11-19 Thread Alan Pevec
** Changed in: neutron/juno
   Status: Fix Committed => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1445412

Title:
  performance of plugin_rpc.get_routers is bad

Status in neutron:
  Fix Released
Status in neutron juno series:
  Fix Released
Status in neutron kilo series:
  New

Bug description:
  The get_routers plugin call that the l3 agent makes is serviced by a
  massive number of SQL queries, which leads the whole process to take on
  the order of hundreds of milliseconds to serve a request for 10
  routers.

  This will be a blanket bug for a series of performance improvements
  that will reduce that time by at least an order of magnitude.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1445412/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1447883] Re: Restrict netmask of CIDR to avoid DHCP resync is not enough

2015-11-19 Thread Alan Pevec
** Changed in: neutron/juno
   Status: Fix Committed => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1447883

Title:
  Restrict netmask of CIDR to avoid DHCP resync is not enough

Status in neutron:
  Fix Released
Status in neutron icehouse series:
  Fix Released
Status in neutron juno series:
  Fix Released
Status in neutron kilo series:
  Fix Released
Status in OpenStack Security Advisory:
  Won't Fix

Bug description:
  Restricting the netmask of the CIDR to avoid a DHCP resync is not enough.
  https://bugs.launchpad.net/neutron/+bug/1443798

  I'd like to prevent following case:

  [Condition]
- Plugin: ML2
- subnet with "enable_dhcp" is True

  [Operations]
  A. Specify "[]"(empty list) at "allocation_pools" when create/update-subnet
  ---
  $ $ curl -X POST -d '{"subnet": {"name": "test_subnet", "cidr": 
"192.168.200.0/24", "ip_version": 4, "network_id": 
"649c5531-338e-42b5-a2d1-4d49140deb02", "allocation_pools": []}}' -H 
"x-auth-token:$TOKEN" -H "content-type:application/json" 
http://127.0.0.1:9696/v2.0/subnets

  Then the dhcp-agent creates its own DHCP-port, and the resync bug is
  reproduced.

  B. Create port and exhaust allocation_pools
  ---
  1. Create a subnet with 192.168.1.0/24. The DHCP-port has already been created.
     gateway_ip: 192.168.1.1
     DHCP-port: 192.168.1.2
     allocation_pools{"start": 192.168.1.2, "end": 192.168.1.254}
     The number of available ip_addresses is 252.

  2. Create non-dhcp ports and exhaust the ip_addresses in allocation_pools.
     In this case, the user creates a port 252 times.
     The number of available ip_addresses is 0.

  3. The user deletes the DHCP-port (192.168.1.2).
     The number of available ip_addresses is 1.

  4. The user creates a non-dhcp port.
     The number of available ip_addresses is 0.
     Then the dhcp-agent tries to create a DHCP-port, and the resync bug is
     reproduced.
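
  As a quick illustration of the arithmetic in case B (a standalone sketch,
  not part of the reported code; the addresses come from the example above):

    # 192.168.1.0/24 -> 254 usable host addresses
    usable = 2 ** (32 - 24) - 2
    available = usable - 1 - 1          # minus gateway .1 and DHCP port .2
    print("after step 1: %d" % available)            # 252

    available -= 252                    # step 2: user ports exhaust the pool
    available += 1                      # step 3: user deletes the DHCP port
    available -= 1                      # step 4: user creates one more port
    print("left for the DHCP agent: %d" % available) # 0 -> resync loop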

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1447883/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1450624] Re: Nova waits for events from neutron on resize-revert that aren't coming

2015-11-19 Thread Alan Pevec
** Changed in: nova/juno
   Status: Fix Committed => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1450624

Title:
  Nova waits for events from neutron on resize-revert that aren't coming

Status in OpenStack Compute (nova):
  Fix Released
Status in OpenStack Compute (nova) juno series:
  Fix Released
Status in OpenStack Compute (nova) kilo series:
  Fix Released

Bug description:
  On resize-revert, the original host was waiting for plug events from
  neutron before restarting the instance. These aren't sent since we
  don't ever unplug the vifs. Thus, we'll always fail like this:

  
  2015-04-30 19:45:42.602 23513 TRACE oslo.messaging.rpc.dispatcher Traceback 
(most recent call last):
  2015-04-30 19:45:42.602 23513 TRACE oslo.messaging.rpc.dispatcher   File 
"/opt/bbc/openstack-10.0-bbc40/nova/local/lib/python2.7/site-packages/oslo/messaging/rpc/dispatcher.py",
 line 134, in _dispatch_and_reply
  2015-04-30 19:45:42.602 23513 TRACE oslo.messaging.rpc.dispatcher 
incoming.message))
  2015-04-30 19:45:42.602 23513 TRACE oslo.messaging.rpc.dispatcher   File 
"/opt/bbc/openstack-10.0-bbc40/nova/local/lib/python2.7/site-packages/oslo/messaging/rpc/dispatcher.py",
 line 177, in _dispatch
  2015-04-30 19:45:42.602 23513 TRACE oslo.messaging.rpc.dispatcher return 
self._do_dispatch(endpoint, method, ctxt, args)
  2015-04-30 19:45:42.602 23513 TRACE oslo.messaging.rpc.dispatcher   File 
"/opt/bbc/openstack-10.0-bbc40/nova/local/lib/python2.7/site-packages/oslo/messaging/rpc/dispatcher.py",
 line 123, in _do_dispatch
  2015-04-30 19:45:42.602 23513 TRACE oslo.messaging.rpc.dispatcher result 
= getattr(endpoint, method)(ctxt, **new_args)
  2015-04-30 19:45:42.602 23513 TRACE oslo.messaging.rpc.dispatcher   File 
"/opt/bbc/openstack-10.0-bbc40/nova/local/lib/python2.7/site-packages/nova/exception.py",
 line 88, in wrapped
  2015-04-30 19:45:42.602 23513 TRACE oslo.messaging.rpc.dispatcher payload)
  2015-04-30 19:45:42.602 23513 TRACE oslo.messaging.rpc.dispatcher   File 
"/opt/bbc/openstack-10.0-bbc40/nova/local/lib/python2.7/site-packages/nova/openstack/common/excutils.py",
 line 82, in __exit__
  2015-04-30 19:45:42.602 23513 TRACE oslo.messaging.rpc.dispatcher 
six.reraise(self.type_, self.value, self.tb)
  2015-04-30 19:45:42.602 23513 TRACE oslo.messaging.rpc.dispatcher   File 
"/opt/bbc/openstack-10.0-bbc40/nova/local/lib/python2.7/site-packages/nova/exception.py",
 line 71, in wrapped
  2015-04-30 19:45:42.602 23513 TRACE oslo.messaging.rpc.dispatcher return 
f(self, context, *args, **kw)
  2015-04-30 19:45:42.602 23513 TRACE oslo.messaging.rpc.dispatcher   File 
"/opt/bbc/openstack-10.0-bbc40/nova/local/lib/python2.7/site-packages/nova/compute/manager.py",
 line 298, in decorated_function
  2015-04-30 19:45:42.602 23513 TRACE oslo.messaging.rpc.dispatcher pass
  2015-04-30 19:45:42.602 23513 TRACE oslo.messaging.rpc.dispatcher   File 
"/opt/bbc/openstack-10.0-bbc40/nova/local/lib/python2.7/site-packages/nova/openstack/common/excutils.py",
 line 82, in __exit__
  2015-04-30 19:45:42.602 23513 TRACE oslo.messaging.rpc.dispatcher 
six.reraise(self.type_, self.value, self.tb)
  2015-04-30 19:45:42.602 23513 TRACE oslo.messaging.rpc.dispatcher   File 
"/opt/bbc/openstack-10.0-bbc40/nova/local/lib/python2.7/site-packages/nova/compute/manager.py",
 line 284, in decorated_function
  2015-04-30 19:45:42.602 23513 TRACE oslo.messaging.rpc.dispatcher return 
function(self, context, *args, **kwargs)
  2015-04-30 19:45:42.602 23513 TRACE oslo.messaging.rpc.dispatcher   File 
"/opt/bbc/openstack-10.0-bbc40/nova/local/lib/python2.7/site-packages/nova/compute/manager.py",
 line 348, in decorated_function
  2015-04-30 19:45:42.602 23513 TRACE oslo.messaging.rpc.dispatcher return 
function(self, context, *args, **kwargs)
  2015-04-30 19:45:42.602 23513 TRACE oslo.messaging.rpc.dispatcher   File 
"/opt/bbc/openstack-10.0-bbc40/nova/local/lib/python2.7/site-packages/nova/compute/manager.py",
 line 326, in decorated_function
  2015-04-30 19:45:42.602 23513 TRACE oslo.messaging.rpc.dispatcher 
kwargs['instance'], e, sys.exc_info())
  2015-04-30 19:45:42.602 23513 TRACE oslo.messaging.rpc.dispatcher   File 
"/opt/bbc/openstack-10.0-bbc40/nova/local/lib/python2.7/site-packages/nova/openstack/common/excutils.py",
 line 82, in __exit__
  2015-04-30 19:45:42.602 23513 TRACE oslo.messaging.rpc.dispatcher 
six.reraise(self.type_, self.value, self.tb)
  2015-04-30 19:45:42.602 23513 TRACE oslo.messaging.rpc.dispatcher   File 
"/opt/bbc/openstack-10.0-bbc40/nova/local/lib/python2.7/site-packages/nova/compute/manager.py",
 line 314, in decorated_function
  2015-04-30 19:45:42.602 23513 TRACE oslo.messaging.rpc.dispatcher return 
function(self, context, *args, **kwargs)
  2015-04-30 19:45:42.602 23513 TRACE oslo.messaging.rpc.dispatcher   File 

[Yahoo-eng-team] [Bug 1447344] Re: DHCP agent: metadata network broken for DVR

2015-11-19 Thread Alan Pevec
** Changed in: neutron/juno
   Status: Fix Committed => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1447344

Title:
  DHCP agent: metadata network broken for DVR

Status in neutron:
  Fix Released
Status in neutron juno series:
  Fix Released
Status in neutron kilo series:
  New

Bug description:
  When the 'metadata network' feature is enabled, the DHCP at [1] will not 
spawn a metadata proxy for DVR routers.
  This should be fixed.

  [1]
  
http://git.openstack.org/cgit/openstack/neutron/tree/neutron/agent/dhcp/agent.py#n357

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1447344/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1450682] Re: nova unit tests failing with pbr 0.11

2015-11-19 Thread Alan Pevec
** Changed in: nova/juno
   Status: Fix Committed => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1450682

Title:
  nova unit tests failing with pbr 0.11

Status in OpenStack Compute (nova):
  Fix Released
Status in OpenStack Compute (nova) icehouse series:
  Fix Released
Status in OpenStack Compute (nova) juno series:
  Fix Released
Status in OpenStack Compute (nova) kilo series:
  Fix Released

Bug description:
  test_version_string_with_package_is_good breaks with the release of
  pbr 0.11

  
nova.tests.unit.test_versions.VersionTestCase.test_version_string_with_package_is_good
  
--

  Captured traceback:
  ~~~
  Traceback (most recent call last):
File "nova/tests/unit/test_versions.py", line 33, in 
test_version_string_with_package_is_good
  version.version_string_with_package())
File 
"/home/jenkins/workspace/gate-nova-python27/.tox/py27/local/lib/python2.7/site-packages/testtools/testcase.py",
 line 350, in assertEqual
  self.assertThat(observed, matcher, message)
File 
"/home/jenkins/workspace/gate-nova-python27/.tox/py27/local/lib/python2.7/site-packages/testtools/testcase.py",
 line 435, in assertThat
  raise mismatch_error
  testtools.matchers._impl.MismatchError: '5.5.5.5-g9ec3421' != 
'2015.2.0-g9ec3421'

  
  
http://logs.openstack.org/27/169827/8/check/gate-nova-python27/2009c78/console.html

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1450682/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1451931] Re: ironic password config not marked as secret

2015-11-19 Thread Alan Pevec
** Changed in: nova/juno
   Status: Fix Committed => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1451931

Title:
  ironic password config not marked as secret

Status in OpenStack Compute (nova):
  Fix Released
Status in OpenStack Compute (nova) juno series:
  Fix Released
Status in OpenStack Compute (nova) kilo series:
  Fix Released
Status in OpenStack Security Advisory:
  Won't Fix
Status in OpenStack Security Notes:
  Fix Released

Bug description:
  The ironic config options for the password and auth token are not
  marked as secret, so the values will get logged during startup in debug
  mode.
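
  For reference, this is how config options are typically marked with
  oslo.config so their values are masked in debug output (a minimal sketch
  with illustrative option names, not the exact Ironic driver options):

    from oslo_config import cfg

    opts = [
        # secret=True makes oslo.config log the value as '****' when the
        # effective configuration is dumped at debug level.
        cfg.StrOpt('admin_password', secret=True,
                   help='Ironic admin password (illustrative name).'),
        cfg.StrOpt('admin_auth_token', secret=True,
                   help='Pre-generated auth token (illustrative name).'),
    ]

    cfg.CONF.register_opts(opts, group='ironic')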

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1451931/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1453855] Re: HA routers may fail to send out GARPs when node boots

2015-11-19 Thread Alan Pevec
** Changed in: neutron/juno
   Status: Fix Committed => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1453855

Title:
  HA routers may fail to send out GARPs when node boots

Status in neutron:
  Fix Released
Status in neutron juno series:
  Fix Released

Bug description:
  When a node boots, it starts the OVS and L3 agents. As an example, in
  RDO systemd unit files, these services have no dependency. This means
  that the L3 agent can start before the OVS agent. It can start
  configuring routers before the OVS agent finished syncing with the
  server and starts processing ovsdb monitor updates. The result is that
  when the L3 agent finishes configuring an HA router, it starts up
  keepalived, which under certain conditions will transition to master
  and send our gratuitous ARPs before the OVS agent finishes plugging
  its ports. This means that the gratuitous ARP will be lost, but with
  the router acting as master, this can cause black holes.

  Possible solutions:
  * Introduce systemd dependencies, but this has its set of intricacies and 
it's hard to solve the above problem comprehensively just with this approach.
  * Regardless, it's a good idea to use new keepalived flags:
  garp_master_repeat  how often the gratuitous ARP after MASTER state 
transition should be repeated?
  garp_master_refresh  Periodic delay in seconds sending gratuitous 
ARP while in MASTER state

  These will be configurable and have sane defaults.
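
  As an illustration only (the interface name and values below are made up,
  and the actual fix chooses its own defaults), the resulting keepalived
  VRRP stanza would look roughly like:

    vrrp_instance VR_1 {
        state BACKUP
        interface ha-1234abcd
        virtual_router_id 1
        priority 50
        # re-send gratuitous ARPs this many times after becoming MASTER
        garp_master_repeat 5
        # and keep refreshing them every N seconds while MASTER
        garp_master_refresh 30
    }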

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1453855/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1456822] Re: AgentNotFoundByTypeHost exception logged when L3-agent starts up

2015-11-19 Thread Alan Pevec
** Changed in: neutron/juno
   Status: Fix Committed => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1456822

Title:
  AgentNotFoundByTypeHost exception logged when L3-agent starts up

Status in neutron:
  Fix Released
Status in neutron juno series:
  Fix Released
Status in neutron kilo series:
  New

Bug description:
  On my single-node devstack setup running the latest neutron code,
  there is one AgentNotFoundByTypeHost exception found for the L3-agent.
  However, the AgentNotFoundByTypeHost exception is not logged for the
  DHCP, OVS, or metadata agents.  This fact would point to a problem
  with how the L3-agent is starting up.

  Exception found in the L3-agent log:

  2015-05-19 11:27:57.490 23948 DEBUG oslo_messaging._drivers.amqpdriver [-] 
MSG_ID is 1d0f3e0a8a6744c9a9fc43eb3fdc5153 _send 
/usr/local/lib/python2.7/dist-packages/oslo_messaging/_drivers/amqpdriver.py:311^M
  2015-05-19 11:27:57.550 23948 ERROR neutron.agent.l3.agent [-] Failed 
synchronizing routers due to RPC error^M
  2015-05-19 11:27:57.550 23948 TRACE neutron.agent.l3.agent Traceback (most 
recent call last):^M
  2015-05-19 11:27:57.550 23948 TRACE neutron.agent.l3.agent   File 
"/opt/stack/neutron/neutron/agent/l3/agent.py", line 517, in 
fetch_and_sync_all_routers^M
  2015-05-19 11:27:57.550 23948 TRACE neutron.agent.l3.agent routers = 
self.plugin_rpc.get_routers(context)^M
  2015-05-19 11:27:57.550 23948 TRACE neutron.agent.l3.agent   File 
"/opt/stack/neutron/neutron/agent/l3/agent.py", line 91, in get_routers^M
  2015-05-19 11:27:57.550 23948 TRACE neutron.agent.l3.agent 
router_ids=router_ids)^M
  2015-05-19 11:27:57.550 23948 TRACE neutron.agent.l3.agent   File 
"/usr/local/lib/python2.7/dist-packages/oslo_messaging/rpc/client.py", line 
156, in call^M
  2015-05-19 11:27:57.550 23948 TRACE neutron.agent.l3.agent 
retry=self.retry)^M
  2015-05-19 11:27:57.550 23948 TRACE neutron.agent.l3.agent   File 
"/usr/local/lib/python2.7/dist-packages/oslo_messaging/transport.py", line 90, 
in _send^M
  2015-05-19 11:27:57.550 23948 TRACE neutron.agent.l3.agent 
timeout=timeout, retry=retry)^M
  2015-05-19 11:27:57.550 23948 TRACE neutron.agent.l3.agent   File 
"/usr/local/lib/python2.7/dist-packages/oslo_messaging/_drivers/amqpdriver.py", 
line 350, in send^M
  2015-05-19 11:27:57.550 23948 TRACE neutron.agent.l3.agent retry=retry)^M
  2015-05-19 11:27:57.550 23948 TRACE neutron.agent.l3.agent   File 
"/usr/local/lib/python2.7/dist-packages/oslo_messaging/_drivers/amqpdriver.py", 
line 341, in _send^M
  2015-05-19 11:27:57.550 23948 TRACE neutron.agent.l3.agent raise result^M
  2015-05-19 11:27:57.550 23948 TRACE neutron.agent.l3.agent RemoteError: 
Remote error: AgentNotFoundByTypeHost Agent with agent_type=L3 agent and 
host=DVR-Ctrl2 could not be found^M
  2015-05-19 11:27:57.550 23948 TRACE neutron.agent.l3.agent [u'Traceback (most 
recent call last):\n', u'  File 
"/usr/local/lib/python2.7/dist-packages/oslo_messaging/rpc/dispatcher.py", line 
142, in _dispatch_and_reply\nexecutor_callback))\n', u'  File 
"/usr/local/lib/python2.7/dist-packages/oslo_messaging/rpc/dispatcher.py", line 
186, in _dispatch\nexecutor_callback)\n', u'  File 
"/usr/local/lib/python2.7/dist-packages/oslo_messaging/rpc/dispatcher.py", line 
130, in _do_dispatch\nresult = func(ctxt, **new_args)\n', u'  File 
"/opt/stack/neutron/neutron/api/rpc/handlers/l3_rpc.py", line 81, in 
sync_routers\ncontext, host, router_ids))\n', u'  File 
"/opt/stack/neutron/neutron/db/l3_agentschedulers_db.py", line 290, in 
list_active_sync_routers_on_active_l3_agent\ncontext, 
constants.AGENT_TYPE_L3, host)\n', u'  File 
"/opt/stack/neutron/neutron/db/agents_db.py", line 197, in 
_get_agent_by_type_and_host\nhost=host)\n', u'AgentNotFoundByTypeHost: 
Agent with agent_ty
 pe=L3 agent and host=DVR-Ctrl2 could not be found\n'].^M

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1456822/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1456823] Re: address pair rules not matched in iptables counter-preservation code

2015-11-19 Thread Alan Pevec
** Changed in: neutron/juno
   Status: Fix Committed => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1456823

Title:
  address pair rules not matched in iptables counter-preservation code

Status in neutron:
  Fix Released
Status in neutron juno series:
  Fix Released
Status in neutron kilo series:
  New

Bug description:
  There are a couple of issues with the way our iptables rules are
  formed that prevent them from being matched in the code that looks at
  existing rules to preserve counters. So the counters end up getting
  wiped out.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1456823/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1454434] Re: NoNetworkFoundInMaximumAllowedAttempts during concurrent network creation

2015-11-19 Thread Alan Pevec
** Changed in: neutron/juno
   Status: Fix Committed => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1454434

Title:
  NoNetworkFoundInMaximumAllowedAttempts during concurrent network
  creation

Status in neutron:
  Fix Released
Status in neutron juno series:
  Fix Released
Status in neutron kilo series:
  New

Bug description:
  NoNetworkFoundInMaximumAllowedAttempts  could be thrown if networks are 
created by multiple threads simultaneously.
  This is related to https://bugs.launchpad.net/bugs/1382064
  The DB logic currently works correctly; however, the 11 attempts the code
  makes right now might not be enough in some rare, unlucky cases under
  extreme concurrency.

  We need to randomize segmentation_id selection to avoid such issues.
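
  A minimal sketch of the idea (illustrative only, not the actual ML2 type
  driver code): pick a free segmentation id at random instead of always
  starting from the same end of the range, so concurrent requests rarely
  collide:

    import random

    def pick_segmentation_id(free_ids):
        # With a deterministic choice, N concurrent transactions all race
        # for the same id and every loser has to retry; a random choice
        # spreads them over the free range, so 11 retries is plenty.
        if not free_ids:
            raise RuntimeError('no segmentation ids left')
        return random.choice(list(free_ids))

    free = range(100, 200)
    print("%d %d" % (pick_segmentation_id(free), pick_segmentation_id(free)))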

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1454434/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1457900] Re: dhcp_agents_per_network > 1 cause conflicts (NACKs) from dnsmasqs (break networks)

2015-11-19 Thread Alan Pevec
** Changed in: neutron/juno
   Status: Fix Committed => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1457900

Title:
  dhcp_agents_per_network > 1 cause conflicts (NACKs) from dnsmasqs
  (break networks)

Status in neutron:
  Fix Released
Status in neutron icehouse series:
  Fix Released
Status in neutron juno series:
  Fix Released
Status in neutron kilo series:
  New

Bug description:
  If neutron is configured to have more than one DHCP agent per network
  (option dhcp_agents_per_network=2), each dnsmasq rejects leases handed out
  by the other dnsmasqs, creating a mess and preventing instances from
  booting normally.

  Symptoms:

  Cirros (at the log):
  Sending discover...
  Sending select for 188.42.216.146...
  Received DHCP NAK
  Usage: /sbin/cirros-dhcpc 
  Sending discover...
  Sending select for 188.42.216.146...
  Received DHCP NAK
  Usage: /sbin/cirros-dhcpc 
  Sending discover...
  Sending select for 188.42.216.146...
  Received DHCP NAK

  Steps to reproduce:
  1. Set up neutron with VLANs and dhcp_agents_per_network=2 option in 
neutron.conf
  2. Set up two or more different nodes with enabled neutron-dhcp-agent
  3. Create VLAN neutron network with --enable-dhcp option
  4. Create instance with that network

  Expected behaviour:

  Instances receive an IP address via DHCP without problems or delays.

  Actual behaviour:

  Instances get stuck in network boot for a long time.
  There are complaints about NACKs in the DHCP client logs.
  There are multiple NACKs in tcpdump captures on the interfaces.

  Additional analysis: it is very complex, so I attach an example of two
  parallel tcpdumps from two DHCP namespaces in HTML format.

  
  Version: 2014.2.3

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1457900/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1459467] Re: port update multiple fixed IPs anticipating allocation fails with mac address error

2015-11-19 Thread Alan Pevec
** Changed in: neutron/juno
   Status: Fix Committed => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1459467

Title:
  port update multiple fixed IPs anticipating allocation fails with mac
  address error

Status in neutron:
  Fix Released
Status in neutron icehouse series:
  Fix Released
Status in neutron juno series:
  Fix Released
Status in neutron kilo series:
  New

Bug description:
  A port update with multiple fixed IP specifications, one giving only a
  subnet ID and one giving a fixed IP that conflicts with the address picked
  for the subnet-ID entry, results in a DBDuplicateEntry error, which is
  presented to the user as a MAC address error.

  ~$ neutron port-update 7521786b-6c7f-4385-b5e1-fb9565552696 --fixed-ips 
type=dict 
{subnet_id=ca9dd2f0-cbaf-4997-9f59-dee9a39f6a7d,ip_address=42.42.42.42}
  Unable to complete operation for network 
0897a051-bf56-43c1-9083-3ac38ffef84e. The mac address None is in use.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1459467/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1461777] Re: Random NUMA cell selection can leave NUMA cells unused

2015-11-19 Thread Alan Pevec
** Changed in: nova/juno
   Status: Fix Committed => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1461777

Title:
  Random NUMA cell selection can leave NUMA cells unused

Status in OpenStack Compute (nova):
  Fix Released
Status in OpenStack Compute (nova) juno series:
  Fix Released
Status in OpenStack Compute (nova) kilo series:
  Fix Released

Bug description:
  NUMA cell overcommit can leave NUMA cells unused

  When no NUMA configuration is defined for the guest (no flavor extra specs),
  nova identifies the NUMA topology of the host and tries to match the cpu 
  placement to a NUMA cell ("cpuset"). 

  The cpuset is selected randomly.
  pin_cpuset = random.choice(viable_cells_cpus) #nova/virt/libvirt/driver.py

  However, this can lead to NUMA cells not being used.
  This is particularly noticeable when the flavor has the same number of vcpus
  as a host NUMA cell and the host CPUs are not overcommitted
  (cpu_allocation_ratio = 1).

  ###
  Particular use case:

  Compute nodes with the NUMA topology:
  

  No CPU overcommit: cpu_allocation_ratio = 1
  Boot instances using a flavor with 8 vcpus. 
  (No NUMA topology defined for the guest in the flavor)

  In this particular case the host can hold 2 instances (no cpu overcommit).
  Both instances can be randomly allocated the same cpuset from the 2
  options:
  8
  8

  As consequence half of the host CPUs are not used.

  
  ###
  How to reproduce:

  Using: nova 2014.2.2
  (not tested in trunk however the code path looks similar)

  1. set cpu_allocation_ratio = 1
  2. Identify the NUMA topology of the compute node
  3. Using a flavor with a number of vcpus that matches a NUMA cell in the
  compute node, boot instances until the compute node is full.
  4. Check the cpu placement "cpuset" used by each instance.

  Notes: 
  - at this point instances can use the same "cpuset" leaving NUMA cells unused.
  - the selection of the cpuset is random. Different tries may be needed.
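
  A standalone sketch of the probability involved (not nova code): with two
  equally viable cells and two instances placed independently at random,
  about half of the runs leave one cell completely unused:

    import random

    cells = ['0-7', '8-15']        # two host NUMA cells, 8 pCPUs each
    trials = 10000
    same = sum(1 for _ in range(trials)
               if random.choice(cells) == random.choice(cells))
    print("both instances on the same cell: %.0f%%" % (100.0 * same / trials))
    # prints roughly 50%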

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1461777/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1462973] Re: Network gateway flat connection fail because of None tenant_id

2015-11-19 Thread Alan Pevec
** Changed in: neutron/juno
   Status: Fix Committed => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1462973

Title:
  Network gateway flat connection fail because of None tenant_id

Status in neutron:
  Invalid
Status in neutron juno series:
  Fix Released
Status in vmware-nsx:
  Fix Committed

Bug description:
  The NSX-mh backend does not accept "None" values for tags.
  Tags are applied to all NSX-mh ports. In particular there is always a tag 
with the neutron tenant_id (q_tenant_id)

  In admin context, _get_tenant_id_for_create now returns the tenant_id of the
  resource being created, if there is one; otherwise it still returns
  context.tenant_id.
  The default L2 gateway unfortunately does not have a tenant_id, but has the 
tenant_id attribute in its data structure.
  This means that _get_tenant_id_for_create will return None, and NSX-mh will 
reject the request.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1462973/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1460220] Re: ipset functional tests assume system capability

2015-11-19 Thread Alan Pevec
** Changed in: neutron/juno
   Status: Fix Committed => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1460220

Title:
  ipset functional tests assume system capability

Status in neutron:
  Fix Released
Status in neutron juno series:
  Fix Released
Status in neutron kilo series:
  New

Bug description:
  Production code uses ipset in the root namespace, but functional
  testing uses them in non-root namespaces. As it turns out, that
  functionality requires versions of the kernel and ipset not found in
  all versions of all distributions.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1460220/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1470443] Re: ICMP rules not getting deleted on the hyperv network adapter extended acl set

2015-11-19 Thread Alan Pevec
** Changed in: neutron/juno
   Status: Fix Committed => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1470443

Title:
  ICMP rules not getting deleted on the hyperv network adapter extended
  acl set

Status in networking-hyperv:
  Fix Committed
Status in neutron:
  Invalid
Status in neutron juno series:
  Fix Released

Bug description:
  1. Create a security group with an ICMP rule
  2. Spawn a VM with the above security group rule
  3. Ping works from the DHCP namespace
  4. Delete the rule from the security group, which will trigger a port-update
  5. However, the rule is still present on the compute node for the VM even
  after the port-update

  Root cause: the ICMP rule is created with the local port empty ('').
  However, during remove_security_rule the rule is matched for port "ANY",
  which does not match any rule, hence the rule is not deleted.
  Solution: introduce a check that matches an empty local port when deleting
  an ICMP rule.
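
  A simplified sketch of the matching problem (names and structures are
  illustrative, not the actual networking-hyperv code):

    def rule_matches(existing_rule, local_port, protocol):
        # ICMP rules are stored with an empty local port, so matching only
        # against 'ANY' never finds them and the stale rule survives the
        # port-update; accepting '' for ICMP lets the deletion succeed.
        if protocol == 'icmp':
            return existing_rule['local_port'] in ('', 'ANY')
        return existing_rule['local_port'] == local_port

    icmp_rule = {'local_port': '', 'protocol': 'icmp'}
    print(rule_matches(icmp_rule, 'ANY', 'icmp'))   # True -> rule removed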

To manage notifications about this bug go to:
https://bugs.launchpad.net/networking-hyperv/+bug/1470443/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1463363] Re: NSX-mh: Decimal RXTX factor not honoured

2015-11-19 Thread Alan Pevec
** Changed in: neutron/juno
   Status: Fix Committed => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1463363

Title:
  NSX-mh: Decimal RXTX factor not honoured

Status in neutron:
  In Progress
Status in neutron juno series:
  Fix Released
Status in neutron kilo series:
  New
Status in vmware-nsx:
  Fix Committed

Bug description:
  A decimal RXTX factor, which is allowed by nova flavors, is not
  honoured by the NSX-mh plugin, but simply truncated to an integer.

  To reproduce:

  * Create a neutron queue
  * Create a neutron net / subnet using the queue
  * Create a new flavor which uses an RXTX factor other than an integer value
  * Boot a VM on the net above using the flavor
  * View the NSX queue for the VM's VIF -- notice it does not have the RXTX 
factor applied correctly (for instance if it's 1.2 it does not multiply it at 
all, if it's 3.4 it applies a RXTX factor of 3)
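
  The effect of the truncation, shown as a standalone calculation (the base
  bandwidth value is just an example):

    base_kbps = 1000                     # bandwidth configured on the queue
    for rxtx_factor in (1.2, 3.4):
        applied = int(rxtx_factor) * base_kbps     # what the plugin does
        expected = int(rxtx_factor * base_kbps)    # what the flavor asks for
        print("factor %.1f: applied %d kbps, expected %d kbps"
              % (rxtx_factor, applied, expected))
    # factor 1.2: applied 1000 kbps, expected 1200 kbps
    # factor 3.4: applied 3000 kbps, expected 3400 kbps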

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1463363/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1471050] Re: VLANs are not configured on VM migration

2015-11-19 Thread Alan Pevec
** Changed in: neutron/juno
   Status: Fix Committed => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1471050

Title:
  VLANs are not configured on VM migration

Status in networking-arista:
  New
Status in neutron:
  Invalid
Status in neutron juno series:
  Fix Released

Bug description:
  Whenever a VM migrates from one compute node to the other, the VLAN is
  not provisioned on the new compute node. The correct behaviour should
  be to remove the VLAN on the interface on the old switch interface and
  provision the VLAN on the new switch interface.

To manage notifications about this bug go to:
https://bugs.launchpad.net/networking-arista/+bug/1471050/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1473556] Re: Error log is generated when API operation is PolicyNotAuthorized and returns 404

2015-11-19 Thread Alan Pevec
** Changed in: neutron/juno
   Status: Fix Committed => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1473556

Title:
  Error log is generated when API operation is PolicyNotAuthorized and
  returns 404

Status in neutron:
  Fix Released
Status in neutron juno series:
  Fix Released

Bug description:
  The neutron.policy module can raise webob.exc.HTTPNotFound when
  PolicyNotAuthorized is raised. In this case, neutron.api.resource
  logs the event at ERROR level. It should be INFO level, as it is
  triggered by user API requests.

  One of the easiest ways to reproduce this bug is as follows:

  (1) create a shared network by admin user
  (2) try to delete the shared network by regular user

  (A regular user can know the ID of the shared network, so the user can
  request to delete the shared network.)

  As a result we get the following log, which is confusing from a
  log-monitoring point of view.

  2015-07-11 05:28:33.914 DEBUG neutron.policy 
[req-5aef6df6-1fb7-4187-9980-4e41fc648ad7 demo 
1e942c3c210b42ff8c45f42962da33b4] Enforcing rules: ['delete_network', 
'delete_network:provider:physical_network
  ', 'delete_network:shared', 'delete_network:provider:network_type', 
'delete_network:provider:segmentation_id'] from (pid=1439) log_rule_list 
/opt/stack/neutron/neutron/policy.py:319
  2015-07-11 05:28:33.914 DEBUG neutron.policy 
[req-5aef6df6-1fb7-4187-9980-4e41fc648ad7 demo 
1e942c3c210b42ff8c45f42962da33b4] Failed policy check for 'delete_network' from 
(pid=1439) enforce /opt/stack/n
  eutron/neutron/policy.py:393
  2015-07-11 05:28:33.914 ERROR neutron.api.v2.resource 
[req-5aef6df6-1fb7-4187-9980-4e41fc648ad7 demo 
1e942c3c210b42ff8c45f42962da33b4] delete failed
  2015-07-11 05:28:33.914 TRACE neutron.api.v2.resource Traceback (most recent 
call last):
  2015-07-11 05:28:33.914 TRACE neutron.api.v2.resource   File 
"/opt/stack/neutron/neutron/api/v2/resource.py", line 83, in resource
  2015-07-11 05:28:33.914 TRACE neutron.api.v2.resource result = 
method(request=request, **args)
  2015-07-11 05:28:33.914 TRACE neutron.api.v2.resource   File 
"/usr/local/lib/python2.7/dist-packages/oslo_db/api.py", line 146, in wrapper
  2015-07-11 05:28:33.914 TRACE neutron.api.v2.resource ectxt.value = 
e.inner_exc
  2015-07-11 05:28:33.914 TRACE neutron.api.v2.resource   File 
"/usr/local/lib/python2.7/dist-packages/oslo_utils/excutils.py", line 119, in 
__exit__
  2015-07-11 05:28:33.914 TRACE neutron.api.v2.resource 
six.reraise(self.type_, self.value, self.tb)
  2015-07-11 05:28:33.914 TRACE neutron.api.v2.resource   File 
"/usr/local/lib/python2.7/dist-packages/oslo_db/api.py", line 136, in wrapper
  2015-07-11 05:28:33.914 TRACE neutron.api.v2.resource return f(*args, 
**kwargs)
  2015-07-11 05:28:33.914 TRACE neutron.api.v2.resource   File 
"/opt/stack/neutron/neutron/api/v2/base.py", line 495, in delete
  2015-07-11 05:28:33.914 TRACE neutron.api.v2.resource raise 
webob.exc.HTTPNotFound(msg)
  2015-07-11 05:28:33.914 TRACE neutron.api.v2.resource HTTPNotFound: The 
resource could not be found.
  2015-07-11 05:28:33.914 TRACE neutron.api.v2.resource
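
  A generic sketch of the desired behaviour (not the actual
  neutron.api.v2.resource code): HTTP errors deliberately raised for the
  client are logged at INFO, while unexpected failures keep the full
  ERROR/traceback treatment:

    import logging
    import webob.exc

    LOG = logging.getLogger(__name__)

    def dispatch(handler, request):
        try:
            return handler(request)
        except webob.exc.HTTPClientError as exc:
            # 4xx raised on purpose (e.g. a policy check mapped to 404)
            LOG.info("Request failed with %s: %s", exc.status, exc.explanation)
            raise
        except Exception:
            # anything else really is a server-side failure
            LOG.exception("delete failed")
            raise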

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1473556/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1475411] Re: During post_live_migration the nova libvirt driver assumes that the destination connection info is the same as the source, which is not always true

2015-11-19 Thread Alan Pevec
** Changed in: nova/juno
   Status: Fix Committed => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1475411

Title:
  During post_live_migration the nova libvirt driver assumes that the
  destination connection info is the same as the source, which is not
  always true

Status in OpenStack Compute (nova):
  Fix Released
Status in OpenStack Compute (nova) juno series:
  Fix Released
Status in OpenStack Compute (nova) kilo series:
  Fix Released

Bug description:
  The post_live_migration step for the Nova libvirt driver currently makes
  a bad assumption about the source and destination connector
  information. The destination connection info may be different from the
  source's, which ends up causing LUNs to be left dangling on the source, as
  the BDM has overridden the connection info with that of the
  destination.

  Code section where this problem is occurring:

  
https://github.com/openstack/nova/blob/master/nova/virt/libvirt/driver.py#L6036

  At line 6038 the potentially wrong connection info will be passed to
  _disconnect_volume which then ends up not finding the proper LUNs to
  remove (and potentially removes the LUNs for a different volume
  instead).

  By adding debug logging after line 6036 and then comparing that to the
  connection info of the source host (by making a call to Cinder's
  initialize_connection API) you can see that the connection info does
  not match:

  http://paste.openstack.org/show/TjBHyPhidRuLlrxuGktz/

  Version of nova being used:

  commit 35375133398d862a61334783c1e7a90b95f34cdb
  Merge: 83623dd b2c5542
  Author: Jenkins 
  Date:   Thu Jul 16 02:01:05 2015 +

  Merge "Port crypto to Python 3"

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1475411/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1474785] Re: NSX-mh: agentless modes are available only for 4.1

2015-11-19 Thread Alan Pevec
** Changed in: neutron/juno
   Status: Fix Committed => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1474785

Title:
  NSX-mh: agentless modes are available only for 4.1

Status in neutron:
  Invalid
Status in neutron juno series:
  Fix Released
Status in vmware-nsx:
  Fix Committed

Bug description:
  DHCP and Metadata agentless modes are unfortunately available only in
  NSX-mh 4.1

  The version requirements for enabling the agentless mode should be
  amended

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1474785/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1483920] Re: NSX-mh: honour distributed_router config flag

2015-11-19 Thread Alan Pevec
** Changed in: neutron/juno
   Status: Fix Committed => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1483920

Title:
  NSX-mh: honour distributed_router config flag

Status in neutron:
  Invalid
Status in neutron juno series:
  Fix Released
Status in vmware-nsx:
  Fix Committed

Bug description:
  The VMware NSX plugin is not honoring the "router_distributed = True"
  flag when set in /etc/neutron.conf.  If the router_distributed
  parameter is set to "True", this should result in all routers that are
  created by tenants to default to distributed routers.  For example,
  the below CLI command should create a distributed logical router, but
  instead it creates a non-distributed router.

  neutron router-create --tenant-id $TENANT tenant-router

  In order to create a distributed router the "--distributed True"
  option must be passed, as shown below.

  neutron router-create --tenant-id $TENANT csinfra-router-test
  --distributed True

  This happens because the NSX-mh plugin relies on the default value
  implemented in the backend rather than in the neutron configuration
  and should be changed to ensure this plugin behaves like the reference
  implementation

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1483920/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1482699] Re: glance requests from nova fail if there are too many endpoints in the service catalog

2015-11-19 Thread Alan Pevec
** Changed in: nova/juno
   Status: Fix Committed => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1482699

Title:
  glance requests from nova fail if there are too many endpoints in the
  service catalog

Status in OpenStack Compute (nova):
  Fix Released
Status in OpenStack Compute (nova) juno series:
  Fix Released
Status in OpenStack Compute (nova) kilo series:
  Fix Released

Bug description:
  Nova sends the entire serialized service catalog in the http header to
  glance requests:

  https://github.com/openstack/nova/blob/icehouse-
  eol/nova/image/glance.py#L136

  If you have a lot of endpoints in your service catalog this can make
  glance fail with "400 Header Line TooLong".

  Per bknudson: "Any service using the auth_token middleware has no use
  for the x-service-catalog header. All that auth_token middleware uses
  is x-auth-token. The auth_token middleware will actually strip the x
  -service-catalog from the request before it sends the request on to
  the rest of the pipeline, so the application will never see it."

  If glance needs the service catalog it will get it from keystone when
  it auths the tokens, so nova shouldn't be sending this.

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1482699/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1484738] Re: keyerror when refreshing instance security groups

2015-11-19 Thread Alan Pevec
** Changed in: nova/juno
   Status: Fix Committed => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1484738

Title:
  keyerror when refreshing instance security groups

Status in OpenStack Compute (nova):
  Fix Released
Status in OpenStack Compute (nova) juno series:
  Fix Released
Status in OpenStack Compute (nova) kilo series:
  Fix Released

Bug description:
  On a clean kilo install using source security groups I am seeing the
  following trace on boot and delete


  a2413f7] Deallocating network for instance _deallocate_network 
/usr/lib/python2.7/dist-packages/nova/compute/manager.py:2098
  2015-08-14 09:46:06.688 11618 ERROR oslo_messaging.rpc.dispatcher 
[req-b8f44d34-96b2-4e40-ac22-15ccc6e44e59 - - - - -] Exception during message 
handling: 'metadata'
  2015-08-14 09:46:06.688 11618 TRACE oslo_messaging.rpc.dispatcher Traceback 
(most recent call last):
  2015-08-14 09:46:06.688 11618 TRACE oslo_messaging.rpc.dispatcher   File 
"/usr/lib/python2.7/dist-packages/oslo_messaging/rpc/dispatcher.py", line 142, 
in _dispatch_and_reply
  2015-08-14 09:46:06.688 11618 TRACE oslo_messaging.rpc.dispatcher 
executor_callback))
  2015-08-14 09:46:06.688 11618 TRACE oslo_messaging.rpc.dispatcher   File 
"/usr/lib/python2.7/dist-packages/oslo_messaging/rpc/dispatcher.py", line 186, 
in _dispatch
  2015-08-14 09:46:06.688 11618 TRACE oslo_messaging.rpc.dispatcher 
executor_callback)
  2015-08-14 09:46:06.688 11618 TRACE oslo_messaging.rpc.dispatcher   File 
"/usr/lib/python2.7/dist-packages/oslo_messaging/rpc/dispatcher.py", line 130, 
in _do_dispatch
  2015-08-14 09:46:06.688 11618 TRACE oslo_messaging.rpc.dispatcher result 
= func(ctxt, **new_args)
  2015-08-14 09:46:06.688 11618 TRACE oslo_messaging.rpc.dispatcher   File 
"/usr/lib/python2.7/dist-packages/nova/compute/manager.py", line 6772, in 
refresh_instance_security_rules
  2015-08-14 09:46:06.688 11618 TRACE oslo_messaging.rpc.dispatcher return 
self.manager.refresh_instance_security_rules(ctxt, instance)
  2015-08-14 09:46:06.688 11618 TRACE oslo_messaging.rpc.dispatcher   File 
"/usr/lib/python2.7/dist-packages/nova/compute/manager.py", line 434, in 
decorated_function
  2015-08-14 09:46:06.688 11618 TRACE oslo_messaging.rpc.dispatcher args = 
(_load_instance(args[0]),) + args[1:]
  2015-08-14 09:46:06.688 11618 TRACE oslo_messaging.rpc.dispatcher   File 
"/usr/lib/python2.7/dist-packages/nova/compute/manager.py", line 425, in 
_load_instance
  2015-08-14 09:46:06.688 11618 TRACE oslo_messaging.rpc.dispatcher 
expected_attrs=metas)
  2015-08-14 09:46:06.688 11618 TRACE oslo_messaging.rpc.dispatcher   File 
"/usr/lib/python2.7/dist-packages/nova/objects/instance.py", line 506, in 
_from_db_object
  2015-08-14 09:46:06.688 11618 TRACE oslo_messaging.rpc.dispatcher 
instance['metadata'] = utils.instance_meta(db_inst)
  2015-08-14 09:46:06.688 11618 TRACE oslo_messaging.rpc.dispatcher   File 
"/usr/lib/python2.7/dist-packages/nova/utils.py", line 817, in instance_meta
  2015-08-14 09:46:06.688 11618 TRACE oslo_messaging.rpc.dispatcher if 
isinstance(instance['metadata'], dict):
  2015-08-14 09:46:06.688 11618 TRACE oslo_messaging.rpc.dispatcher KeyError: 
'metadata'

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1484738/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1485883] Re: NSX-mh: bad retry behaviour on controller connection issues

2015-11-19 Thread Alan Pevec
** Changed in: neutron/juno
   Status: Fix Committed => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1485883

Title:
  NSX-mh: bad retry behaviour on controller connection issues

Status in neutron:
  Invalid
Status in neutron juno series:
  Fix Released
Status in vmware-nsx:
  Fix Committed

Bug description:
  If the connection to a NSX-mh controller fails - for instance because
  there is a network issue or the controller is unreachable - the
  neutron plugin keeps retrying the connection to the same controller
  until it times out, whereas a  correct behaviour would be to try to
  connect to the other controllers in the cluster.

  The issue can be reproduced with the following steps:
  1. Three Controllers in the cluster 10.25.56.223,10.25.101.133,10.25.56.222
  2. Neutron net-create dummy-1 from openstack cli
  3. Vnc into controller-1, ifconfig eth0 down
  4. Do neutron net-create dummy-2 from openstack cli

  The API requests were forwarded to 10.25.56.223 originally. eth0
  interface was shutdown on 10.25.56.223. But the requests continued to
  get forwarded to the same Controllers and timed out.
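
  A generic sketch of the expected behaviour (illustrative only, not the
  plugin's API client): rotate to the next controller in the cluster when a
  connection attempt fails, instead of retrying the same endpoint until the
  global timeout:

    import itertools
    import socket

    def request_with_failover(controllers, send, attempts_per_controller=2):
        # 'send' is any callable taking a controller address and raising
        # socket.error / IOError on connection problems (illustrative).
        ring = itertools.cycle(controllers)
        last_error = None
        for _ in range(len(controllers) * attempts_per_controller):
            controller = next(ring)
            try:
                return send(controller)
            except (socket.error, IOError) as exc:
                last_error = exc        # move on to the next controller
        raise last_error

    # e.g. request_with_failover(['10.25.56.223', '10.25.101.133',
    #                             '10.25.56.222'], my_http_call)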

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1485883/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1328546] Re: Race condition when hard rebooting instance

2015-11-19 Thread Alan Pevec
** Changed in: neutron/juno
   Status: Fix Committed => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1328546

Title:
  Race condition when hard rebooting instance

Status in neutron:
  Fix Released
Status in neutron juno series:
  Fix Released

Bug description:
  Condition for this to happen:
  ==

  1. Agent: neutron-linuxbridge-agent.
  2. Only 1 instance is running on the hypervisor that belongs to this network.
  3. Timing, it's a race condition after all ;-)

  Observed behavior:
  

  After a hard reboot the instance ends up in ERROR state and nova-compute
  logs an error saying:

  Cannot get interface MTU on 'brqf9d0e8cf-bd': No such device

  What happens:
  ===

  When nova does a hard reboot, the instance is first destroyed, which
  implies that the tap device is deleted from the linux bridge (resulting
  in an empty bridge because of condition 2 above), and then re-created
  afterwards. In between, neutron-linuxbridge-agent may clean up this
  empty bridge as part of its remove_empty_bridges()[1]; for this
  error to happen, neutron-linuxbridge-agent must do that after
  plug_vifs()[2] and before domain.createWithFlags() finishes.

  [1]: 
https://github.com/openstack/neutron/blob/stable/icehouse/neutron/plugins/linuxbridge/agent/linuxbridge_neutron_agent.py#L449.
  [2]: 
https://github.com/openstack/nova/blob/stable/icehouse/nova/virt/libvirt/driver.py#L3648-3656

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1328546/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1263665] Re: Number of GET requests grows exponentially when multiple rows are being updated in the table

2015-11-19 Thread Alan Pevec
** Changed in: horizon/juno
   Status: Fix Committed => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1263665

Title:
  Number of GET requests grows exponentially when multiple rows are
  being updated in the table

Status in OpenStack Dashboard (Horizon):
  Fix Released
Status in OpenStack Dashboard (Horizon) icehouse series:
  Fix Released
Status in OpenStack Dashboard (Horizon) juno series:
  Fix Released
Status in OpenStack Dashboard (Horizon) kilo series:
  Fix Released

Bug description:
  1. In the Launch Instance dialog, set the number of instances to 10.
  2. Create the 10 instances.
  3. While the instances are being created and the table rows are being
  updated, the number of row update requests grows exponentially and a queue
  of pending requests still exists after all rows have been updated.

  There is a request type:
  Request 
URL:http://donkey017/project/instances/?action=row_update&table=instances&obj_id=7c4eaf35-ebc0-4ea3-a702-7554c8c36cf2
  Request Method:GET

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1263665/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1274034] Re: Neutron firewall anti-spoofing does not prevent ARP poisoning

2015-11-19 Thread Alan Pevec
** Changed in: neutron/juno
   Status: Fix Committed => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1274034

Title:
  Neutron firewall anti-spoofing does not prevent ARP poisoning

Status in neutron:
  Fix Released
Status in neutron juno series:
  Fix Released
Status in neutron kilo series:
  Fix Released
Status in OpenStack Security Advisory:
  Invalid
Status in OpenStack Security Notes:
  Fix Released

Bug description:
  The neutron firewall driver 'iptables_firewall' does not prevent ARP cache
  poisoning.
  When anti-spoofing rules are handled by Nova, a list of rules was added 
through the libvirt network filter feature:
  - no-mac-spoofing
  - no-ip-spoofing
  - no-arp-spoofing
  - nova-no-nd-reflection
  - allow-dhcp-server

  Actually, the neutron firewall driver 'iptables_firewall' handles only
  MAC and IP anti-spoofing rules.

  This is a security vulnerability, especially on shared networks.

  Reproduce an ARP cache poisoning and man in the middle:
  - Create a private network/subnet 10.0.0.0/24
  - Start 2 VM attached to that private network (VM1: IP 10.0.0.3, VM2: 
10.0.0.4)
  - Log on VM1 and install ettercap [1]
  - Launch command: 'ettercap -T -w dump -M ARP /10.0.0.4/ // output:'
  - Log on to VM2 as well (with the VNC/spice console) and ping google.fr => ping is ok
  - Go back to VM1 and see VM2's ping to google.fr going through VM1 instead of
  being sent directly to the network gateway; VM1 then forwards it to the gw.
  The ICMP capture looks something like [2]
  - Go back to VM2 and check the ARP table => the MAC address associated to the 
GW is the MAC address of VM1

  [1] http://ettercap.github.io/ettercap/
  [2] http://paste.openstack.org/show/62112/
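
  For reference, the kind of rule that closes this gap, shown as a standalone
  ebtables example (the interface name, IP and MAC below are illustrative;
  the chain layout used by the eventual fix differs):

    # drop ARP packets from the VM port whose advertised IP is not its fixed IP
    ebtables -A FORWARD -i tap-vm1 -p ARP --arp-ip-src ! 10.0.0.3 -j DROP
    # drop ARP packets that advertise a MAC other than the port's own
    ebtables -A FORWARD -i tap-vm1 -p ARP --arp-mac-src ! fa:16:3e:11:22:33 -j DROP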

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1274034/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1361211] Re: Hyper-V agent does not add new VLAN ids to the external port's trunked list on Hyper-V 2008 R2

2015-11-19 Thread Alan Pevec
** Changed in: neutron/juno
   Status: Fix Committed => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1361211

Title:
  Hyper-V agent does not add new VLAN ids to the external port's trunked
  list on Hyper-V 2008 R2

Status in networking-hyperv:
  Fix Released
Status in neutron:
  Invalid
Status in neutron juno series:
  Fix Released

Bug description:
  This issue affects Hyper-V 2008 R2 and does not affect Hyper-V 2012
  and above.

  The Hyper-V agent is correctly setting the VLAN ID and access mode
  settings on the vmswitch ports associated with a VM, but not on the
  trunked list associated with an external port. This is a required
  configuration.

  A workaround consists in setting the external port trunked list to
  contain all possible VLAN ids expected to be used in neutron's network
  configuration as provided by the following script:

  https://github.com/cloudbase/devstack-hyperv-
  incubator/blob/master/trunked_vlans_workaround_2008r2.ps1

To manage notifications about this bug go to:
https://bugs.launchpad.net/networking-hyperv/+bug/1361211/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1361360] Re: Eventlet green threads not released back to the pool leading to choking of new requests

2015-11-19 Thread Alan Pevec
** Changed in: glance/juno
   Status: Fix Committed => Fix Released

** Changed in: keystone/juno
   Status: Fix Committed => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1361360

Title:
  Eventlet green threads not released back to the pool leading to
  choking of new requests

Status in Cinder:
  Fix Released
Status in Cinder icehouse series:
  Fix Released
Status in Cinder juno series:
  Fix Released
Status in Glance:
  Fix Released
Status in Glance icehouse series:
  Fix Committed
Status in Glance juno series:
  Fix Released
Status in heat:
  Fix Released
Status in heat kilo series:
  Fix Released
Status in OpenStack Identity (keystone):
  Fix Released
Status in OpenStack Identity (keystone) juno series:
  Fix Released
Status in OpenStack Identity (keystone) kilo series:
  Fix Released
Status in Manila:
  Fix Released
Status in neutron:
  Fix Released
Status in neutron icehouse series:
  Fix Released
Status in neutron juno series:
  Fix Committed
Status in OpenStack Compute (nova):
  Fix Released
Status in OpenStack Compute (nova) icehouse series:
  Fix Released
Status in OpenStack Security Advisory:
  Won't Fix
Status in OpenStack Security Notes:
  Won't Fix
Status in Sahara:
  Fix Committed

Bug description:
  Currently reproduced on Juno milestone 2, but this issue should be
  reproducible in all releases since its inception.

  It is possible to choke OpenStack API controller services that use the
  wsgi+eventlet library by simply not closing the client socket
  connection. Whenever a request is received by any OpenStack API
  service, for example the nova API service, the eventlet library creates
  a green thread from the pool and starts processing the request. Even
  after the response is sent to the caller, the green thread is not
  returned to the pool until the client socket connection is closed. This
  way, a malicious user can determine the wsgi pool size configured for a
  given service, send that many requests to it and, after receiving the
  responses, simply hold the connections open indefinitely, disrupting
  the service for other tenants. Even when service providers have enabled
  the rate limiting feature, it is possible to choke the API services
  with a group (many tenants) attack.

  Following program illustrates choking of nova-api services (but this
  problem is omnipresent in all other OpenStack API Services using
  wsgi+eventlet)

  Note: I have explicitly set the wsgi_default_pool_size default value to 10
  in nova/wsgi.py in order to reproduce this problem.
  After you run the program below, you should try to invoke the API.
  

  import time

  import requests
  from multiprocessing import Process


  def request(number):
      # The port is important here: 8774 is the nova-api endpoint.
      path = 'http://127.0.0.1:8774/servers'
      try:
          response = requests.get(path)
          print "RESPONSE %s-%d" % (response.status_code, number)
          # During this sleep, check whether the client socket connection
          # is released or not on the API controller node.
          time.sleep(1000)
          print "Thread %d complete" % number
      except requests.exceptions.RequestException as ex:
          print "Exception occurred %d-%s" % (number, str(ex))


  if __name__ == '__main__':
      processes = []
      for number in range(40):
          p = Process(target=request, args=(number,))
          p.start()
          processes.append(p)
      for p in processes:
          p.join()

  


  Presently, the wsgi server allows persistent connections if keepalive is
  set to True, which is the default.
  In order to close the client socket connection explicitly after the
  response is sent and read successfully by the client, you simply have to
  set keepalive to False when you create the wsgi server.
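
  For illustration only, here is a minimal standalone sketch of an eventlet
  wsgi server with keepalive disabled; the trivial WSGI app, the address and
  the pool size are placeholders, not the actual nova code:

  import eventlet
  from eventlet import wsgi

  def app(environ, start_response):
      # Trivial WSGI application standing in for an OpenStack API service.
      start_response('200 OK', [('Content-Type', 'text/plain')])
      return ['hello\n']

  sock = eventlet.listen(('127.0.0.1', 8774))
  # keepalive=False makes eventlet close the client socket as soon as the
  # response has been written, so the green thread returns to the pool
  # immediately (max_size=10 mirrors the pool size used above).
  wsgi.server(sock, app, keepalive=False, max_size=10)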

  Additional information: By default eventlet passes "Connection: keepalive"
  if keepalive is set to True when a response is sent to the client, but it
  does not have the capability to set the timeout and max parameters.
  For example:
  Keep-Alive: timeout=10, max=5

  Note: Disabling keepalive in all the OpenStack API services that use the
  wsgi library might impact existing applications built on the assumption
  that OpenStack API services use persistent connections. Their authors
  might need to modify the applications if reconnection logic is not in
  place, and they might also see slower performance, as the HTTP connection
  will need to be re-established for every request.

To manage notifications about this bug go to:
https://bugs.launchpad.net/cinder/+bug/1361360/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team

[Yahoo-eng-team] [Bug 1394900] Re: cinder disabled, many popups about missing volume service

2015-11-19 Thread Alan Pevec
** Changed in: horizon/juno
   Status: Fix Committed => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1394900

Title:
  cinder disabled, many popups about missing volume service

Status in OpenStack Dashboard (Horizon):
  Fix Released
Status in OpenStack Dashboard (Horizon) juno series:
  Fix Released

Bug description:
  In an environment where cinder is disabled, I'm getting many error popups:
  "Error: Invalid service catalog service: volume"

  keystone catalog | grep Service
  Service: compute
  Service: network
  Service: computev3
  Service: image
  Service: metering
  Service: ec2
  Service: orchestration
  Service: identity

  This is seen in a juno environment.
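
  For illustration, a minimal sketch of how a panel could guard a volume
  call against a missing catalog entry; this assumes Horizon's
  openstack_dashboard.api.base.is_service_enabled helper and uses a
  hypothetical get_volumes wrapper, so it is not the actual fix that was
  merged:

  from openstack_dashboard.api import base
  from openstack_dashboard.api import cinder

  def get_volumes(request):
      # Skip the cinder call entirely when no 'volume' endpoint is
      # registered in the service catalog, instead of raising
      # "Invalid service catalog service: volume" in the dashboard.
      if not base.is_service_enabled(request, 'volume'):
          return []
      return cinder.volume_list(request)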

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1394900/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp

