[Yahoo-eng-team] [Bug 1375564] [NEW] unable to delete correct security rules

2014-09-29 Thread Amandeep
Public bug reported:

Description:
==

Version: Icehouse/stable

Try to add a security group rule, like:

stack@ThinkCentre:~$ nova secgroup-add-group-rule default default tcp
121 121

+-------------+-----------+---------+----------+--------------+
| IP Protocol | From Port | To Port | IP Range | Source Group |
+-------------+-----------+---------+----------+--------------+
| tcp         | 121       | 121     |          | default      |
+-------------+-----------+---------+----------+--------------+
=
Now try to delete that group rule :

stack@ThinkCentre:~$ nova secgroup-delete-group-rule default default tcp 121 121
 
ERROR (AttributeError): 'NoneType' object has no attribute 'upper'

Now try to add an invalid group rule:

stack@tcs-ThinkCentre:~$ nova secgroup-add-group-rule default default
tcp -1 -1

ERROR (BadRequest): Invalid port range -1:-1. Valid TCP ports should be between 
1-65535 (HTTP 400) (Request-ID: req-4fb01dfe-c0f6-4309-87fb-e61777e980e2)
=
Now try to add a group rule with the icmp protocol:

stack@ThinkCentre:~$ nova secgroup-add-group-rule default default icmp
-1 -1

+-------------+-----------+---------+----------+--------------+
| IP Protocol | From Port | To Port | IP Range | Source Group |
+-------------+-----------+---------+----------+--------------+
| icmp        | -1        | -1      |          | default      |
+-------------+-----------+---------+----------+--------------+

This group rule is added because the valid port range for icmp is defined as -1 to 255.
===
Now try to add one more group rule:
 
stack@ThinkCentre:~$ nova secgroup-add-group-rule default default icmp -2 -2

ERROR (BadRequest): Invalid port range -2:-2. For ICMP, the type:code must be 
valid (HTTP 400) (Request-ID: req-24432ef8-ef05-4d6c-bbfd-8c2d199340e0)
==
Now check the group rule list:

stack@ThinkCentre-M91P:~$ nova secgroup-list-rules default

+-------------+-----------+---------+----------+--------------+
| IP Protocol | From Port | To Port | IP Range | Source Group |
+-------------+-----------+---------+----------+--------------+
| tcp         | 121       | 121     |          | default      |
|             |           |         |          | default      |
|             |           |         |          | default      |
| icmp        | -1        | -1      |          | default      |
|             |           |         |          |              |
+-------------+-----------+---------+----------+--------------+
=
Actual results:
Only valid rules can be created, but we are not able to delete them.

Expected results:
There should be a way to delete them.
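
The crash appears to come from comparing against rules whose ip_protocol is
None (for example, rules defined only by a source group). A minimal sketch of
a None-safe comparison, using hypothetical names rather than the actual client
code:

    def rule_matches(rule, ip_protocol, from_port, to_port):
        # Treat a missing/None protocol as an empty string instead of
        # calling .upper() on None, which is where the AttributeError
        # above comes from.
        rule_proto = (rule.get('ip_protocol') or '').upper()
        return (rule_proto == (ip_protocol or '').upper()
                and rule.get('from_port') == from_port
                and rule.get('to_port') == to_port)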

** Affects: nova
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1375564

Title:
  unable to delete correct security rules

Status in OpenStack Compute (Nova):
  New


[Yahoo-eng-team] [Bug 1375531] [NEW] swap_volume does not save changes made in _connect_volume

2014-09-29 Thread Thang Pham
Public bug reported:

This is a bug reported by Nikola in a review -
https://review.openstack.org/#/c/121965/8/nova/virt/libvirt/driver.py.

swap_volume does not save changes for connection_info.  In swap_volume,
a call is made to _connect_volume, which modifies connection_info.  For
example, in the LibvirtISCSIVolumeDriver,
connection_info['data']['host_device'] = host_device.  However, this
update is not saved to the DB, such that any updates made within
swap_volume are lost.  We need to save any changes made in
_connect_volume to the DB, as it is done elsewhere in the code, e.g.
_get_guest_storage_config
https://github.com/openstack/nova/blob/master/nova/virt/libvirt/driver.py#L3488.
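
A rough sketch of the direction described, assuming the volume's block device
mapping (and a context) are reachable from the swap path; the names and call
sites are illustrative, not the actual patch:

    # After _connect_volume mutates new_connection_info (for example
    # data['host_device'] in LibvirtISCSIVolumeDriver), write it back so
    # the update survives, the way _get_guest_storage_config already does:
    conf = self._connect_volume(new_connection_info, disk_info)

    bdm = objects.BlockDeviceMapping.get_by_volume_id(
        context, volume_id, instance_uuid=instance.uuid)
    bdm.connection_info = jsonutils.dumps(new_connection_info)
    bdm.save(context)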

** Affects: nova
 Importance: High
 Assignee: Thang Pham (thang-pham)
 Status: New


** Tags: volumes

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1375531

Title:
  swap_volume does not save changes made in _connect_volume

Status in OpenStack Compute (Nova):
  New

Bug description:
  This is a bug reported by Nikola in a review -
  https://review.openstack.org/#/c/121965/8/nova/virt/libvirt/driver.py.

  swap_volume does not save changes for connection_info.  In
  swap_volume, a call is made to _connect_volume, which modifies
  connection_info.  For example, in the LibvirtISCSIVolumeDriver,
  connection_info['data']['host_device'] = host_device.  However, this
  update is not saved to the DB, such that any updates made within
  swap_volume are lost.  We need to save any changes made in
  _connect_volume to the DB, as it is done elsewhere in the code, e.g.
  _get_guest_storage_config
  
https://github.com/openstack/nova/blob/master/nova/virt/libvirt/driver.py#L3488.

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1375531/+subscriptions



[Yahoo-eng-team] [Bug 1375519] [NEW] Cisco N1kv: Enable quota support in stable/icehouse

2014-09-29 Thread Abhishek Raut
Public bug reported:

With the quotas table being populated in stable/icehouse, the N1kv
plugin should be able to support quotas. Otherwise VMs end up in error
state.

** Affects: neutron
 Importance: Undecided
 Assignee: Abhishek Raut (abhraut)
 Status: New


** Tags: cisco

** Changed in: neutron
 Assignee: (unassigned) => Abhishek Raut (abhraut)

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1375519

Title:
  Cisco N1kv: Enable quota support in stable/icehouse

Status in OpenStack Neutron (virtual network service):
  New

Bug description:
  With the quotas table being populated in stable/icehouse, the N1kv
  plugin should be able to support quotas. Otherwise VMs end up in error
  state.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1375519/+subscriptions



[Yahoo-eng-team] [Bug 1375480] [NEW] xenapi: migrate does not work with volume backed instance

2014-09-29 Thread Andrew Laski
Public bug reported:

If the migrate action is requested on a volume backed instance using the
xenapi driver the action will fail when it tries to snapshot the root
disk.  The migrate process assumes an image backed instance currently.

** Affects: nova
 Importance: Medium
 Assignee: Andrew Laski (alaski)
 Status: In Progress

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1375480

Title:
  xenapi: migrate does not work with volume backed instance

Status in OpenStack Compute (Nova):
  In Progress

Bug description:
  If the migrate action is requested on a volume backed instance using
  the xenapi driver the action will fail when it tries to snapshot the
  root disk.  The migrate process assumes an image backed instance
  currently.

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1375480/+subscriptions



[Yahoo-eng-team] [Bug 1375478] [NEW] image metadata not copied when bdm v2 source=snapshot used

2014-09-29 Thread Andrew Laski
Public bug reported:

If an instance is booted using the block device mapping v2 API and
source=snapshot is used, no image metadata will be copied into the
instance system_metadata, which can cause issues later in the boot
process, since properties like os_type that may be used by a virt
driver are missing.
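
For reference, a boot request of the affected form looks roughly like this
(flavor, IDs and names are placeholders):

$ nova boot --flavor m1.small \
    --block-device source=snapshot,id=<snapshot-uuid>,dest=volume,size=10,bootindex=0 \
    bfv-from-snapshot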

** Affects: nova
 Importance: Medium
 Assignee: Andrew Laski (alaski)
 Status: In Progress

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1375478

Title:
  image metadata not copied when bdm v2 source=snapshot used

Status in OpenStack Compute (Nova):
  In Progress

Bug description:
  If an instance is booted using the block device mapping v2 API and
  source=snapshot is used, no image metadata will be copied into the
  instance system_metadata which can cause issues further in the boot
  process.  Since properties like os_type are missed which may be used
  by a virt driver.

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1375478/+subscriptions



[Yahoo-eng-team] [Bug 1375467] [NEW] db deadlock on _instance_update()

2014-09-29 Thread Mike Bayer
Public bug reported:

continuing from the same pattern as that of
https://bugs.launchpad.net/nova/+bug/1370191, we are also observing
unhandled deadlocks on derivatives of _instance_update(), such as the
stacktrace below.  As _instance_update() is a point of transaction
demarcation based on its use of get_session(), the @_retry_on_deadlock
should be added to this method.

Traceback (most recent call last):
File "/usr/lib/python2.7/site-packages/oslo/messaging/rpc/dispatcher.py", line 
133, in _dispatch_and_reply\
incoming.message))\
File "/usr/lib/python2.7/site-packages/oslo/messaging/rpc/dispatcher.py", line 
176, in _dispatch\
return self._do_dispatch(endpoint, method, ctxt, args)\
File "/usr/lib/python2.7/site-packages/oslo/messaging/rpc/dispatcher.py", line 
122, in _do_dispatch\
result = getattr(endpoint, method)(ctxt, **new_args)\
File "/usr/lib/python2.7/site-packages/nova/conductor/manager.py", line 887, in 
instance_update\
service)\
File "/usr/lib/python2.7/site-packages/oslo/messaging/rpc/server.py", line 139, 
in inner\
return func(*args, **kwargs)\
File "/usr/lib/python2.7/site-packages/nova/conductor/manager.py", line 130, in 
instance_update\
context, instance_uuid, updates)\
File "/usr/lib/python2.7/site-packages/nova/db/api.py", line 742, in 
instance_update_and_get_original\
 columns_to_join=columns_to_join)\
File "/usr/lib/python2.7/site-packages/nova/db/sqlalchemy/api.py", line 164, in 
wrapper\
return f(*args, **kwargs)\
File "/usr/lib/python2.7/site-packages/nova/db/sqlalchemy/api.py", line 2208, 
in instance_update_and_get_original\
 columns_to_join=columns_to_join)\
File "/usr/lib/python2.7/site-packages/nova/db/sqlalchemy/api.py", line 2299, 
in _instance_update\
session.add(instance_ref)\
File "/usr/lib64/python2.7/site-packages/sqlalchemy/orm/session.py", line 447, 
in __exit__\
self.rollback()\
File "/usr/lib64/python2.7/site-packages/sqlalchemy/util/langhelpers.py", line 
58, in __exit__\
compat.reraise(exc_type, exc_value, exc_tb)\
File "/usr/lib64/python2.7/site-packages/sqlalchemy/orm/session.py", line 444, 
in __exit__\
self.commit()\
File 
"/usr/lib/python2.7/site-packages/nova/openstack/common/db/sqlalchemy/sessi 
on.py", line 443, in _wrap\
_raise_if_deadlock_error(e, self.bind.dialect.name)\
File 
"/usr/lib/python2.7/site-packages/nova/openstack/common/db/sqlalchemy/sessi 
on.py", line 427, in _raise_if_deadlock_error\
raise exception.DBDeadlock(operational_error)\
DBDeadlock: (OperationalError) (1213, \'Deadlock found when trying to get lock; 
try restarting transaction\') None None\
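
A minimal sketch of a retry-on-deadlock decorator of the kind referenced above
(the real one lives in nova/db/sqlalchemy/api.py; this standalone version is
only illustrative):

    import functools
    import time

    # import path taken from the traceback above (Icehouse-era oslo copy)
    from nova.openstack.common.db import exception as db_exc


    def retry_on_deadlock(f):
        """Retry a DB API method whenever the backend reports a deadlock."""
        @functools.wraps(f)
        def wrapped(*args, **kwargs):
            while True:
                try:
                    return f(*args, **kwargs)
                except db_exc.DBDeadlock:
                    # Back off briefly, then restart the whole transaction,
                    # which is what MySQL asks for on error 1213.
                    time.sleep(0.5)
        return wrapped

Applying it would then be a one-line change: decorating _instance_update with
@_retry_on_deadlock, as suggested above.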

** Affects: nova
 Importance: Undecided
 Assignee: Mike Bayer (zzzeek)
 Status: In Progress

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1375467

Title:
  db deadlock on _instance_update()

Status in OpenStack Compute (Nova):
  In Progress

Bug description:
  continuing from the same pattern as that of
  https://bugs.launchpad.net/nova/+bug/1370191, we are also observing
  unhandled deadlocks on derivatives of _instance_update(), such as the
  stacktrace below.  As _instance_update() is a point of transaction
  demarcation based on its use of get_session(), the @_retry_on_deadlock
  should be added to this method.

  Traceback (most recent call last):
  File "/usr/lib/python2.7/site-packages/oslo/messaging/rpc/dispatcher.py", 
line 133, in _dispatch_and_reply\
  incoming.message))\
  File "/usr/lib/python2.7/site-packages/oslo/messaging/rpc/dispatcher.py", 
line 176, in _dispatch\
  return self._do_dispatch(endpoint, method, ctxt, args)\
  File "/usr/lib/python2.7/site-packages/oslo/messaging/rpc/dispatcher.py", 
line 122, in _do_dispatch\
  result = getattr(endpoint, method)(ctxt, **new_args)\
  File "/usr/lib/python2.7/site-packages/nova/conductor/manager.py", line 887, 
in instance_update\
  service)\
  File "/usr/lib/python2.7/site-packages/oslo/messaging/rpc/server.py", line 
139, in inner\
  return func(*args, **kwargs)\
  File "/usr/lib/python2.7/site-packages/nova/conductor/manager.py", line 130, 
in instance_update\
  context, instance_uuid, updates)\
  File "/usr/lib/python2.7/site-packages/nova/db/api.py", line 742, in 
instance_update_and_get_original\
   columns_to_join=columns_to_join)\
  File "/usr/lib/python2.7/site-packages/nova/db/sqlalchemy/api.py", line 164, 
in wrapper\
  return f(*args, **kwargs)\
  File "/usr/lib/python2.7/site-packages/nova/db/sqlalchemy/api.py", line 2208, 
in instance_update_and_get_original\
   columns_to_join=columns_to_join)\
  File "/usr/lib/python2.7/site-packages/nova/db/sqlalchemy/api.py", line 2299, 
in _instance_update\
  session.add(instance_ref)\
  File "/usr/lib64/python2.7/site-packages/sqlalchemy/orm/session.py", line 
447, in __exit__\
  self.rollback()\
  File "/usr/lib64/python2.7/site-packages/sqlal

[Yahoo-eng-team] [Bug 1374398] Re: Non admin user can update router port

2014-09-29 Thread Salvatore Orlando
I am tempted to mark it as invalid rather than won't fix - however it
would be a possible bug if the router interface was added by an admin
user, in which case I think the router port belongs to the admin rather
than the tenant.

Even in that case however, we'll have to discuss whether it's ok for an
admin to create the router port on behalf of the tenant and assign it to
the tenant itself.

The behaviour reported in this bug report depicts a tenant which messes up its
own network configuration.
If a deployer wants to prevent scenarios like this, they should be able to add
a policy that disallows non-admin updates to ports for which
device_owner=network:router_interface.


** Changed in: neutron
   Status: Won't Fix => Invalid

** Changed in: neutron
   Status: Invalid => Incomplete

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1374398

Title:
  Non admin user can update router port

Status in OpenStack Neutron (virtual network service):
  Incomplete

Bug description:
  A non-admin user can update a router's port:
  http://paste.openstack.org/show/115575/.
  This can cause problems, as servers won't get information about this change
  until the next DHCP request, so connectivity to and from this network will be
  lost.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1374398/+subscriptions



[Yahoo-eng-team] [Bug 1267310] Re: port-list should not list the dhcp ports for normal user

2014-09-29 Thread Salvatore Orlando
The DHCP port belongs to the tenant, which is therefore entitled to see
it.

Deployers wishing to prevent that MIGHT configure policies to remove network
ports from responses.
This is possible in theory, even if I would strongly advise against it, as this
kind of setting ends up making OpenStack applications not portable across
deployments.

** Changed in: neutron
   Status: In Progress => Won't Fix

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1267310

Title:
  port-list should not list the dhcp ports for normal user

Status in OpenStack Neutron (virtual network service):
  Won't Fix

Bug description:
  With a non-admin user, I can list the dhcp ports, and if I try to
  update the fixed IPs of these dhcp ports, the change is not reflected in
  the dhcp agent at all (i.e. the NIC device's IP in the dhcp namespace).

  So I think we should not allow a normal user to view the dhcp ports in the
  first place.
  [root@controller ~]# neutron port-list
  
+--+--+---+--+
  | id   | name | mac_address   | fixed_ips 
   |
  
+--+--+---+--+
  | 1a5a2236-9b66-4b6d-953d-664fad6be3bb |  | fa:16:3e:cf:52:b3 | 
{"subnet_id": "e38cf289-3b4b-4684-90e0-d44d2ee1cb90", "ip_address": "10.0.1.3"} 
 |
  | 381e244e-4012-4a49-83d3-f252fa4e41a1 |  | fa:16:3e:cf:94:bd | 
{"subnet_id": "e38cf289-3b4b-4684-90e0-d44d2ee1cb90", "ip_address": "10.0.1.7"} 
 |
  | 3bba05d3-10ec-49f1-9335-1103f791584b |  | fa:16:3e:fe:aa:6f | 
{"subnet_id": "e38cf289-3b4b-4684-90e0-d44d2ee1cb90", "ip_address": "10.0.1.6"} 
 |
  | 939d5696-0780-40c6-a626-a9a9df933553 |  | fa:16:3e:c7:5b:73 | 
{"subnet_id": "e38cf289-3b4b-4684-90e0-d44d2ee1cb90", "ip_address": "10.0.1.4"} 
 |
  | ad89d303-9e8c-43bb-a029-b341340a92bb |  | fa:16:3e:21:6d:98 | 
{"subnet_id": "c8e59b09-60d3-4996-8692-02334ee0e658", "ip_address": 
"192.168.230.3"} |
  | cb350109-39d3-444c-bc33-538c22415171 |  | fa:16:3e:f4:d3:e8 | 
{"subnet_id": "e38cf289-3b4b-4684-90e0-d44d2ee1cb90", "ip_address": "10.0.1.5"} 
 |
  | d1e79c7c-d500-475f-8e21-2c1958f0a136 |  | fa:16:3e:2d:c7:a1 | 
{"subnet_id": "e38cf289-3b4b-4684-90e0-d44d2ee1cb90", "ip_address": "10.0.1.1"} 
 |
  | ddc076f6-16aa-4f12-9745-2ac27dd5a38a |  | fa:16:3e:e0:04:44 | 
{"subnet_id": "e38cf289-3b4b-4684-90e0-d44d2ee1cb90", "ip_address": "10.0.1.8"} 
 |
  | f2a4df5c-e719-46cc-9bdb-bf9771a2c205 |  | fa:16:3e:01:73:5e | 
{"subnet_id": "e38cf289-3b4b-4684-90e0-d44d2ee1cb90", "ip_address": "10.0.1.2"} 
 |
  
+--+--+---+--+
  [root@controller ~]# neutron port-show 1a5a2236-9b66-4b6d-953d-664fad6be3bb
  
+---+-+
  | Field | Value   
|
  
+---+-+
  | admin_state_up| True
|
  | allowed_address_pairs | 
|
  | device_id | 
dhcpd3377d3c-a0d1-5d71-9947-f17125c357bb-20f45603-b76a-4a89-9674-0127e39fc895   
|
  | device_owner  | network:dhcp
|
  | extra_dhcp_opts   | 
|
  | fixed_ips | {"subnet_id": 
"e38cf289-3b4b-4684-90e0-d44d2ee1cb90", "ip_address": "10.0.1.3"} |
  | id| 1a5a2236-9b66-4b6d-953d-664fad6be3bb
|
  | mac_address   | fa:16:3e:cf:52:b3   
|
  | name  | 
|
  | network_id| 20f45603-b76a-4a89-9674-0127e39fc895
|
  | security_groups   | 
|
  | status| ACTIVE  
|
  | tenant_id | c8a625a4c71b401681e25e3ad294b255
|
  
+---

[Yahoo-eng-team] [Bug 1323715] Re: network tests fail on policy check after upgrade from icehouse to master (juno)

2014-09-29 Thread Adam Gandelman
** Also affects: neutron
   Importance: Undecided
   Status: New

** Also affects: neutron/icehouse
   Importance: Undecided
   Status: New

** Changed in: neutron
   Status: New => Fix Committed

** Changed in: neutron/icehouse
   Status: New => Fix Committed

** Changed in: neutron/icehouse
Milestone: None => 2014.1.3

** Changed in: neutron/icehouse
 Assignee: (unassigned) => Attila Fazekas (afazekas)

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1323715

Title:
  network tests fail on policy check after upgrade from icehouse to
  master (juno)

Status in Grenade - OpenStack upgrade testing:
  Invalid
Status in OpenStack Neutron (virtual network service):
  Fix Committed
Status in neutron icehouse series:
  Fix Committed

Bug description:
  Lots of tempest tests fail after upgrade

  http://logs.openstack.org/51/94351/3/check/check-grenade-dsvm-
  neutron/ac837a8/logs/testr_results.html.gz

  2014-05-26 21:47:20.109 364 INFO neutron.wsgi [req-
  7c96bf86-6845-4143-92d0-2bb32f5767d7 None] (364) accepted
  ('127.0.0.1', 60250)

  2014-05-26 21:47:20.110 364 DEBUG keystoneclient.middleware.auth_token [-] 
Authenticating user token __call__ 
/opt/stack/new/python-keystoneclient/keystoneclient/middleware/auth_token.py:619
  2014-05-26 21:47:20.110 364 DEBUG keystoneclient.middleware.auth_token [-] 
Removing headers from request environment: 
X-Identity-Status,X-Domain-Id,X-Domain-Name,X-Project-Id,X-Project-Name,X-Project-Domain-Id,X-Project-Domain-Name,X-User-Id,X-User-Name,X-User-Domain-Id,X-User-Domain-Name,X-Roles,X-Service-Catalog,X-User,X-Tenant-Id,X-Tenant-Name,X-Tenant,X-Role
 _remove_auth_headers 
/opt/stack/new/python-keystoneclient/keystoneclient/middleware/auth_token.py:678
  2014-05-26 21:47:20.110 364 DEBUG keystoneclient.middleware.auth_token [-] 
Returning cached token _cache_get 
/opt/stack/new/python-keystoneclient/keystoneclient/middleware/auth_token.py:1041
  2014-05-26 21:47:20.111 364 DEBUG keystoneclient.middleware.auth_token [-] 
Storing token in cache _cache_put 
/opt/stack/new/python-keystoneclient/keystoneclient/middleware/auth_token.py:1151
  2014-05-26 21:47:20.111 364 DEBUG keystoneclient.middleware.auth_token [-] 
Received request from user: 47d465f7c2e44c048f63066dff93093c with project_id : 
d3e7af8cf42d4613beb315dc19444d40 and roles: _member_  _build_user_headers 
/opt/stack/new/python-keystoneclient/keystoneclient/middleware/auth_token.py:940
  2014-05-26 21:47:20.112 364 DEBUG routes.middleware [-] No route matched for 
GET /ports.json __call__ 
/usr/lib/python2.7/dist-packages/routes/middleware.py:97
  2014-05-26 21:47:20.112 364 DEBUG routes.middleware [-] Matched GET 
/ports.json __call__ /usr/lib/python2.7/dist-packages/routes/middleware.py:100
  2014-05-26 21:47:20.112 364 DEBUG routes.middleware [-] Route path: 
'/ports{.format}', defaults: {'action': u'index', 'controller': >} __call__ 
/usr/lib/python2.7/dist-packages/routes/middleware.py:102
  2014-05-26 21:47:20.112 364 DEBUG routes.middleware [-] Match dict: 
{'action': u'index', 'controller': >, 'format': u'json'} __call__ 
/usr/lib/python2.7/dist-packages/routes/middleware.py:103
  2014-05-26 21:47:20.122 364 DEBUG neutron.policy 
[req-ee5c1651-9d0c-43c9-974d-a0c888c08468 None] Unable to find ':' as separator 
in tenant_id. __call__ /opt/stack/new/neutron/neutron/policy.py:243
  2014-05-26 21:47:20.123 364 ERROR neutron.policy 
[req-ee5c1651-9d0c-43c9-974d-a0c888c08468 None] Unable to verify 
match:%(tenant_id)s as the parent resource: tenant was not found
  2014-05-26 21:47:20.123 364 TRACE neutron.policy Traceback (most recent call 
last):
  2014-05-26 21:47:20.123 364 TRACE neutron.policy   File 
"/opt/stack/new/neutron/neutron/policy.py", line 239, in __call__
  2014-05-26 21:47:20.123 364 TRACE neutron.policy parent_res, parent_field 
= do_split(separator)
  2014-05-26 21:47:20.123 364 TRACE neutron.policy   File 
"/opt/stack/new/neutron/neutron/policy.py", line 234, in do_split
  2014-05-26 21:47:20.123 364 TRACE neutron.policy separator, 1)
  2014-05-26 21:47:20.123 364 TRACE neutron.policy ValueError: need more than 1 
value to unpack
  2014-05-26 21:47:20.123 364 TRACE neutron.policy 
  2014-05-26 21:47:20.123 364 ERROR neutron.api.v2.resource 
[req-ee5c1651-9d0c-43c9-974d-a0c888c08468 None] index failed
  2014-05-26 21:47:20.123 364 TRACE neutron.api.v2.resource Traceback (most 
recent call last):
  2014-05-26 21:47:20.123 364 TRACE neutron.api.v2.resource   File 
"/opt/stack/new/neutron/neutron/api/v2/resource.py", line 87, in resource
  2014-05-26 21:47:20.123 364 TRACE neutron.api.v2.resource result = 
method(request=request, **args)
  2014-05-26 21:47:20.123 364 TRACE neutron.api.v2.resource   File 
"/opt/stack/new/neutron/neutron/api/v2/base.py", line 309, in index
  2014-05-26 21:47:20.123 364 TRACE neutron.api.v2.resource return 
self._items(requ

[Yahoo-eng-team] [Bug 1330955] Re: Lock wait timeout exceeded while updating status for floatingips

2014-09-29 Thread Adam Gandelman
** Changed in: neutron/icehouse
   Status: Fix Released => Fix Committed

** Changed in: neutron/icehouse
Milestone: 2014.1.2 => 2014.1.3

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1330955

Title:
  Lock wait timeout exceeded while updating status for floatingips

Status in OpenStack Neutron (virtual network service):
  Fix Released
Status in neutron icehouse series:
  Fix Committed

Bug description:
  Lock timeout occurred when updating floating IP.

  2014-06-15 12:50:41.052 15781 TRACE neutron.openstack.common.rpc.amqp
  OperationalError: (OperationalError) (1205, 'Lock wait timeout
  exceeded; try restarting transaction') 'UPDATE floatingips SET
  status=%s WHERE floatingips.id = %s' ('ACTIVE', 'a030bb1e-31f0-42d7
  -84fc-520856f0ee66')

  This is probably introduced in Icehouse with:
  https://review.openstack.org/#/c/66866/

  More info at Red Hat bugzilla:
  https://bugzilla.redhat.com/show_bug.cgi?id=1109577

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1330955/+subscriptions



[Yahoo-eng-team] [Bug 1299331] Re: There isn't effect when attach/detach interface for paused instance

2014-09-29 Thread Adam Gandelman
** Changed in: nova/icehouse
   Status: Fix Released => Fix Committed

** Changed in: nova/icehouse
Milestone: 2014.1.2 => 2014.1.3

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1299331

Title:
  There isn't effect when attach/detach interface for paused instance

Status in OpenStack Compute (Nova):
  Fix Released
Status in OpenStack Compute (nova) icehouse series:
  Fix Committed

Bug description:
  $ nova boot --flavor 1 --image 76ae1239-0973-44cf-9051-0e1bc8f41cdd
  --nic net-id=a15cfbed-86d8-4660-9593-46447cb9464e vm1

  $ nova list
  
+--+--+++-+---+
  | ID   | Name | Status | Task State | Power 
State | Networks  |
  
+--+--+++-+---+
  | f7e2877d-c7f5-4493-89d4-c68e9839a7ff | vm1  | ACTIVE | -  | Running 
| private=10.0.0.22 |
  
+--+--+++-+---+

  $ brctl show
  bridge name   bridge id   STP enabled interfaces
  br-eth0   .fe989d8bd148   no  
  br-ex .8a1d06d8854e   no  
  br-ex2.4a98bdebe544   no  
  br-int.229ad5053a41   no  
  br-tun.2e58a2f0e047   no  
  docker0   8000.   no  
  lxcbr08000.   no  
  qbr0ad6a86e-d98000.9e5491dd719a   no  
qvb0ad6a86e-d9
tap0ad6a86e-d9

  
  $ neutron port-list
  
+--+--+---++
  | id   | name | mac_address   | fixed_ips 
 |
  
+--+--+---++
  | 0ad6a86e-d967-424e-9bf5-e6821cc0cd0d |  | fa:16:3e:3a:3e:5a | 
{"subnet_id": "94575a05-796f-4ff5-b892-3c3b8231b303", "ip_address": 
"10.0.0.22"}   |
  | 1e6bed8d-aece-4d3e-abcc-3ad7957d6d72 |  | fa:16:3e:9e:dc:83 | 
{"subnet_id": "e5dbc790-c26f-45b7-b2c7-574f12ad8b41", "ip_address": 
"172.24.4.12"} |
  | 5f522a9a-2856-4a95-8bd8-c354c00abf0f |  | fa:16:3e:01:47:43 | 
{"subnet_id": "94575a05-796f-4ff5-b892-3c3b8231b303", "ip_address": "10.0.0.1"} 
   |
  | 6226f6d3-3814-469c-bf50-8c99dfec481e |  | fa:16:3e:46:0e:35 | 
{"subnet_id": "94575a05-796f-4ff5-b892-3c3b8231b303", "ip_address": "10.0.0.2"} 
   |
  | a3f2ab1c-a634-446d-8885-d7d8e5978fa1 |  | fa:16:3e:cf:02:d6 | 
{"subnet_id": "94575a05-796f-4ff5-b892-3c3b8231b303", "ip_address": 
"10.0.0.20"}   |
  | c10390a9-6f84-44f5-8a17-91cb330a9e12 |  | fa:16:3e:41:7c:34 | 
{"subnet_id": "e5dbc790-c26f-45b7-b2c7-574f12ad8b41", "ip_address": 
"172.24.4.15"} |
  | c814425c-be1a-4c06-a54b-1788c7c6fb31 |  | fa:16:3e:f5:fc:d3 | 
{"subnet_id": "e5dbc790-c26f-45b7-b2c7-574f12ad8b41", "ip_address": 
"172.24.4.2"}  |
  | ebd874b7-43e6-4d18-b0ed-f86bb349d8b9 |  | fa:16:3e:e6:b5:09 | 
{"subnet_id": "e5dbc790-c26f-45b7-b2c7-574f12ad8b41", "ip_address": 
"172.24.4.19"} |
  
+--+--+---++

  
  $ nova pause vm1

  $ nova interface-detach vm1 0ad6a86e-d967-424e-9bf5-e6821cc0cd0d

  $ nova list
  
+--+--+++-+--+
  | ID   | Name | Status | Task State | Power 
State | Networks |
  
+--+--+++-+--+
  | f7e2877d-c7f5-4493-89d4-c68e9839a7ff | vm1  | PAUSED | -  | Paused  
|  |
  
+--+--+++-+--+

  $ brctl show
  bridge name   bridge id   STP enabled interfaces
  br-eth0   .fe989d8bd148   no  
  br-ex .8a1d06d8854e   no  
  br-ex2.4a98bdebe544   no  
  br-int.229ad5053a41   no  
  br-tun.2e58a2f0e047   no  
  docker0   8000.   no  
  lxcbr08000.   no  

  
  But tap still alive

  $ ifconfig|grep tap0ad6a86e-d9
  ta

[Yahoo-eng-team] [Bug 1220256] Re: Hyper-V driver needs tests for WMI WQL instructions

2014-09-29 Thread Adam Gandelman
** Also affects: nova/icehouse
   Importance: Undecided
   Status: New

** Changed in: nova/icehouse
   Importance: Undecided => Medium

** Changed in: nova/icehouse
   Status: New => Fix Committed

** Changed in: nova/icehouse
Milestone: None => 2014.1.3

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1220256

Title:
  Hyper-V driver needs tests for WMI WQL instructions

Status in OpenStack Compute (Nova):
  In Progress
Status in OpenStack Compute (nova) icehouse series:
  Fix Committed

Bug description:
  The Hyper-V Nova driver uses mainly WMI to access the hypervisor and OS 
features. 
  Additional tests can be added in this area.

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1220256/+subscriptions



[Yahoo-eng-team] [Bug 1291364] Re: _destroy_evacuated_instances fails randomly with high number of instances

2014-09-29 Thread Adam Gandelman
** Also affects: nova/icehouse
   Importance: Undecided
   Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1291364

Title:
  _destroy_evacuated_instances fails randomly with high number of
  instances

Status in OpenStack Compute (Nova):
  Fix Released
Status in OpenStack Compute (nova) icehouse series:
  New

Bug description:
  In our production environment (2013.2.1), we're facing a random error
  thrown while starting nova-compute in Hyper-V nodes.

  The following exception is thrown while calling
  '_destroy_evacuated_instances':

  16:30:58.802 7248 ERROR nova.openstack.common.threadgroup [-] 'NoneType' 
object is not iterable
  2014-03-05 16:30:58.802 7248 TRACE nova.openstack.common.threadgroup 
Traceback (most recent call last):
  (...)
  2014-03-05 16:30:58.802 7248 TRACE nova.openstack.common.threadgroup   File 
"C:\Python27\lib\site-packages\nova\compute\manager.py", line 532, in 
_get_instances_on_driver
  2014-03-05 16:30:58.802 7248 TRACE nova.openstack.common.threadgroup 
name_map = dict((instance['name'], instance) for instance in instances)
  2014-03-05 16:30:58.802 7248 TRACE nova.openstack.common.threadgroup 
TypeError: 'NoneType' object is not iterable

  Full trace: http://paste.openstack.org/show/73243/

  Our first guess is that this problem is related to the number of
  instances in our deployment (~3000); they're all fetched in order to
  check evacuated instances (as Hyper-V does not implement
  "list_instance_uuids").

  In the case of KVM, this error is not happening as it's using a
  smarter method to get this list based on the UUID of the instances.

  Although this is being reported using Hyper-V, it's a problem that
  could occur in other drivers not implementing "list_instance_uuids"
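
  A minimal sketch of the kind of guard implied above, assuming the crash is
  the unguarded iteration over a lookup that returned None (illustrative, not
  the merged fix):

      def build_name_map(instances):
          # Hypothetical helper mirroring the failing line in
          # _get_instances_on_driver: tolerate a lookup that returned None
          # (e.g. for drivers without list_instance_uuids) instead of
          # raising "'NoneType' object is not iterable".
          return dict((instance['name'], instance)
                      for instance in instances or [])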

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1291364/+subscriptions



[Yahoo-eng-team] [Bug 1292102] Re: AttributeError: 'NoneType' object has no attribute 'obj' (driver.obj.release_segment(session, segment))

2014-09-29 Thread Adam Gandelman
** Also affects: neutron/icehouse
   Importance: Undecided
   Status: New

** Changed in: neutron/icehouse
   Status: New => Fix Committed

** Changed in: neutron/icehouse
Milestone: None => 2014.1.3

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1292102

Title:
  AttributeError: 'NoneType' object has no attribute 'obj'
  (driver.obj.release_segment(session, segment))

Status in OpenStack Neutron (virtual network service):
  Fix Released
Status in neutron icehouse series:
  Fix Committed
Status in “neutron” package in Ubuntu:
  Fix Released
Status in “neutron” source package in Trusty:
  Triaged
Status in “neutron” source package in Utopic:
  Fix Released

Bug description:
  When trying to delete a network, I hit a traceback.

  ubuntu@neutron01:~$ neutron port-list

  ubuntu@neutron01:~$ neutron net-list
  +--+-+-+
  | id   | name| subnets |
  +--+-+-+
  | 822d2b2e-481f-4838-9fe5-459be7b10193 | int_net | |
  | ac498310-833b-42f2-9009-049cac145c71 | ext_net | |
  +--+-+-+

  ubuntu@neutron01:~$ neutron --debug net-delete int_net
  Request Failed: internal server error while processing your request.
  Traceback (most recent call last):
File "/usr/lib/python2.7/dist-packages/neutronclient/shell.py", line 527, 
in run_subcommand
  return run_command(cmd, cmd_parser, sub_argv)
File "/usr/lib/python2.7/dist-packages/neutronclient/shell.py", line 80, in 
run_command
  return cmd.run(known_args)
File 
"/usr/lib/python2.7/dist-packages/neutronclient/neutron/v2_0/__init__.py", line 
510, in run
  obj_deleter(_id)
File "/usr/lib/python2.7/dist-packages/neutronclient/v2_0/client.py", line 
112, in with_params
  ret = self.function(instance, *args, **kwargs)
File "/usr/lib/python2.7/dist-packages/neutronclient/v2_0/client.py", line 
354, in delete_network
  return self.delete(self.network_path % (network))
File "/usr/lib/python2.7/dist-packages/neutronclient/v2_0/client.py", line 
1233, in delete
  headers=headers, params=params)
File "/usr/lib/python2.7/dist-packages/neutronclient/v2_0/client.py", line 
1222, in retry_request
  headers=headers, params=params)
File "/usr/lib/python2.7/dist-packages/neutronclient/v2_0/client.py", line 
1165, in do_request
  self._handle_fault_response(status_code, replybody)
File "/usr/lib/python2.7/dist-packages/neutronclient/v2_0/client.py", line 
1135, in _handle_fault_response
  exception_handler_v20(status_code, des_error_body)
File "/usr/lib/python2.7/dist-packages/neutronclient/v2_0/client.py", line 
85, in exception_handler_v20
  message=error_dict)
  NeutronClientException: Request Failed: internal server error while 
processing your request.
  ubuntu@neutron01:~$

  
  /var/log/neutron/server.log
  
  2014-03-13 12:30:09.930 16624 ERROR neutron.api.v2.resource 
[req-cc63906f-1e13-4d22-becf-86979d80399f None] delete failed
  2014-03-13 12:30:09.930 16624 TRACE neutron.api.v2.resource Traceback (most 
recent call last):
  2014-03-13 12:30:09.930 16624 TRACE neutron.api.v2.resource   File 
"/usr/lib/python2.7/dist-packages/neutron/api/v2/resource.py", line 87, in 
resource
  2014-03-13 12:30:09.930 16624 TRACE neutron.api.v2.resource result = 
method(request=request, **args)
  2014-03-13 12:30:09.930 16624 TRACE neutron.api.v2.resource   File 
"/usr/lib/python2.7/dist-packages/neutron/api/v2/base.py", line 438, in delete
  2014-03-13 12:30:09.930 16624 TRACE neutron.api.v2.resource 
obj_deleter(request.context, id, **kwargs)
  2014-03-13 12:30:09.930 16624 TRACE neutron.api.v2.resource   File 
"/usr/lib/python2.7/dist-packages/neutron/plugins/ml2/plugin.py", line 479, in 
delete_network
  2014-03-13 12:30:09.930 16624 TRACE neutron.api.v2.resource 
self.type_manager.release_segment(session, segment)
  2014-03-13 12:30:09.930 16624 TRACE neutron.api.v2.resource   File 
"/usr/lib/python2.7/dist-packages/neutron/plugins/ml2/managers.py", line 104, 
in release_segment
  2014-03-13 12:30:09.930 16624 TRACE neutron.api.v2.resource 
driver.obj.release_segment(session, segment)
  2014-03-13 12:30:09.930 16624 TRACE neutron.api.v2.resource AttributeError: 
'NoneType' object has no attribute 'obj'
  2014-03-13 12:30:09.930 16624 TRACE neutron.api.v2.resource
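
  A minimal sketch of the defensive check implied by the traceback, assuming
  ml2's release_segment should tolerate a network_type whose type driver is
  not loaded (illustrative, not the committed fix):

      def release_segment(self, session, segment):
          network_type = segment.get('network_type')
          driver = self.drivers.get(network_type)
          if driver is None:
              # e.g. the type driver was dropped from type_drivers after
              # networks of this type were created; log and skip instead of
              # crashing net-delete with an AttributeError.
              LOG.error("No type driver registered for network_type %s",
                        network_type)
              return
          driver.obj.release_segment(session, segment)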

  ProblemType: Bug
  DistroRelease: Ubuntu 14.04
  Package: neutron-plugin-ml2 1:2014.1~b3-0ubuntu1
  ProcVersionSignature: Ubuntu 3.13.0-16.36-generic 3.13.5
  Uname: Linux 3.13.0-16-generic x86_64
  ApportVersion: 2.13.3-0ubuntu1
  Architecture: amd64
  Date: Thu Mar 13 12:36:03 2014
  PackageArchitecture: all
  ProcEnviron:
   TERM=screen.linux
   PATH=(custom, no user)
   XDG_RUNTIME_DIR=
   LANG=en_US.UTF-8

[Yahoo-eng-team] [Bug 1288574] Re: backup operation should delete image if snapshot failed

2014-09-29 Thread Adam Gandelman
** Also affects: nova/icehouse
   Importance: Undecided
   Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1288574

Title:
  backup operation should delete image if snapshot failed

Status in OpenStack Compute (Nova):
  Fix Released
Status in OpenStack Compute (nova) icehouse series:
  New

Bug description:
  When we snapshot an instance, we use @delete_image_on_error to delete any
  failed snapshot image. However, the image is not removed in the backup code
  flow, which becomes an issue if too many backups fail: eventually all useful
  images are removed and only 'error' images are left on the host.
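
  A rough sketch of the direction this suggests, assuming the existing
  delete_image_on_error decorator can be applied to the backup entry point the
  same way it is to snapshot (the signature is from memory and the body is a
  placeholder, not the actual patch):

      @delete_image_on_error
      def backup_instance(self, context, image_id, instance, backup_type,
                          rotation):
          # If anything below raises, the decorator deletes the placeholder
          # image from glance, so failed backups no longer leave 'error'
          # images behind.
          self._do_backup(context, image_id, instance, backup_type, rotation)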

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1288574/+subscriptions



[Yahoo-eng-team] [Bug 1288466] Re: Get servers REST reply does not have marker when default limit is reached

2014-09-29 Thread Adam Gandelman
** Also affects: nova/icehouse
   Importance: Undecided
   Status: New

** Changed in: nova/icehouse
   Importance: Undecided => High

** Changed in: nova/icehouse
   Status: New => Fix Committed

** Changed in: nova/icehouse
Milestone: None => 2014.1.3

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1288466

Title:
  Get servers REST reply does not have marker when default limit is
  reached

Status in OpenStack Compute (Nova):
  Fix Released
Status in OpenStack Compute (nova) icehouse series:
  Fix Committed

Bug description:
  Both the /servers and /servers/details APIs support pagination. When
  the request includes the "limit" parameters, then a "next" link is
  included in the reply if the number of servers that match the query is
  greater than or equal to the limit.

  The problem occurs when the caller does not include the limit
  parameter but the total number of servers is greater than or equal to
  the default "CONF.osapi_max_limit". When this occurs, the number of
  servers in the reply is "osapi_max" but there is no "next" link.
  Therefore, the caller cannot determine if there are any more servers
  and has no marker value such that they can retrieve the rest of the
  servers.

  The fix for this is to include the "next" link when the total number
  of servers is greater than or equal to the default limit, even if the
  "limit" parameter is not supplied.

  The documentation also says that the "next" link is required:
  http://docs.openstack.org/api/openstack-compute/2/content
  /Paginated_Collections-d1e664.html

  The fix appears to be in the _get_collection_links function in 
nova/api/openstack/common.py. The logic needs to be updated so that the "next"
  link is included if the total number of items returned equals the minimum of 
either the "limit" parameter or the "CONF.osapi_max_limit" value.
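
  A minimal sketch of that condition, assuming the check should use the
  effective page size rather than only the explicit request parameter
  (illustrative, not the merged code):

      def should_emit_next_link(num_items, requested_limit, osapi_max_limit):
          # A "next" link is needed whenever the page came back full, whether
          # the page size was the caller's "limit" or the default
          # CONF.osapi_max_limit cap.
          effective_limit = min(requested_limit or osapi_max_limit,
                                osapi_max_limit)
          return num_items >= effective_limit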

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1288466/+subscriptions



[Yahoo-eng-team] [Bug 1291007] Re: device_path not available at detach time for boot from volume

2014-09-29 Thread Adam Gandelman
** Also affects: nova/icehouse
   Importance: Undecided
   Status: New

** Changed in: nova/icehouse
   Importance: Undecided => Medium

** Changed in: nova/icehouse
   Status: New => Fix Committed

** Changed in: nova/icehouse
Milestone: None => 2014.1.3

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1291007

Title:
  device_path not available at detach time for boot from volume

Status in OpenStack Compute (Nova):
  Fix Released
Status in OpenStack Compute (nova) icehouse series:
  Fix Committed

Bug description:
  When you do a normal volume attach to an existing VM and then detach it,
  connection_info['data']['device_path'] is present by the time the libvirt
  volume driver's disconnect_volume(self, connection_info, mount_device) is
  called.

  When you boot a VM from a volume, not an image, and then terminate the VM,
  the connection_info['data'] passed to the libvirt volume driver's
  disconnect_volume doesn't contain the 'device_path' key. The libvirt volume
  drivers need this information to correctly disconnect the LUN from the
  kernel.
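
  A rough sketch of the defensive handling a libvirt volume driver would need,
  assuming 'device_path' can simply be absent during boot-from-volume teardown
  (the helper names are hypothetical):

      def disconnect_volume(self, connection_info, mount_device):
          data = connection_info['data']
          # For boot-from-volume instances the resolved path was never stored,
          # so do not assume data['device_path'] exists; rediscover it instead.
          device_path = data.get('device_path')
          if device_path is None:
              device_path = self._find_lun_path(data)       # hypothetical
          self._remove_lun_from_kernel(device_path)         # hypothetical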

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1291007/+subscriptions



[Yahoo-eng-team] [Bug 1242366] Re: volume attach failed if attach again to an pause to active VM

2014-09-29 Thread Adam Gandelman
** Also affects: nova/icehouse
   Importance: Undecided
   Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1242366

Title:
  volume attach failed if attach again to an pause to active VM

Status in OpenStack Compute (Nova):
  Fix Released
Status in OpenStack Compute (nova) icehouse series:
  New

Bug description:
  Steps are as following:
  1) Create one VM
  2) Attach volume to the VM
  3) pause the VM
  4) detach the volume
  5) unpause the VM
  6) re-attach the volume to the same device; nova compute throws an exception

  2013-10-20 23:21:22.520 DEBUG amqp [-] Channel open from (pid=19728) _open_ok 
/usr/local/lib/python2.7/dist-packages/amqp-1.0.12-py2.7.egg/amqp/channel.py:420
  2013-10-20 23:21:22.520 ERROR nova.openstack.common.rpc.amqp 
[req-5f0d786e-1273-4611-b0a5-a787754c6bc8 admin admin] Exception during message 
handling
  2013-10-20 23:21:22.520 TRACE nova.openstack.common.rpc.amqp Traceback (most 
recent call last):
  2013-10-20 23:21:22.520 TRACE nova.openstack.common.rpc.amqp   File 
"/opt/stack/nova/nova/openstack/common/rpc/amqp.py", line 461, in _process_data
  2013-10-20 23:21:22.520 TRACE nova.openstack.common.rpc.amqp **args)
  2013-10-20 23:21:22.520 TRACE nova.openstack.common.rpc.amqp   File 
"/opt/stack/nova/nova/openstack/common/rpc/dispatcher.py", line 172, in dispatch
  2013-10-20 23:21:22.520 TRACE nova.openstack.common.rpc.amqp result = 
getattr(proxyobj, method)(ctxt, **kwargs)
  2013-10-20 23:21:22.520 TRACE nova.openstack.common.rpc.amqp   File 
"/opt/stack/nova/nova/exception.py", line 90, in wrapped
  2013-10-20 23:21:22.520 TRACE nova.openstack.common.rpc.amqp payload)
  2013-10-20 23:21:22.520 TRACE nova.openstack.common.rpc.amqp   File 
"/opt/stack/nova/nova/exception.py", line 73, in wrapped
  2013-10-20 23:21:22.520 TRACE nova.openstack.common.rpc.amqp return 
f(self, context, *args, **kw)
  2013-10-20 23:21:22.520 TRACE nova.openstack.common.rpc.amqp   File 
"/opt/stack/nova/nova/compute/manager.py", line 244, in decorated_function
  2013-10-20 23:21:22.520 TRACE nova.openstack.common.rpc.amqp pass
  2013-10-20 23:21:22.520 TRACE nova.openstack.common.rpc.amqp   File 
"/opt/stack/nova/nova/compute/manager.py", line 230, in decorated_function
  2013-10-20 23:21:22.520 TRACE nova.openstack.common.rpc.amqp return 
function(self, context, *args, **kwargs)
  2013-10-20 23:21:22.520 TRACE nova.openstack.common.rpc.amqp   File 
"/opt/stack/nova/nova/compute/manager.py", line 272, in decorated_function
  2013-10-20 23:21:22.520 TRACE nova.openstack.common.rpc.amqp e, 
sys.exc_info())
  2013-10-20 23:21:22.520 TRACE nova.openstack.common.rpc.amqp   File 
"/opt/stack/nova/nova/compute/manager.py", line 259, in decorated_function
  2013-10-20 23:21:22.520 TRACE nova.openstack.common.rpc.amqp return 
function(self, context, *args, **kwargs)
  2013-10-20 23:21:22.520 TRACE nova.openstack.common.rpc.amqp   File 
"/opt/stack/nova/nova/compute/manager.py", line 3649, in attach_volume
  2013-10-20 23:21:22.520 TRACE nova.openstack.common.rpc.amqp context, 
instance, mountpoint)
  2013-10-20 23:21:22.520 TRACE nova.openstack.common.rpc.amqp   File 
"/opt/stack/nova/nova/compute/manager.py", line 3644, in attach_volume
  2013-10-20 23:21:22.520 TRACE nova.openstack.common.rpc.amqp mountpoint, 
instance)
  2013-10-20 23:21:22.520 TRACE nova.openstack.common.rpc.amqp   File 
"/opt/stack/nova/nova/compute/manager.py", line 3690, in _attach_volume
  2013-10-20 23:21:22.520 TRACE nova.openstack.common.rpc.amqp connector)
  2013-10-20 23:21:22.520 TRACE nova.openstack.common.rpc.amqp   File 
"/opt/stack/nova/nova/compute/manager.py", line 3680, in _attach_volume
  2013-10-20 23:21:22.520 TRACE nova.openstack.common.rpc.amqp 
encryption=encryption)
  2013-10-20 23:21:22.520 TRACE nova.openstack.common.rpc.amqp   File 
"/opt/stack/nova/nova/virt/libvirt/driver.py", line 1107, in attach_volume
  2013-10-20 23:21:22.520 TRACE nova.openstack.common.rpc.amqp raise 
exception.DeviceIsBusy(device=disk_dev)
  2013-10-20 23:21:22.520 TRACE nova.openstack.common.rpc.amqp DeviceIsBusy: 
The supplied device (vdb) is busy.
  2013-10-20 23:21:22.520 TRACE nova.openstack.common.rpc.amqp 
  ^C2013-10-20 23:21:24.871 INFO nova.openstack.common.service [-] Caught 
SIGINT, exiting

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1242366/+subscriptions



[Yahoo-eng-team] [Bug 1290294] Re: Instance's XXX_resize dir never be deleted if we resize a pre-grizzly instance in havana

2014-09-29 Thread Adam Gandelman
** Also affects: nova/icehouse
   Importance: Undecided
   Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1290294

Title:
  Instance's XXX_resize dir never be deleted if we resize a pre-grizzly
  instance in havana

Status in OpenStack Compute (Nova):
  Fix Released
Status in OpenStack Compute (nova) icehouse series:
  New

Bug description:
  reproduce steps:
  1. create an instance under Folsom
  2. update nova to Havana
  3. resize the instance to another host
  4. confirm the resize
  5. examine the instance dir on source host

  you will find the instance-_resize dir exists there which was
  not deleted while confirming resize.

  The reason is that in _cleanup_resize in the libvirt driver:

  def _cleanup_resize(self, instance, network_info):
      target = libvirt_utils.get_instance_path(instance) + "_resize"

  we get the instance path using the get_instance_path method in libvirt utils,
  which checks for the pre-grizzly (name-based) instance dir before returning.
  If this instance is a resized one whose original instance dir now exists on
  another host (the dest host), that check fails and the wrong, uuid-based
  instance path is returned, so the existence check on `target` fails and the
  instance-_resize dir is never deleted.

  def get_instance_path(instance, forceold=False, relative=False):
      """Determine the correct path for instance storage.

      This method determines the directory name for instance storage, while
      handling the fact that we changed the naming style to something more
      unique in the grizzly release.

      :param instance: the instance we want a path for
      :param forceold: force the use of the pre-grizzly format
      :param relative: if True, just the relative path is returned

      :returns: a path to store information about that instance
      """
      pre_grizzly_name = os.path.join(CONF.instances_path, instance['name'])
      # here we check the original instance dir, but if we have resized the
      # instance to another host, this check will be failed, and a wrong dir
      # with instance uuid will be returned.
      if forceold or os.path.exists(pre_grizzly_name):
          if relative:
              return instance['name']
          return pre_grizzly_name

      if relative:
          return instance['uuid']
      return os.path.join(CONF.instances_path, instance['uuid'])
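
  A rough sketch of the direction this points to for _cleanup_resize, probing
  both naming styles instead of trusting the fallback above (illustrative, not
  the actual fix):

      def _cleanup_resize(self, instance, network_info):
          # The pre-grizzly (name-based) dir may be the one left behind on the
          # source host after a resize, so check both possible *_resize paths.
          for base in (libvirt_utils.get_instance_path(instance),
                       libvirt_utils.get_instance_path(instance,
                                                       forceold=True)):
              target = base + "_resize"
              if os.path.exists(target):
                  shutil.rmtree(target)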

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1290294/+subscriptions



[Yahoo-eng-team] [Bug 1275173] Re: _translate_from_glance() can cause an unnecessary HTTP request

2014-09-29 Thread Adam Gandelman
** Also affects: nova/icehouse
   Importance: Undecided
   Status: New

** Changed in: nova/icehouse
   Importance: Undecided => Low

** Changed in: nova/icehouse
   Status: New => Fix Committed

** Changed in: nova/icehouse
Milestone: None => 2014.1.3

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1275173

Title:
  _translate_from_glance() can cause an unnecessary HTTP request

Status in Cinder:
  Fix Released
Status in OpenStack Compute (Nova):
  Fix Released
Status in OpenStack Compute (nova) icehouse series:
  Fix Committed

Bug description:
  I noticed when performing a "nova image-show" on a current (not
  deleted) image, two HTTP requests were issued. Why isn't the Image
  retrieved on the first GET request?

  In fact, it is. The problem lies in _extract_attributes(), called by
  _translate_from_glance(). This function loops through a list of
  expected attributes, and extracts them from the passed-in Image. The
  problem is that if the attribute 'deleted' is False, there won't be a
  'deleted_at' attribute in the Image. Not finding the attribute results
  in getattr() making another GET request (to try to find the "missing"
  attribute?). This is unnecessary of course, since it makes sense for
  the Image to not have that attribute set.
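
  A minimal sketch of the guard being described, assuming the fix is simply
  not to ask the image object for 'deleted_at' when the image is not deleted
  (illustrative, not the merged patch):

      def _extract_attributes(image, expected_attrs):
          output = {}
          for attr in expected_attrs:
              if attr == 'deleted_at' and not getattr(image, 'deleted', False):
                  # An active image has no deleted_at; touching that attribute
                  # on the client object is what lazily issues the second GET.
                  output[attr] = None
              else:
                  output[attr] = getattr(image, attr)
          return output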

To manage notifications about this bug go to:
https://bugs.launchpad.net/cinder/+bug/1275173/+subscriptions



[Yahoo-eng-team] [Bug 1316373] Re: Can't force delete an errored instance with no info cache

2014-09-29 Thread Adam Gandelman
** Also affects: nova/icehouse
   Importance: Undecided
   Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1316373

Title:
  Can't force delete an errored instance with no info cache

Status in OpenStack Compute (Nova):
  Fix Released
Status in OpenStack Compute (nova) icehouse series:
  New

Bug description:
  Sometimes, when an instance has failed to launch for some reason, trying
  to delete it using nova delete or nova force-delete doesn't work and
  gives the following error:

  This is when using cells but I think it possibly isn't cells related.
  Deleting is expecting an info cache no matter what. Ideally force
  delete should ignore all errors and delete the instance.

  
  2014-05-06 10:48:58.368 21210 ERROR nova.cells.messaging 
[req-a74c59d3-dc58-4318-87e8-0da15ca2a78d d1fa8867e42444cf8724e65fef1da549 
094ae1e2c08f4eddb444a9d9db71ab40] Error processing message locally: Info cache 
for instance bb07522b-d705-4fc8-8045-e12de2affe2e could not be found.
  2014-05-06 10:48:58.368 21210 TRACE nova.cells.messaging Traceback (most 
recent call last):
  2014-05-06 10:48:58.368 21210 TRACE nova.cells.messaging   File 
"/opt/nova/nova/cells/messaging.py", line 200, in _process_locally
  2014-05-06 10:48:58.368 21210 TRACE nova.cells.messaging resp_value = 
self.msg_runner._process_message_locally(self)
  2014-05-06 10:48:58.368 21210 TRACE nova.cells.messaging   File 
"/opt/nova/nova/cells/messaging.py", line 1532, in _process_message_locally
  2014-05-06 10:48:58.368 21210 TRACE nova.cells.messaging return 
fn(message, **message.method_kwargs)
  2014-05-06 10:48:58.368 21210 TRACE nova.cells.messaging   File 
"/opt/nova/nova/cells/messaging.py", line 894, in terminate_instance
  2014-05-06 10:48:58.368 21210 TRACE nova.cells.messaging 
self._call_compute_api_with_obj(message.ctxt, instance, 'delete')
  2014-05-06 10:48:58.368 21210 TRACE nova.cells.messaging   File 
"/opt/nova/nova/cells/messaging.py", line 855, in _call_compute_api_with_obj
  2014-05-06 10:48:58.368 21210 TRACE nova.cells.messaging 
instance.refresh(ctxt)
  2014-05-06 10:48:58.368 21210 TRACE nova.cells.messaging   File 
"/opt/nova/nova/objects/base.py", line 151, in wrapper
  2014-05-06 10:48:58.368 21210 TRACE nova.cells.messaging return fn(self, 
ctxt, *args, **kwargs)
  2014-05-06 10:48:58.368 21210 TRACE nova.cells.messaging   File 
"/opt/nova/nova/objects/instance.py", line 500, in refresh
  2014-05-06 10:48:58.368 21210 TRACE nova.cells.messaging 
self.info_cache.refresh()
  2014-05-06 10:48:58.368 21210 TRACE nova.cells.messaging   File 
"/opt/nova/nova/objects/base.py", line 151, in wrapper
  2014-05-06 10:48:58.368 21210 TRACE nova.cells.messaging return fn(self, 
ctxt, *args, **kwargs)
  2014-05-06 10:48:58.368 21210 TRACE nova.cells.messaging   File 
"/opt/nova/nova/objects/instance_info_cache.py", line 103, in refresh
  2014-05-06 10:48:58.368 21210 TRACE nova.cells.messaging 
self.instance_uuid)
  2014-05-06 10:48:58.368 21210 TRACE nova.cells.messaging   File 
"/opt/nova/nova/objects/base.py", line 112, in wrapper
  2014-05-06 10:48:58.368 21210 TRACE nova.cells.messaging result = fn(cls, 
context, *args, **kwargs)
  2014-05-06 10:48:58.368 21210 TRACE nova.cells.messaging   File 
"/opt/nova/nova/objects/instance_info_cache.py", line 70, in 
get_by_instance_uuid
  2014-05-06 10:48:58.368 21210 TRACE nova.cells.messaging 
instance_uuid=instance_uuid)
  2014-05-06 10:48:58.368 21210 TRACE nova.cells.messaging 
InstanceInfoCacheNotFound: Info cache for instance 
bb07522b-d705-4fc8-8045-e12de2affe2e could not be found.
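
  A minimal sketch of the kind of guard that could make a force delete
  tolerant of a missing info cache. The exception class and the refresh
  call are taken from the traceback above; the surrounding helper is
  hypothetical, not existing Nova code:

    from nova import exception

    def _refresh_instance_tolerantly(context, instance):
        # Hypothetical helper: a force delete should proceed even if the
        # info cache row has already been removed.
        try:
            instance.refresh(context)
        except exception.InstanceInfoCacheNotFound:
            # The cache is gone (e.g. the instance never finished
            # building); carry on with stale data so the delete can
            # still happen.
            pass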

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1316373/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1304968] Re: Nova cpu full of instance_info_cache stack traces due to attempting to send events about deleted instances

2014-09-29 Thread Adam Gandelman
** Also affects: nova/icehouse
   Importance: Undecided
   Status: New

** Changed in: nova/icehouse
Milestone: None => 2014.1.3

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1304968

Title:
  Nova cpu full of instance_info_cache stack traces due to attempting to
  send events about deleted instances

Status in OpenStack Compute (Nova):
  Fix Released
Status in OpenStack Compute (nova) icehouse series:
  New

Bug description:
  The bulk of the stack traces in n-cpu occur because emit_event is
  triggered on a VM delete; by the time emit_event runs, the instance has
  already been deleted (this exception appears 183 times in this log,
  i.e. it happens on *every* compute terminate), so the instance lookup
  hits the exception raised here:

  @base.remotable_classmethod
  def get_by_instance_uuid(cls, context, instance_uuid):
      db_obj = db.instance_info_cache_get(context, instance_uuid)
      if not db_obj:
          raise exception.InstanceInfoCacheNotFound(
              instance_uuid=instance_uuid)
      return InstanceInfoCache._from_db_object(context, cls(), db_obj)
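
  One way to keep the event path quiet would be to treat the missing cache
  as a non-error there. A rough sketch, assuming a hypothetical wrapper
  (this is not the actual Nova fix, only an illustration of the idea):

    from nova import exception
    from nova.objects import instance_info_cache

    def _get_info_cache_or_none(context, instance_uuid):
        # Hypothetical wrapper: return None instead of raising when the
        # instance (and its cache row) has already been deleted, so the
        # lifecycle-event handler can simply skip it.
        try:
            return instance_info_cache.InstanceInfoCache.get_by_instance_uuid(
                context, instance_uuid)
        except exception.InstanceInfoCacheNotFound:
            return None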

  A log trace of this interaction looks like this:

  
  2014-04-08 11:14:25.475 DEBUG nova.openstack.common.lockutils 
[req-fe9db989-416e-4da0-986c-e68336e3c602 TenantUsagesTestJSON-153098759 
TenantUsagesTestJSON-953946497] Semaphore / lock released 
"do_terminate_instance" inner 
/opt/stack/new/nova/nova/openstack/common/lockutils.py:252
  2014-04-08 11:14:25.907 DEBUG nova.openstack.common.lockutils 
[req-687de0cf-67fc-434f-927f-4c37665ad5d8 FixedIPsTestJson-234831436 
FixedIPsTestJson-1960919997] Got semaphore 
"75da98d7-bbd5-42a2-ad6f-7a66e38977fa" lock 
/opt/stack/new/nova/nova/openstack/common/lockutils.py:168
  2014-04-08 11:14:25.907 DEBUG nova.openstack.common.lockutils 
[req-687de0cf-67fc-434f-927f-4c37665ad5d8 FixedIPsTestJson-234831436 
FixedIPsTestJson-1960919997] Got semaphore / lock "do_terminate_instance" inner 
/opt/stack/new/nova/nova/openstack/common/lockutils.py:248
  2014-04-08 11:14:25.907 DEBUG nova.openstack.common.lockutils 
[req-687de0cf-67fc-434f-927f-4c37665ad5d8 FixedIPsTestJson-234831436 
FixedIPsTestJson-1960919997] Got semaphore "" 
lock /opt/stack/new/nova/nova/openstack/common/lockutils.py:168
  2014-04-08 11:14:25.908 DEBUG nova.openstack.common.lockutils 
[req-687de0cf-67fc-434f-927f-4c37665ad5d8 FixedIPsTestJson-234831436 
FixedIPsTestJson-1960919997] Got semaphore / lock "_clear_events" inner 
/opt/stack/new/nova/nova/openstack/common/lockutils.py:248
  2014-04-08 11:14:25.908 DEBUG nova.openstack.common.lockutils 
[req-687de0cf-67fc-434f-927f-4c37665ad5d8 FixedIPsTestJson-234831436 
FixedIPsTestJson-1960919997] Semaphore / lock released "_clear_events" inner 
/opt/stack/new/nova/nova/openstack/common/lockutils.py:252
  2014-04-08 11:14:25.928 AUDIT nova.compute.manager 
[req-687de0cf-67fc-434f-927f-4c37665ad5d8 FixedIPsTestJson-234831436 
FixedIPsTestJson-1960919997] [instance: 75da98d7-bbd5-42a2-ad6f-7a66e38977fa] 
Terminating instance
  2014-04-08 11:14:25.989 DEBUG nova.objects.instance 
[req-687de0cf-67fc-434f-927f-4c37665ad5d8 FixedIPsTestJson-234831436 
FixedIPsTestJson-1960919997] Lazy-loading `system_metadata' on Instance uuid 
75da98d7-bbd5-42a2-ad6f-7a66e38977fa obj_load_attr 
/opt/stack/new/nova/nova/objects/instance.py:519
  2014-04-08 11:14:26.209 DEBUG nova.network.api 
[req-687de0cf-67fc-434f-927f-4c37665ad5d8 FixedIPsTestJson-234831436 
FixedIPsTestJson-1960919997] Updating cache with info: [VIF({'ovs_interfaceid': 
None, 'network': Network({'bridge': u'br100', 'subnets': [Subnet({'ips': 
[FixedIP({'meta': {}, 'version': 4, 'type': u'fixed', 'floating_ips': [], 
'address': u'10.1.0.2'})], 'version': 4, 'meta': {u'dhcp_server': u'10.1.0.1'}, 
'dns': [IP({'meta': {}, 'version': 4, 'type': u'dns', 'address': u'8.8.4.4'})], 
'routes': [], 'cidr': u'10.1.0.0/24', 'gateway': IP({'meta': {}, 'version': 4, 
'type': u'gateway', 'address': u'10.1.0.1'})}), Subnet({'ips': [], 'version': 
None, 'meta': {u'dhcp_server': None}, 'dns': [], 'routes': [], 'cidr': None, 
'gateway': IP({'meta': {}, 'version': None, 'type': u'gateway', 'address': 
None})})], 'meta': {u'tenant_id': None, u'should_create_bridge': True, 
u'bridge_interface': u'eth0'}, 'id': u'9751787e-f41c-4299-be13-941c901f6d18', 
'label': u'private'}), 'devname': N
 one, 'qbh_params': None, 'meta': {}, 'details': {}, 'address': 
u'fa:16:3e:d8:87:38', 'active': False, 'type': u'bridge', 'id': 
u'db1ac48d-805a-45d3-9bb9-786bb5855673', 'qbg_params': None})] 
update_instance_cache_with_nw_info /opt/stack/new/nova/nova/network/api.py:74
  2014-04-08 11:14:27.661 2894 DEBUG nova.virt.driver [-] Emitting event 
 emit_event 
/opt/stack/new/nova/nova/virt/driver.py:1207
  2014-04-08 11:14:27.661 2894 INFO nova.compute.manager [-] Lifecycle event 1 

[Yahoo-eng-team] [Bug 1304593] Re: VMware: waste of disk datastore when root disk size of instance is 0

2014-09-29 Thread Adam Gandelman
** Also affects: nova/icehouse
   Importance: Undecided
   Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1304593

Title:
  VMware: waste of disk datastore when root disk size of instance is 0

Status in OpenStack Compute (Nova):
  Fix Released
Status in OpenStack Compute (nova) icehouse series:
  New
Status in The OpenStack VMwareAPI subTeam:
  New

Bug description:
  When an instance has a root disk size of 0, an extra image is created
  on the datastore (uuid.0.vmdk, identical to uuid.vmdk). This happens
  only for linked-clone images and wastes space on the datastore; the
  original cached image could be used instead.
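
  A rough illustration of the check that could avoid the extra copy; the
  helper name and arguments are assumptions for the sketch, not the
  actual VMware driver API:

    def _get_root_vmdk_path(instance, cached_image_path, resized_image_path):
        # Hypothetical sketch: when the flavor's root disk size is 0 there
        # is nothing to extend, so the cached linked-clone base can be
        # reused directly instead of creating an identical uuid.0.vmdk copy.
        if instance.root_gb == 0:
            return cached_image_path
        return resized_image_path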

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1304593/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1321186] Re: nova can't show or delete queued image for AttributeError

2014-09-29 Thread Adam Gandelman
** Also affects: nova/icehouse
   Importance: Undecided
   Status: New

** Changed in: nova/icehouse
   Status: New => Fix Committed

** Changed in: nova/icehouse
Milestone: None => 2014.1.3

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1321186

Title:
  nova can't show or delete queued image for AttributeError

Status in OpenStack Compute (Nova):
  Fix Released
Status in OpenStack Compute (nova) icehouse series:
  Fix Committed

Bug description:
  steps to reproduce:
  1. run "glance image-create" to create a queued image
  2. run "nova image-delete "

  it returns:
  Delete for image b31aa5dd-f07a-4748-8f15-398346887584 failed: The server has 
either erred or is incapable of performing the requested operation. (HTTP 500)

  the traceback in log file is:

  Traceback (most recent call last):
File "/opt/stack/nova/nova/api/openstack/__init__.py", line 125, in __call__
  return req.get_response(self.application)
File "/usr/local/lib/python2.7/dist-packages/webob/request.py", line 1296, 
in send
  application, catch_exc_info=False)
File "/usr/local/lib/python2.7/dist-packages/webob/request.py", line 1260, 
in call_application
  app_iter = application(self.environ, start_response)
File "/usr/local/lib/python2.7/dist-packages/webob/dec.py", line 144, in 
__call__
  return resp(environ, start_response)
File 
"/opt/stack/python-keystoneclient/keystoneclient/middleware/auth_token.py", 
line 632, in __call__
  return self.app(env, start_response)
File "/usr/local/lib/python2.7/dist-packages/webob/dec.py", line 144, in 
__call__
  return resp(environ, start_response)
File "/usr/local/lib/python2.7/dist-packages/webob/dec.py", line 144, in 
__call__
  return resp(environ, start_response)
File "/usr/lib/python2.7/dist-packages/routes/middleware.py", line 131, in 
__call__
  response = self.app(environ, start_response)
File "/usr/local/lib/python2.7/dist-packages/webob/dec.py", line 144, in 
__call__
  return resp(environ, start_response)
File "/usr/local/lib/python2.7/dist-packages/webob/dec.py", line 130, in 
__call__
  resp = self.call_func(req, *args, **self.kwargs)
File "/usr/local/lib/python2.7/dist-packages/webob/dec.py", line 195, in 
call_func
  return self.func(req, *args, **kwargs)
File "/opt/stack/nova/nova/api/openstack/wsgi.py", line 917, in __call__
  content_type, body, accept)
File "/opt/stack/nova/nova/api/openstack/wsgi.py", line 983, in 
_process_stack
  action_result = self.dispatch(meth, request, action_args)
File "/opt/stack/nova/nova/api/openstack/wsgi.py", line 1067, in dispatch
  return method(req=request, **action_args)
File "/opt/stack/nova/nova/api/openstack/compute/images.py", line 139, in 
show
  image = self._image_service.show(context, id)
File "/opt/stack/nova/nova/image/glance.py", line 277, in show
  base_image_meta = _translate_from_glance(image)
File "/opt/stack/nova/nova/image/glance.py", line 462, in 
_translate_from_glance
  image_meta = _extract_attributes(image)
File "/opt/stack/nova/nova/image/glance.py", line 530, in 
_extract_attributes
  output[attr] = getattr(image, attr)
File 
"/opt/stack/python-glanceclient/glanceclient/openstack/common/apiclient/base.py",
 line 462, in __getattr__
  return self.__getattr__(k)
File 
"/opt/stack/python-glanceclient/glanceclient/openstack/common/apiclient/base.py",
 line 464, in __getattr__
  raise AttributeError(k)
  AttributeError: disk_format
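
  The failing getattr in _extract_attributes (nova/image/glance.py, per
  the traceback above) has no default, so attributes that a queued image
  does not yet expose raise AttributeError. A minimal sketch of a more
  tolerant extraction loop; the expected_attrs parameter is added here
  for illustration and is not the real function signature:

    def _extract_attributes(image, expected_attrs):
        # Sketch only: queued images may lack attributes such as
        # disk_format, so fall back to None instead of letting
        # AttributeError bubble up to the API as an HTTP 500.
        output = {}
        for attr in expected_attrs:
            output[attr] = getattr(image, attr, None)
        return output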

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1321186/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1334142] Re: A server creation fails due to adding interface failure

2014-09-29 Thread Adam Gandelman
** Also affects: nova/icehouse
   Importance: Undecided
   Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1334142

Title:
  A server creation fails due to adding interface failure

Status in OpenStack Compute (Nova):
  Fix Released
Status in OpenStack Compute (nova) icehouse series:
  New

Bug description:
  http://logs.openstack.org/72/61972/27/gate/gate-tempest-dsvm-full/ed1ab55/logs/testr_results.html.gz

  pythonlogging:'': {{{
  2014-06-25 06:45:11,596 25675 INFO [tempest.common.rest_client] Request 
(DeleteServersTestXML:test_delete_server_while_in_verify_resize_state): 202 
POST http://127.0.0.1:8774/v2/aedc849c2c1742b8b1077c85d609b127/servers 0.295s
  2014-06-25 06:45:11,674 25675 INFO [tempest.common.rest_client] Request 
(DeleteServersTestXML:test_delete_server_while_in_verify_resize_state): 200 GET 
http://127.0.0.1:8774/v2/aedc849c2c1742b8b1077c85d609b127/servers/f9fb672b-f2e6-4303-b7d1-2c5aa324170f
 0.077s
  2014-06-25 06:45:12,977 25675 INFO [tempest.common.rest_client] Request 
(DeleteServersTestXML:test_delete_server_while_in_verify_resize_state): 200 GET 
http://127.0.0.1:8774/v2/aedc849c2c1742b8b1077c85d609b127/servers/f9fb672b-f2e6-4303-b7d1-2c5aa324170f
 0.300s
  2014-06-25 06:45:12,978 25675 INFO [tempest.common.waiters] State 
transition "BUILD/scheduling" ==> "BUILD/spawning" after 1 second wait
  2014-06-25 06:45:14,150 25675 INFO [tempest.common.rest_client] Request 
(DeleteServersTestXML:test_delete_server_while_in_verify_resize_state): 200 GET 
http://127.0.0.1:8774/v2/aedc849c2c1742b8b1077c85d609b127/servers/f9fb672b-f2e6-4303-b7d1-2c5aa324170f
 0.171s
  2014-06-25 06:45:14,153 25675 INFO [tempest.common.waiters] State 
transition "BUILD/spawning" ==> "ERROR/None" after 3 second wait
  2014-06-25 06:45:14,221 25675 INFO [tempest.common.rest_client] Request 
(DeleteServersTestXML:test_delete_server_while_in_verify_resize_state): 400 
POST 
http://127.0.0.1:8774/v2/aedc849c2c1742b8b1077c85d609b127/servers/f9fb672b-f2e6-4303-b7d1-2c5aa324170f/action
 0.066s
  2014-06-25 06:45:14,404 25675 INFO [tempest.common.rest_client] Request 
(DeleteServersTestXML:test_delete_server_while_in_verify_resize_state): 204 
DELETE 
http://127.0.0.1:8774/v2/aedc849c2c1742b8b1077c85d609b127/servers/f9fb672b-f2e6-4303-b7d1-2c5aa324170f
 0.182s
  }}}

  Traceback (most recent call last):
File "tempest/api/compute/servers/test_delete_server.py", line 97, in 
test_delete_server_while_in_verify_resize_state
  resp, server = self.create_test_server(wait_until='ACTIVE')
File "tempest/api/compute/base.py", line 247, in create_test_server
  raise ex
  BadRequest: Bad request
  Details: {'message': 'The server could not comply with the request since it 
is either malformed or otherwise incorrect.', 'code': '400'}

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1334142/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1303536] Re: Live migration fails. XML error: CPU feature `wdt' specified more than once

2014-09-29 Thread Adam Gandelman
** Also affects: nova/icehouse
   Importance: Undecided
   Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1303536

Title:
  Live migration fails. XML error: CPU feature `wdt' specified more than
  once

Status in OpenStack Compute (Nova):
  Fix Released
Status in OpenStack Compute (nova) icehouse series:
  New

Bug description:
  Description of problem
  ---

  Live migration fails.
  libvirt says "XML error: CPU feature `wdt' specified more than once"

  Version
  -

  ii  libvirt-bin 1.2.2-0ubuntu2
amd64programs for the libvirt library
  ii  python-libvirt  1.2.2-0ubuntu1
amd64libvirt Python bindings
  ii  nova-compute1:2014.1~b3-0ubuntu2  
all  OpenStack Compute - compute node base
  ii  nova-compute-kvm1:2014.1~b3-0ubuntu2  
all  OpenStack Compute - compute node (KVM)
  ii  nova-cert   1:2014.1~b3-0ubuntu2  
all  OpenStack Compute - certificate management

  DISTRIB_ID=Ubuntu
  DISTRIB_RELEASE=14.04
  DISTRIB_CODENAME=trusty
  DISTRIB_DESCRIPTION="Ubuntu Trusty Tahr (development branch)"
  NAME="Ubuntu"
  VERSION="14.04, Trusty Tahr"

  
  Test env
  --

  A two-node OpenStack Havana setup on Ubuntu 14.04; migrating an
  instance to the other node.

  
  Steps to Reproduce
  --
   - Migrate the instance

  
  And observe /var/log/nova/compute.log and /var/log/libvirt.log

  Actual results
  --

  /var/log/nova-conductor.log

  2014-04-04 13:42:17.128 3294 ERROR oslo.messaging._drivers.common [-] 
['Traceback (most recent call last):\n', '  File 
"/usr/lib/python2.7/dist-packages/oslo/messaging/rpc/dispatcher.py", line 133, 
in _dispatch_and_reply\nincoming.message))\n', '  File 
"/usr/lib/python2.7/dist-packages/oslo/messaging/rpc/dispatcher.py", line 176, 
in _dispatch\nreturn self._do_dispatch(endpoint, method, ctxt, args)\n', '  
File "/usr/lib/python2.7/dist-packages/oslo/messaging/rpc/dispatcher.py", line 
122, in _do_dispatch\nresult = getattr(endpoint, method)(ctxt, 
**new_args)\n', '  File 
"/usr/lib/python2.7/dist-packages/oslo/messaging/rpc/server.py", line 139, in 
inner\nreturn func(*args, **kwargs)\n', '  File 
"/usr/lib/python2.7/dist-packages/nova/conductor/manager.py", line 668, in 
migrate_server\nblock_migration, disk_over_commit)\n', '  File 
"/usr/lib/python2.7/dist-packages/nova/conductor/manager.py", line 769, in 
_live_migrate\nraise exception.MigrationError(reason=ex)\n'
 , 'MigrationError: Migration error: Remote error: libvirtError XML error: CPU 
feature `wdt\' specified more than once\n[u\'Traceback (most recent call 
last):\\n\', u\'  File 
"/usr/lib/python2.7/dist-packages/oslo/messaging/rpc/dispatcher.py", line 133, 
in _dispatch_and_reply\\nincoming.message))\\n\', u\'  File 
"/usr/lib/python2.7/dist-packages/oslo/messaging/rpc/dispatcher.py", line 176, 
in _dispatch\\nreturn self._do_dispatch(endpoint, method, ctxt, args)\\n\', 
u\'  File "/usr/lib/python2.7/dist-packages/oslo/messaging/rpc/dispatcher.py", 
line 122, in _do_dispatch\\nresult = getattr(endpoint, method)(ctxt, 
**new_args)\\n\', u\'  File 
"/usr/lib/python2.7/dist-packages/nova/exception.py", line 88, in wrapped\\n
payload)\\n\', u\'  File 
"/usr/lib/python2.7/dist-packages/nova/openstack/common/excutils.py", line 68, 
in __exit__\\nsix.reraise(self.type_, self.value, self.tb)\\n\', u\'  File 
"/usr/lib/python2.7/dist-packages/nova/exception.py", line 71, in wrapped\\n  
   return f(self, context, *args, **kw)\\n\', u\'  File 
"/usr/lib/python2.7/dist-packages/nova/compute/manager.py", line 272, in 
decorated_function\\ne, sys.exc_info())\\n\', u\'  File 
"/usr/lib/python2.7/dist-packages/nova/openstack/common/excutils.py", line 68, 
in __exit__\\nsix.reraise(self.type_, self.value, self.tb)\\n\', u\'  File 
"/usr/lib/python2.7/dist-packages/nova/compute/manager.py", line 259, in 
decorated_function\\nreturn function(self, context, *args, **kwargs)\\n\', 
u\'  File "/usr/lib/python2.7/dist-packages/nova/compute/manager.py", line 
4159, in check_can_live_migrate_destination\\nblock_migration, 
disk_over_commit)\\n\', u\'  File 
"/usr/lib/python2.7/dist-packages/nova/virt/libvirt/driver.py", line 4094, in 
check_can_live_migrate_destination\\n
self._compare_cpu(source_cpu_info)\\n\', u\'  File 
"/usr/lib/python2.7/dist-packages/nova/virt/libvirt/driver.py", line 4236, in 
_compare_cpu\\nLOG.error(m, {\\\'ret\\\': ret, \\\'u\\\': u})\\n\', u\'
   File "/usr/lib/python2.7/dist-packages/nova/openstack/common/excutils.py", 
li

[Yahoo-eng-team] [Bug 1296478] Re: The Hyper-V driver's list_instances() returns an empty result set on certain localized versions of the OS

2014-09-29 Thread Adam Gandelman
** Also affects: nova/icehouse
   Importance: Undecided
   Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1296478

Title:
  The Hyper-V driver's list_instances() returns an empty result set on
  certain localized versions of the OS

Status in OpenStack Compute (Nova):
  Fix Committed
Status in OpenStack Compute (nova) icehouse series:
  New

Bug description:
  This issue is related to different values that MSVM_ComputerSystem's
  Caption property can have on different locales.
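
  A heavily hedged sketch of the idea behind a fix: select VMs on
  something that does not change with the OS language instead of the
  localized Caption string. Which property is safe to rely on is an
  assumption here, not something confirmed by this report:

    import socket
    import wmi

    conn = wmi.WMI(namespace=r"root\virtualization")

    # Locale-dependent (the reported problem): Caption is a translated
    # string, so this matches nothing on non-English installations.
    vms_by_caption = conn.query(
        "SELECT * FROM Msvm_ComputerSystem WHERE Caption = 'Virtual Machine'")

    # Locale-independent idea (illustrative assumption only): enumerate
    # everything and drop the host entry, whose Name is the hostname
    # rather than a VM GUID, instead of matching a translated Caption.
    host_name = socket.gethostname().lower()
    vms = [vm for vm in conn.query("SELECT * FROM Msvm_ComputerSystem")
           if vm.Name.lower() != host_name]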

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1296478/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1327497] Re: live-migration fails when FC multipath is used

2014-09-29 Thread Adam Gandelman
** Also affects: nova/icehouse
   Importance: Undecided
   Status: New

** Changed in: nova/icehouse
   Status: New => Fix Committed

** Changed in: nova/icehouse
Milestone: None => 2014.1.3

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1327497

Title:
  live-migration fails when FC multipath is used

Status in OpenStack Compute (Nova):
  Fix Released
Status in OpenStack Compute (nova) icehouse series:
  Fix Committed

Bug description:
  I tried live migration of a VM with multipath access to an FC bootable
  volume and an FC data volume.
  After checking the code, I found two causes:
  1. /dev/dm- is used, which is subject to change on the destination
  compute node since it is not unique across nodes.
  2. multipath_id in connection_info is not maintained properly and may
  be lost when the connection info is refreshed.

  The fix would be (see the sketch below):
  1. Like iSCSI multipath, use /dev/mapper/ instead of /dev/dm-.
  2. Since multipath_id is unique for a volume no matter where it is
  attached, add logic to preserve this information.
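
  A rough sketch of point 2: carry the stable multipath_id forward when
  the connection info is refreshed. The surrounding function and the
  exact /dev/mapper path form are assumptions for illustration, not the
  actual driver code:

    def refresh_connection_info(old_connection_info, new_connection_info):
        # Hypothetical sketch: preserve the multipath_id across refreshes
        # and prefer a /dev/mapper path, which is the same on every
        # compute node, over the node-local /dev/dm-N name.
        old_data = old_connection_info.get('data', {})
        new_data = new_connection_info.setdefault('data', {})
        if 'multipath_id' in old_data and 'multipath_id' not in new_data:
            new_data['multipath_id'] = old_data['multipath_id']
        if new_data.get('multipath_id'):
            new_data['device_path'] = '/dev/mapper/%s' % new_data['multipath_id']
        return new_connection_info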

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1327497/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1334164] Re: nova error migrating VMs with floating ips: 'FixedIP' object has no attribute '_sa_instance_state'

2014-09-29 Thread Adam Gandelman
** Also affects: nova/icehouse
   Importance: Undecided
   Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1334164

Title:
  nova error migrating VMs with floating ips: 'FixedIP' object has no
  attribute '_sa_instance_state'

Status in Fuel: OpenStack installer that works:
  Fix Committed
Status in Fuel for OpenStack 5.0.x series:
  Fix Released
Status in Mirantis OpenStack:
  Fix Committed
Status in Mirantis OpenStack 5.0.x series:
  Fix Released
Status in Mirantis OpenStack 5.1.x series:
  Fix Committed
Status in OpenStack Compute (Nova):
  Fix Released
Status in OpenStack Compute (nova) icehouse series:
  New

Bug description:
  Seeing this in conductor logs when migrating a VM with a floating IP
  assigned:

  2014-06-23 09:22:38.899 20314 TRACE oslo.messaging.rpc.dispatcher Traceback 
(most recent call last):
  2014-06-23 09:22:38.899 20314 TRACE oslo.messaging.rpc.dispatcher   File 
"/usr/lib/python2.6/site-packages/oslo/messaging/rpc/dispatcher.py", line 133, 
in _dispatch_and_reply
  2014-06-23 09:22:38.899 20314 TRACE oslo.messaging.rpc.dispatcher 
incoming.message))
  2014-06-23 09:22:38.899 20314 TRACE oslo.messaging.rpc.dispatcher   File 
"/usr/lib/python2.6/site-packages/oslo/messaging/rpc/dispatcher.py", line 176, 
in _dispatch
  2014-06-23 09:22:38.899 20314 TRACE oslo.messaging.rpc.dispatcher return 
self._do_dispatch(endpoint, method, ctxt, args)
  2014-06-23 09:22:38.899 20314 TRACE oslo.messaging.rpc.dispatcher   File 
"/usr/lib/python2.6/site-packages/oslo/messaging/rpc/dispatcher.py", line 122, 
in _do_dispatch
  2014-06-23 09:22:38.899 20314 TRACE oslo.messaging.rpc.dispatcher result 
= getattr(endpoint, method)(ctxt, **new_args)
  2014-06-23 09:22:38.899 20314 TRACE oslo.messaging.rpc.dispatcher   File 
"/usr/lib/python2.6/site-packages/nova/conductor/manager.py", line 1019, in 
network_migrate_instance_start
  2014-06-23 09:22:38.899 20314 TRACE oslo.messaging.rpc.dispatcher 
migration)
  2014-06-23 09:22:38.899 20314 TRACE oslo.messaging.rpc.dispatcher   File 
"/usr/lib/python2.6/site-packages/nova/conductor/manager.py", line 527, in 
network_migrate_instance_start
  2014-06-23 09:22:38.899 20314 TRACE oslo.messaging.rpc.dispatcher 
self.network_api.migrate_instance_start(context, instance, migration)
  2014-06-23 09:22:38.899 20314 TRACE oslo.messaging.rpc.dispatcher   File 
"/usr/lib/python2.6/site-packages/nova/network/api.py", line 94, in wrapped
  2014-06-23 09:22:38.899 20314 TRACE oslo.messaging.rpc.dispatcher return 
func(self, context, *args, **kwargs)
  2014-06-23 09:22:38.899 20314 TRACE oslo.messaging.rpc.dispatcher   File 
"/usr/lib/python2.6/site-packages/nova/network/api.py", line 543, in 
migrate_instance_start
  2014-06-23 09:22:38.899 20314 TRACE oslo.messaging.rpc.dispatcher 
self.network_rpcapi.migrate_instance_start(context, **args)
  2014-06-23 09:22:38.899 20314 TRACE oslo.messaging.rpc.dispatcher   File 
"/usr/lib/python2.6/site-packages/nova/network/rpcapi.py", line 350, in 
migrate_instance_start
  2014-06-23 09:22:38.899 20314 TRACE oslo.messaging.rpc.dispatcher 
floating_addresses=floating_addresses)
  2014-06-23 09:22:38.899 20314 TRACE oslo.messaging.rpc.dispatcher   File 
"/usr/lib/python2.6/site-packages/oslo/messaging/rpc/client.py", line 150, in 
call
  2014-06-23 09:22:38.899 20314 TRACE oslo.messaging.rpc.dispatcher 
wait_for_reply=True, timeout=timeout)
  2014-06-23 09:22:38.899 20314 TRACE oslo.messaging.rpc.dispatcher   File 
"/usr/lib/python2.6/site-packages/oslo/messaging/transport.py", line 90, in 
_send
  2014-06-23 09:22:38.899 20314 TRACE oslo.messaging.rpc.dispatcher 
timeout=timeout)
  2014-06-23 09:22:38.899 20314 TRACE oslo.messaging.rpc.dispatcher   File 
"/usr/lib/python2.6/site-packages/oslo/messaging/_drivers/amqpdriver.py", line 
409, in send
  2014-06-23 09:22:38.899 20314 TRACE oslo.messaging.rpc.dispatcher return 
self._send(target, ctxt, message, wait_for_reply, timeout)
  2014-06-23 09:22:38.899 20314 TRACE oslo.messaging.rpc.dispatcher   File 
"/usr/lib/python2.6/site-packages/oslo/messaging/_drivers/amqpdriver.py", line 
402, in _send
  2014-06-23 09:22:38.899 20314 TRACE oslo.messaging.rpc.dispatcher raise 
result
  2014-06-23 09:22:38.899 20314 TRACE oslo.messaging.rpc.dispatcher 
AttributeError: 'FixedIP' object has no attribute '_sa_instance_state'

To manage notifications about this bug go to:
https://bugs.launchpad.net/fuel/+bug/1334164/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1308058] Re: Cannot create volume from glance image without checksum

2014-09-29 Thread Adam Gandelman
** Also affects: nova/icehouse
   Importance: Undecided
   Status: New

** Changed in: nova/icehouse
   Status: New => Fix Committed

** Changed in: nova/icehouse
Milestone: None => 2014.1.3

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1308058

Title:
  Cannot create volume from glance image without checksum

Status in Cinder:
  Fix Released
Status in Cinder icehouse series:
  Fix Released
Status in Fuel: OpenStack installer that works:
  Fix Committed
Status in OpenStack Compute (Nova):
  Fix Released
Status in OpenStack Compute (nova) icehouse series:
  Fix Committed

Bug description:
  It is no longer possible to create a volume from an image that does
  not have a checksum set.

  
https://github.com/openstack/cinder/commit/da13c6285bb0aee55cfbc93f55ce2e2b7d6a28f2
  - this patch removes the default of None from the getattr call.

  If this is intended, it would be nice to see something more informative
  in the logs.
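
  For reference, the behavioural difference is only the getattr default.
  A small illustration (image_meta is a stand-in name for the glance
  image object used by cinder, not the exact variable in the linked
  commit):

    def get_image_checksum(image_meta):
        # With a default, images that have no checksum yet simply yield
        # None:
        #     getattr(image_meta, 'checksum', None)
        # Without the default, as in the linked commit, the same lookup
        # raises AttributeError: checksum -- the error in the traceback
        # below.
        return getattr(image_meta, 'checksum', None)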

  2014-04-15 11:52:26.035 19000 ERROR cinder.api.middleware.fault 
[req-cf0f7b89-a9c1-4a10-b1ac-ddf415a28f24 c139cd16ac474d2184237ba837a04141 
83d5198d5f5a461798c6b843f57540d
  f - - -] Caught error: checksum
  2014-04-15 11:52:26.035 19000 TRACE cinder.api.middleware.fault Traceback 
(most recent call last):
  2014-04-15 11:52:26.035 19000 TRACE cinder.api.middleware.fault   File 
"/opt/stack/cinder/cinder/api/middleware/fault.py", line 75, in __call__
  2014-04-15 11:52:26.035 19000 TRACE cinder.api.middleware.fault return 
req.get_response(self.application)
  2014-04-15 11:52:26.035 19000 TRACE cinder.api.middleware.fault   File 
"/usr/local/lib/python2.7/dist-packages/webob/request.py", line 1320, in send
  2014-04-15 11:52:26.035 19000 TRACE cinder.api.middleware.fault 
application, catch_exc_info=False)
  2014-04-15 11:52:26.035 19000 TRACE cinder.api.middleware.fault   File 
"/usr/local/lib/python2.7/dist-packages/webob/request.py", line 1284, in 
call_application
  2014-04-15 11:52:26.035 19000 TRACE cinder.api.middleware.fault app_iter 
= application(self.environ, start_response)
  2014-04-15 11:52:26.035 19000 TRACE cinder.api.middleware.fault   File 
"/usr/local/lib/python2.7/dist-packages/webob/dec.py", line 144, in __call__
  2014-04-15 11:52:26.035 19000 TRACE cinder.api.middleware.fault return 
resp(environ, start_response)
  2014-04-15 11:52:26.035 19000 TRACE cinder.api.middleware.fault   File 
"/opt/stack/python-keystoneclient/keystoneclient/middleware/auth_token.py", 
line 615, in __call__
  2014-04-15 11:52:26.035 19000 TRACE cinder.api.middleware.fault return 
self.app(env, start_response)
  2014-04-15 11:52:26.035 19000 TRACE cinder.api.middleware.fault   File 
"/usr/local/lib/python2.7/dist-packages/webob/dec.py", line 144, in __call__
  2014-04-15 11:52:26.035 19000 TRACE cinder.api.middleware.fault return 
resp(environ, start_response)
  2014-04-15 11:52:26.035 19000 TRACE cinder.api.middleware.fault   File 
"/usr/local/lib/python2.7/dist-packages/webob/dec.py", line 144, in __call__
  2014-04-15 11:52:26.035 19000 TRACE cinder.api.middleware.fault return 
resp(environ, start_response)
  2014-04-15 11:52:26.035 19000 TRACE cinder.api.middleware.fault   File 
"/usr/lib/python2.7/dist-packages/routes/middleware.py", line 131, in __call__
  2014-04-15 11:52:26.035 19000 TRACE cinder.api.middleware.fault response 
= self.app(environ, start_response)
  2014-04-15 11:52:26.035 19000 TRACE cinder.api.middleware.fault   File 
"/usr/local/lib/python2.7/dist-packages/webob/dec.py", line 144, in __call__
  2014-04-15 11:52:26.035 19000 TRACE cinder.api.middleware.fault return 
resp(environ, start_response)
  2014-04-15 11:52:26.035 19000 TRACE cinder.api.middleware.fault   File 
"/usr/local/lib/python2.7/dist-packages/webob/dec.py", line 130, in __call__
  2014-04-15 11:52:26.035 19000 TRACE cinder.api.middleware.fault resp = 
self.call_func(req, *args, **self.kwargs)
  2014-04-15 11:52:26.035 19000 TRACE cinder.api.middleware.fault   File 
"/usr/local/lib/python2.7/dist-packages/webob/dec.py", line 195, in call_func
  2014-04-15 11:52:26.035 19000 TRACE cinder.api.middleware.fault return 
self.func(req, *args, **kwargs)
  2014-04-15 11:52:26.035 19000 TRACE cinder.api.middleware.fault   File 
"/opt/stack/cinder/cinder/api/openstack/wsgi.py", line 895, in __call__
  2014-04-15 11:52:26.035 19000 TRACE cinder.api.middleware.fault 
content_type, body, accept)
  2014-04-15 11:52:26.035 19000 TRACE cinder.api.middleware.fault   File 
"/opt/stack/cinder/cinder/api/openstack/wsgi.py", line 943, in _process_stack
  2014-04-15 11:52:26.035 19000 TRACE cinder.api.middleware.fault 
action_result = self.dispatch(meth, request, action_args)
  2014-04-15 11:52:26.035 19000 TRACE cinder.api.middleware.fault   File 
"/opt/stack/cinder/cinder/api/openstack/wsgi.py", line 1019, in dispatch
  2014-04-15 

[Yahoo-eng-team] [Bug 1319182] Re: Pausing a rescued instance should be impossible

2014-09-29 Thread Adam Gandelman
** Also affects: nova/icehouse
   Importance: Undecided
   Status: New

** Changed in: nova/icehouse
   Status: New => Fix Committed

** Changed in: nova/icehouse
Milestone: None => 2014.1.3

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1319182

Title:
  Pausing a rescued instance should be impossible

Status in OpenStack Compute (Nova):
  Fix Released
Status in OpenStack Compute (nova) icehouse series:
  Fix Committed

Bug description:
  In the following commands, 'vmtest' is a freshly created virtual
  machine.

  
  $ nova show vmtest | grep -E "(status|task_state)"
  | OS-EXT-STS:task_state| -
  | status   | ACTIVE

  $ nova rescue vmtest
  +---+--+
  | Property  | Value
  +---+--+
  | adminPass | 2ZxvzZULT4sr
  +---+--+

  $ nova show vmtest | grep -E "(status|task_state)"
  | OS-EXT-STS:task_state| -
  | status   | RESCUE

  $ nova pause vmtest

  $ nova show vmtest | grep -E "(status|task_state)"
  | OS-EXT-STS:task_state| -
  | status   | PAUSED

  $ nova unpause vmtest

  $ nova show vmtest | grep -E "(status|task_state)"
  | OS-EXT-STS:task_state| -
  | status   | ACTIVE

  Here, we would want the vm to be in the 'RESCUE' state, as it was
  before being paused.

  $ nova unrescue vmtest
  ERROR (Conflict): Cannot 'unrescue' while instance is in vm_state active 
(HTTP 409) (Request-ID: req-34b8004d-b072-4328-bbf9-29152bd4c34f)

  The 'unrescue' command fails, which seems to confirm that the VM was
  no longer being rescued.

  
  So, two possibilities:
  1) When unpausing, the vm should go back to 'rescued' state
  2) Rescued vms should not be allowed to be paused, as is indicated by this 
graph: http://docs.openstack.org/developer/nova/devref/vmstates.html

  
  Note that the same issue can be observed with suspend/resume instead of 
pause/unpause, and probably other commands as well.

  WDYT ?
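
  A sketch of option 2 above, in the style of Nova's state-checking
  decorator. The subclass and the exact state list are illustrative only;
  the real API may guard pause differently:

    from nova.compute import api as compute_api
    from nova.compute import vm_states

    class API(compute_api.API):
        # Illustrative only: if pause() accepts ACTIVE instances only, a
        # rescued instance is rejected with a 409 instead of silently
        # dropping back to ACTIVE after unpause.
        @compute_api.check_instance_state(vm_state=[vm_states.ACTIVE])
        def pause(self, context, instance):
            return super(API, self).pause(context, instance)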

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1319182/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1321640] Re: [HyperV]: Config drive is not attached to instance after resized or migrated

2014-09-29 Thread Adam Gandelman
** Also affects: nova/icehouse
   Importance: Undecided
   Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1321640

Title:
  [HyperV]: Config drive is not attached to instance after resized or
  migrated

Status in OpenStack Compute (Nova):
  Fix Released
Status in OpenStack Compute (nova) icehouse series:
  New

Bug description:
  If config-drive is used (whether by passing --config-drive=true on the
  boot command or by setting force_config_drive=always in nova.conf),
  there is a bug affecting the config drive when resizing or migrating
  instances on Hyper-V.

  You can see in the current nova code:
  https://github.com/openstack/nova/blob/master/nova/virt/hyperv/migrationops.py#L269
  that when the migration finishes, there is no code to attach
  configdrive.iso or configdrive.vhd to the resized instance, in contrast
  to instance boot
  (https://github.com/openstack/nova/blob/master/nova/virt/hyperv/vmops.py#L226).
  Although the commit https://review.openstack.org/#/c/55975/ handles
  copying the config drive to the resized or migrated instance, nothing
  attaches it afterwards.
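
  A hedged sketch of the missing step; the helper names here
  (_attach_config_drive_after_migration and the attach_drive callable)
  are placeholders, not the actual hyperv driver methods:

    import os

    CONFIGDRIVE_NAMES = ('configdrive.iso', 'configdrive.vhd')

    def _attach_config_drive_after_migration(instance_dir, attach_drive):
        # Placeholder sketch: after finish_migration, look for a copied
        # config drive in the instance directory and attach it back to
        # the VM, mirroring what vmops does at boot time.
        for name in CONFIGDRIVE_NAMES:
            path = os.path.join(instance_dir, name)
            if os.path.exists(path):
                attach_drive(path)
                break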

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1321640/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1327406] Re: The One And Only network is variously visible

2014-09-29 Thread Adam Gandelman
** Also affects: nova/icehouse
   Importance: Undecided
   Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1327406

Title:
  The One And Only network is variously visible

Status in OpenStack Compute (Nova):
  Fix Released
Status in OpenStack Compute (nova) icehouse series:
  New

Bug description:
  I am testing with the templates in
  https://review.openstack.org/#/c/97366/

  I can create a stack.  I can use `curl` to hit the webhooks to scale
  up and down the old-style group and to scale down the new-style group;
  those all work.  What fails is hitting the webhook to scale up the
  new-style group.  Here is a typescript showing the failure:

  $ curl -X POST
  
'http://10.10.0.125:8000/v1/signal/arn%3Aopenstack%3Aheat%3A%3A39675672862f4bd08505bfe1283773e0%3Astacks%2Ftest4
  %2F3cd6160b-
  
d8c5-48f1-a527-4c7df9205fc3%2Fresources%2FNewScaleUpPolicy?Timestamp=2014-06-06T19%3A45%3A27Z&SignatureMethod=HmacSHA256&AWSAccessKeyId=35678396d987432f87cda8e4c6cdbfb5&SignatureVersion=2&Signature=W3aJQ6SR7O5lLOxLEQndbzNB%2FUhefr1W7qO9zNZ%2BHVs%3D'

  The request processing has failed due to an 
internal error:Remote error: ResourceFailure Error: Nested stack UPDATE failed: 
Error: Resource CREATE failed: NotFound: No Network matching {'label': 
u'private'}. (HTTP 404)
  [u'Traceback (most recent call last):\n', u'  File 
"/opt/stack/heat/heat/engine/service.py", line 61, in wrapped\nreturn 
func(self, ctx, *args, **kwargs)\n', u'  File 
"/opt/stack/heat/heat/engine/service.py", line 911, in resource_signal\n
stack[resource_name].signal(details)\n', u'  File 
"/opt/stack/heat/heat/engine/resource.py", line 879, in signal\nraise 
failure\n', u"ResourceFailure: Error: Nested stack UPDATE failed: Error: 
Resource CREATE failed: NotFound: No Network matching {'label': u'private'}. 
(HTTP 
404)\n"].InternalFailureServer

  The original sin looks like this in the heat engine log:

  2014-06-06 17:39:20.013 28692 DEBUG urllib3.connectionpool 
[req-2391a9ea-46d6-46f0-9a7b-cf999a8697e9 ] "GET 
/v2/39675672862f4bd08505bfe1283773e0/os-networks HTTP/1.1" 200 16 _make_request 
/usr/lib/python2.7/dist-packages/urllib3/connectionpool.py:415
  2014-06-06 17:39:20.014 28692 ERROR heat.engine.resource 
[req-2391a9ea-46d6-46f0-9a7b-cf999a8697e9 None] CREATE : Server "my_instance" 
Stack "test1-new_style-qidqbd5nrk44-43e7l57kqf5w-4t3xdjrfrr7s" 
[20523269-0ebb-45b8-ad59-75f55607f3bd]
  2014-06-06 17:39:20.014 28692 TRACE heat.engine.resource Traceback (most 
recent call last):
  2014-06-06 17:39:20.014 28692 TRACE heat.engine.resource   File 
"/opt/stack/heat/heat/engine/resource.py", line 383, in _do_action
  2014-06-06 17:39:20.014 28692 TRACE heat.engine.resource handle())
  2014-06-06 17:39:20.014 28692 TRACE heat.engine.resource   File 
"/opt/stack/heat/heat/engine/resources/server.py", line 493, in handle_create
  2014-06-06 17:39:20.014 28692 TRACE heat.engine.resource nics = 
self._build_nics(self.properties.get(self.NETWORKS))
  2014-06-06 17:39:20.014 28692 TRACE heat.engine.resource   File 
"/opt/stack/heat/heat/engine/resources/server.py", line 597, in _build_nics
  2014-06-06 17:39:20.014 28692 TRACE heat.engine.resource network = 
self.nova().networks.find(label=label_or_uuid)
  2014-06-06 17:39:20.014 28692 TRACE heat.engine.resource   File 
"/opt/stack/python-novaclient/novaclient/base.py", line 194, in find
  2014-06-06 17:39:20.014 28692 TRACE heat.engine.resource raise 
exceptions.NotFound(msg)
  2014-06-06 17:39:20.014 28692 TRACE heat.engine.resource NotFound: No Network 
matching {'label': u'private'}. (HTTP 404)

  Private debug logging reveals that in the scale-up case, the call to
  "GET /v2/{tenant-id}/os-networks HTTP/1.1" returns with response code
  200 and an empty list of networks.  Comparing with the corresponding
  call when the stack is being created shows no difference in the calls
  --- because the normal logging omits the headers --- even though the
  results differ (when the stack is being created, the result contains
  the correct list of networks).  Turning on HTTP debug logging in the
  client reveals that the X-Auth-Token headers differ.

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1327406/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1326183] Re: detach interface fails as instance info cache is corrupted

2014-09-29 Thread Adam Gandelman
** Also affects: nova/icehouse
   Importance: Undecided
   Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1326183

Title:
  detach interface fails as instance info cache is corrupted

Status in OpenStack Compute (Nova):
  Fix Committed
Status in OpenStack Compute (nova) icehouse series:
  New

Bug description:
  
  Performing attach/detach interface operations on a VM sometimes results
  in an interface that can't be detached from the VM.
  I traced it to corrupted instance info cache data caused by non-atomic
  updates of that information.
  Details on how to reproduce the bug follow. Since this is a race
  condition, the test can take quite a bit of time before it hits the bug.
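
  The race is a classic read-modify-write on the cached network info. A
  simplified illustration (not Nova code) of how two concurrent updates
  can clobber each other:

    # Simplified illustration of the race (not Nova code): the instance
    # info cache is updated with a read-modify-write, so concurrent
    # attach/detach calls can overwrite each other's changes.
    cache = {'vifs': ['vif1']}

    def add_vif(vif):
        vifs = list(cache['vifs'])   # read
        vifs.append(vif)             # modify
        cache['vifs'] = vifs         # write -- clobbers concurrent updates

    # If two greenthreads run add_vif('vif2') and add_vif('vif3') after
    # both have read ['vif1'], the final cache holds only one of the new
    # VIFs, and the other interface can no longer be detached via the
    # cache.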

  Steps to reproduce:

  1) Devstack with trunk with the following local.conf:
  disable_service n-net
  enable_service q-svc
  enable_service q-agt
  enable_service q-dhcp
  enable_service q-l3
  enable_service q-meta
  enable_service q-metering
  RECLONE=yes
  # and other options as set in the trunk's local

  2) Create a few networks:
  $> neutron net-create testnet1
  $> neutron net-create testnet2
  $> neutron net-create testnet3
  $> neutron subnet-create testnet1 192.168.1.0/24
  $> neutron subnet-create testnet2 192.168.2.0/24
  $> neutron subnet-create testnet3 192.168.3.0/24

  3) Create a testvm in testnet1:
  $> nova boot --flavor m1.tiny --image cirros-0.3.2-x86_64-uec --nic 
net-id=`neutron net-list | grep testnet1 | cut -f 2 -d ' '` testvm

  4) Run the following shell script, which attaches and detaches interfaces
  for this VM in the remaining two networks in a loop until we run into the
  issue at hand:
  
  #! /bin/bash
  c=1
  netid1=`neutron net-list | grep testnet2 | cut -f 2 -d ' '`
  netid2=`neutron net-list | grep testnet3 | cut -f 2 -d ' '`
  while [ $c -gt 0 ]
  do
 echo "Round: " $c
 echo -n "Attaching two interfaces... "
 nova interface-attach --net-id $netid1 testvm
 nova interface-attach --net-id $netid2 testvm
 echo "Done"
 echo "Sleeping until both those show up in interfaces"
 waittime=0
 while [ $waittime -lt 60 ]
 do
 count=`nova interface-list testvm | wc -l`
 if [ $count -eq 7 ]
 then
 break
 fi
 sleep 2
 (( waittime+=2 ))
 done
 echo "Waited for " $waittime " seconds"
 echo "Detaching both... "
 nova interface-list testvm | grep $netid1 | awk '{print "deleting ",$4; 
system("nova interface-detach testvm "$4 " ; sleep 2");}'
 nova interface-list testvm | grep $netid2 | awk '{print "deleting ",$4; 
system("nova interface-detach testvm "$4 " ; sleep 2");}'
 echo "Done; check interfaces are gone in a minute."
 waittime=0
 while [ $waittime -lt 60 ]
 do
 count=`nova interface-list testvm | wc -l`
 echo "line count: " $count
 if [ $count -eq 5 ]
 then
 break
 fi
 sleep 2
 (( waittime+=2 ))
 done
 if [ $waittime -ge 60 ]
 then
echo "bad case"
exit 1
 fi
 echo "Interfaces are gone"
 ((  c-- ))
  done
  -

  Eventually the test will stop with a failure ("bad case") and the
  interface remaining either from testnet2 or testnet3 can not be
  detached at all.

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1326183/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1329764] Re: Hyper-V volume attach issue: wrong SCSI slot is selected

2014-09-29 Thread Adam Gandelman
** Also affects: nova/icehouse
   Importance: Undecided
   Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1329764

Title:
  Hyper-V volume attach issue: wrong SCSI slot is selected

Status in OpenStack Compute (Nova):
  Fix Released
Status in OpenStack Compute (nova) icehouse series:
  New

Bug description:
  When attaching volumes, the Hyper-V driver selects the slot on the
  SCSI controller by using the number of drives attached to that
  controller.

  This leads to exceptions when detaching volumes having lower numbered
  slots and then attaching a new volume.

  Take, for example, 2 attached volumes, which will have 0 and 1 as
  controller addresses. If the first one gets detached, the next time we
  try to attach a volume the controller address 1 will be used (as that
  is the number of drives attached to the controller at that time), but
  that slot is actually used, so an exception will be raised.
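
  A small sketch of the slot-selection idea: pick the first free address
  rather than the current drive count. The data structures and the slot
  limit are illustrative, not the actual vmutils API:

    def get_free_controller_slot(used_slots, max_slots=64):
        # Illustrative sketch: counting attached drives (len(used_slots))
        # breaks as soon as a lower-numbered slot has been freed by a
        # detach, so scan for the first unused address instead.
        for slot in range(max_slots):
            if slot not in used_slots:
                return slot
        raise RuntimeError("No free slots on the SCSI controller")

    # Example: slot 1 is in use, slot 0 was freed by an earlier detach.
    assert get_free_controller_slot({1}) == 0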

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1329764/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1337860] Re: VirtualInterfaceCreateException: Virtual Interface creation failed

2014-09-29 Thread Adam Gandelman
*** This bug is a duplicate of bug 1292243 ***
https://bugs.launchpad.net/bugs/1292243

** Also affects: nova/icehouse
   Importance: Undecided
   Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1337860

Title:
  VirtualInterfaceCreateException: Virtual Interface creation failed

Status in OpenStack Compute (Nova):
  New
Status in OpenStack Compute (nova) icehouse series:
  New

Bug description:
  After failing to launch 100 instances due to a size/memory allocation
  issue, I tried launching 100 smaller instances.
  After the first failure, 2 attempts to launch 100 smaller instances came
  back with errors on some of the instances.
  The 3rd time, I suddenly succeeded in launching all instances with no
  errors.
  This is reproduced 100%.

  To reproduce:
  Make sure you have enough computes to run 100 tiny flavor instances.

  1. Launch 100 instances with the largest flavor (you should fail on
  memory or size).
  2. Destroy all instances and run 100 tiny flavor instances - repeat this
  step until all instances are launched successfully.

  Some of the instances will fail to be created with the below error
  after the first failure, even though they should be capable of
  running. After several trials we suddenly manage to run all instances
  (so perhaps a cache issue).

  
  2014-07-04 14:46:19.728 15291 DEBUG nova.compute.utils 
[req-327fecfb-3bac-4a6d-aebe-3e06c03132e1 5a67ce69c6824e17b44bf15003ccc29f 
d22192179d3042a587ebd06bd6fd48d1] [instance: 
d2071fd8-8e09-4a43-b3c0-3ffb254b4c25] Virtual Interface creat
  ion failed notify_about_instance_usage 
/usr/lib/python2.7/site-packages/nova/compute/utils.py:336
  2014-07-04 14:46:19.728 15291 TRACE nova.compute.utils [instance: 
d2071fd8-8e09-4a43-b3c0-3ffb254b4c25] Traceback (most recent call last):
  2014-07-04 14:46:19.728 15291 TRACE nova.compute.utils [instance: 
d2071fd8-8e09-4a43-b3c0-3ffb254b4c25]   File 
"/usr/lib/python2.7/site-packages/nova/compute/manager.py", line 1191, in 
_run_instance
  2014-07-04 14:46:19.728 15291 TRACE nova.compute.utils [instance: 
d2071fd8-8e09-4a43-b3c0-3ffb254b4c25] instance, image_meta, 
legacy_bdm_in_spec)
  2014-07-04 14:46:19.728 15291 TRACE nova.compute.utils [instance: 
d2071fd8-8e09-4a43-b3c0-3ffb254b4c25]   File 
"/usr/lib/python2.7/site-packages/nova/compute/manager.py", line 1335, in 
_build_instance
  2014-07-04 14:46:19.728 15291 TRACE nova.compute.utils [instance: 
d2071fd8-8e09-4a43-b3c0-3ffb254b4c25] network_info.wait(do_raise=False)
  2014-07-04 14:46:19.728 15291 TRACE nova.compute.utils [instance: 
d2071fd8-8e09-4a43-b3c0-3ffb254b4c25]   File 
"/usr/lib/python2.7/site-packages/nova/openstack/common/excutils.py", line 68, 
in __exit__
  2014-07-04 14:46:19.728 15291 TRACE nova.compute.utils [instance: 
d2071fd8-8e09-4a43-b3c0-3ffb254b4c25] six.reraise(self.type_, self.value, 
self.tb)
  2014-07-04 14:46:19.728 15291 TRACE nova.compute.utils [instance: 
d2071fd8-8e09-4a43-b3c0-3ffb254b4c25]   File 
"/usr/lib/python2.7/site-packages/nova/compute/manager.py", line 1311, in 
_build_instance
  2014-07-04 14:46:19.728 15291 TRACE nova.compute.utils [instance: 
d2071fd8-8e09-4a43-b3c0-3ffb254b4c25] set_access_ip=set_access_ip)
  2014-07-04 14:46:19.728 15291 TRACE nova.compute.utils [instance: 
d2071fd8-8e09-4a43-b3c0-3ffb254b4c25]   File 
"/usr/lib/python2.7/site-packages/nova/compute/manager.py", line 399, in 
decorated_function
  2014-07-04 14:46:19.728 15291 TRACE nova.compute.utils [instance: 
d2071fd8-8e09-4a43-b3c0-3ffb254b4c25] return function(self, context, *args, 
**kwargs)
  2014-07-04 14:46:19.728 15291 TRACE nova.compute.utils [instance: 
d2071fd8-8e09-4a43-b3c0-3ffb254b4c25]   File 
"/usr/lib/python2.7/site-packages/nova/compute/manager.py", line 1723, in _spawn
  2014-07-04 14:46:19.728 15291 TRACE nova.compute.utils [instance: 
d2071fd8-8e09-4a43-b3c0-3ffb254b4c25] LOG.exception(_('Instance failed to 
spawn'), instance=instance)
  2014-07-04 14:46:19.728 15291 TRACE nova.compute.utils [instance: 
d2071fd8-8e09-4a43-b3c0-3ffb254b4c25]   File 
"/usr/lib/python2.7/site-packages/nova/openstack/common/excutils.py", line 68, 
in __exit__
  2014-07-04 14:46:19.728 15291 TRACE nova.compute.utils [instance: 
d2071fd8-8e09-4a43-b3c0-3ffb254b4c25] six.reraise(self.type_, self.value, 
self.tb)
  2014-07-04 14:46:19.728 15291 TRACE nova.compute.utils [instance: 
d2071fd8-8e09-4a43-b3c0-3ffb254b4c25]   File 
"/usr/lib/python2.7/site-packages/nova/compute/manager.py", line 1720, in _spawn
  2014-07-04 14:46:19.728 15291 TRACE nova.compute.utils [instance: 
d2071fd8-8e09-4a43-b3c0-3ffb254b4c25] block_device_info)
  2014-07-04 14:46:19.728 15291 TRACE nova.compute.utils [instance: 
d2071fd8-8e09-4a43-b3c0-3ffb254b4c25]   File 
"/usr/lib/python2.7/site-packages/nova/virt/libvirt/driver.py", line 2260, in 
spawn
  2014-07-04 14:46:19.728 15291 TRA

[Yahoo-eng-team] [Bug 1338451] Re: shelve api does not work in the nova-cell environment

2014-09-29 Thread Adam Gandelman
** Also affects: nova/icehouse
   Importance: Undecided
   Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1338451

Title:
  shelve api does not work in the nova-cell environment

Status in OpenStack Compute (Nova):
  Fix Released
Status in OpenStack Compute (nova) icehouse series:
  New

Bug description:
  If you run the nova shelve API in a nova-cells environment, it throws
  the following error:

  Nova cell (n-cell-child) Logs:

  2014-07-06 23:57:13.445 ERROR nova.cells.messaging 
[req-a689a1a1-4634-4634-974a-7343b5554f46 admin admin] Error processing message 
locally: save() got an unexpected keyword argument 'expected_task_state'
  2014-07-06 23:57:13.445 TRACE nova.cells.messaging Traceback (most recent 
call last):
  2014-07-06 23:57:13.445 TRACE nova.cells.messaging   File 
"/opt/stack/nova/nova/cells/messaging.py", line 200, in _process_locally
  2014-07-06 23:57:13.445 TRACE nova.cells.messaging resp_value = 
self.msg_runner._process_message_locally(self)
  2014-07-06 23:57:13.445 TRACE nova.cells.messaging   File 
"/opt/stack/nova/nova/cells/messaging.py", line 1287, in 
_process_message_locally
  2014-07-06 23:57:13.445 TRACE nova.cells.messaging return fn(message, 
**message.method_kwargs)
  2014-07-06 23:57:13.445 TRACE nova.cells.messaging   File 
"/opt/stack/nova/nova/cells/messaging.py", line 700, in run_compute_api_method
  2014-07-06 23:57:13.445 TRACE nova.cells.messaging return 
fn(message.ctxt, *args, **method_info['method_kwargs'])
  2014-07-06 23:57:13.445 TRACE nova.cells.messaging   File 
"/opt/stack/nova/nova/compute/api.py", line 192, in wrapped
  2014-07-06 23:57:13.445 TRACE nova.cells.messaging return func(self, 
context, target, *args, **kwargs)
  2014-07-06 23:57:13.445 TRACE nova.cells.messaging   File 
"/opt/stack/nova/nova/compute/api.py", line 182, in inner
  2014-07-06 23:57:13.445 TRACE nova.cells.messaging return function(self, 
context, instance, *args, **kwargs)
  2014-07-06 23:57:13.445 TRACE nova.cells.messaging   File 
"/opt/stack/nova/nova/compute/api.py", line 163, in inner
  2014-07-06 23:57:13.445 TRACE nova.cells.messaging return f(self, 
context, instance, *args, **kw)
  2014-07-06 23:57:13.445 TRACE nova.cells.messaging   File 
"/opt/stack/nova/nova/compute/api.py", line 2458, in shelve
  2014-07-06 23:57:13.445 TRACE nova.cells.messaging 
instance.save(expected_task_state=[None])
  2014-07-06 23:57:13.445 TRACE nova.cells.messaging TypeError: save() got an 
unexpected keyword argument 'expected_task_state'
  2014-07-06 23:57:13.445 TRACE nova.cells.messaging

  Nova compute log:

  2014-07-07 00:05:19.084 ERROR oslo.messaging.rpc.dispatcher 
[req-9539189d-239b-4e74-8aea-8076740
  31c2f admin admin] Exception during message handling: 'NoneType' object is 
not iterable
  Traceback (most recent call last):

    File 
"/usr/local/lib/python2.7/dist-packages/oslo/messaging/rpc/dispatcher.py", line 
134, in _
  dispatch_and_reply
  incoming.message))

    File 
"/usr/local/lib/python2.7/dist-packages/oslo/messaging/rpc/dispatcher.py", line 
177, in _
  dispatch
  return self._do_dispatch(endpoint, method, ctxt, args)

    File 
"/usr/local/lib/python2.7/dist-packages/oslo/messaging/rpc/dispatcher.py", line 
123, in _
  do_dispatch
  result = getattr(endpoint, method)(ctxt, **new_args)

    File "/opt/stack/nova/nova/conductor/manager.py", line 351, in 
notify_usage_exists
  system_metadata, extra_usage_info)

    File "/opt/stack/nova/nova/compute/utils.py", line 250, in 
notify_usage_exists
  ignore_missing_network_data)

    File "/opt/stack/nova/nova/notifications.py", line 285, in bandwidth_usage
  macs = [vif['address'] for vif in nw_info]

  TypeError: 'NoneType' object is not iterable

  2014-07-07 00:05:19.084 TRACE oslo.messaging.rpc.dispatcher Traceback (most 
recent call last):
  2014-07-07 00:05:19.084 TRACE oslo.messaging.rpc.dispatcher   File 
"/usr/local/lib/python2.7/dis
  t-packages/oslo/messaging/rpc/dispatcher.py", line 134, in _dispatch_and_reply
  2014-07-07 00:05:19.084 TRACE oslo.messaging.rpc.dispatcher 
incoming.message))
  2014-07-07 00:05:19.084 TRACE oslo.messaging.rpc.dispatcher   File 
"/usr/local/lib/python2.7/dist-packages/oslo/messaging/rpc/dispatcher.py", line 
177, in _dispatch
  2014-07-07 00:05:19.084 TRACE oslo.messaging.rpc.dispatcher return 
self._do_dispatch(endpoint, method, ctxt, args)
  2014-07-07 00:05:19.084 TRACE oslo.messaging.rpc.dispatcher   File 
"/usr/local/lib/python2.7/dist-packages/oslo/messaging/rpc/dispatcher.py", line 
123, in _do_dispatch
  2014-07-07 00:05:19.084 TRACE oslo.messaging.rpc.dispatcher result = 
getattr(endpoint, method)(ctxt, **new_args)
  2014-07-07 00:05:19.084 TRACE oslo.messaging.rpc.dispatcher   File 
"/opt/stack/nova/nova/exception.py", line 88, in wrapped
  2014-07-07 00:05:19.084 TRACE o

[Yahoo-eng-team] [Bug 1347777] Re: The compute_driver option description does not include the Hyper-V driver

2014-09-29 Thread Adam Gandelman
** Also affects: nova/icehouse
   Importance: Undecided
   Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1347777

Title:
  The compute_driver option description does not include the Hyper-V
  driver

Status in OpenStack Compute (Nova):
  Fix Released
Status in OpenStack Compute (nova) havana series:
  Fix Released
Status in OpenStack Compute (nova) icehouse series:
  New

Bug description:
  The description of the option "compute_driver" should include
  hyperv.HyperVDriver along with the other supported drivers.

  
https://github.com/openstack/nova/blob/aa018a718654b5f868c1226a6db7630751613d92/nova/virt/driver.py#L35-L38
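
  For reference, compute_driver is the option set in the [DEFAULT]
  section of nova.conf on the compute node; on a Hyper-V node it would
  look like this (excerpt only):

    [DEFAULT]
    compute_driver = hyperv.HyperVDriver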

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1347777/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1348720] Re: Missing index for expire_reservations

2014-09-29 Thread Adam Gandelman
** Also affects: nova/icehouse
   Importance: Undecided
   Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1348720

Title:
  Missing index for expire_reservations

Status in Cinder:
  Fix Released
Status in Cinder icehouse series:
  Fix Committed
Status in OpenStack Compute (Nova):
  Fix Released
Status in OpenStack Compute (nova) icehouse series:
  New

Bug description:
  While investigating some database performance problems, we discovered
  that there is no index on the deleted column of the reservations table.
  When this table gets large, the expire_reservations code will do a full
  table scan and take multiple seconds to complete. Because the expiry
  runs as a periodic task, it can slow down the master database
  significantly and cause nova or cinder to become extremely slow.

  > EXPLAIN UPDATE reservations SET updated_at=updated_at, deleted_at='2014-07-24 22:26:17', deleted=id WHERE reservations.deleted = 0 AND reservations.expire < '2014-07-24 22:26:11';
  +----+-------------+--------------+-------+---------------+---------+---------+------+--------+------------------------------+
  | id | select_type | table        | type  | possible_keys | key     | key_len | ref  | rows   | Extra                        |
  +----+-------------+--------------+-------+---------------+---------+---------+------+--------+------------------------------+
  |  1 | SIMPLE      | reservations | index | NULL          | PRIMARY | 4       | NULL | 950366 | Using where; Using temporary |
  +----+-------------+--------------+-------+---------------+---------+---------+------+--------+------------------------------+

  An index on (deleted, expire) would be the most efficient.
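
  For illustration, a minimal sqlalchemy-migrate style sketch of adding such
  an index (the migration and index names are hypothetical, not the actual
  nova/cinder change):

    # Hypothetical migration sketch; the real fix may differ in details.
    from sqlalchemy import Index, MetaData, Table


    def upgrade(migrate_engine):
        meta = MetaData(bind=migrate_engine)
        reservations = Table('reservations', meta, autoload=True)
        # Composite index so expire_reservations can filter on
        # (deleted, expire) without scanning the whole table.
        Index('reservations_deleted_expire_idx',
              reservations.c.deleted,
              reservations.c.expire).create(migrate_engine)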

To manage notifications about this bug go to:
https://bugs.launchpad.net/cinder/+bug/1348720/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1344036] Re: Hyper-V agent generates exception when force_hyperv_utils_v1 is True on Windows Server / Hyper-V Server 2012 R2

2014-09-29 Thread Adam Gandelman
** Also affects: nova/icehouse
   Importance: Undecided
   Status: New

** Changed in: nova/icehouse
   Importance: Undecided => Medium

** Changed in: nova/icehouse
   Status: New => Fix Committed

** Changed in: nova/icehouse
Milestone: None => 2014.1.3

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1344036

Title:
  Hyper-V agent generates exception when force_hyperv_utils_v1 is True
  on Windows Server / Hyper-V Server 2012 R2

Status in OpenStack Compute (Nova):
  Fix Released
Status in OpenStack Compute (nova) icehouse series:
  Fix Committed

Bug description:
  WMI root\virtualization namespace v1 (in Hyper-V) has been removed from 
Windows Server / Hyper-V Server 2012 R2, according to:
  http://technet.microsoft.com/en-us/library/dn303411.aspx

  Because of this, setting the force_hyperv_utils_v1 option on the
  Windows Server 2012 R2 nova compute agent's nova.conf will cause
  exceptions, since it will try to use the removed root\virtualization
  namespace v1.

  Logs:
  http://paste.openstack.org/show/87125/

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1344036/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1350466] Re: deadlock in scheduler expire reservation periodic task

2014-09-29 Thread Adam Gandelman
** Also affects: nova/icehouse
   Importance: Undecided
   Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1350466

Title:
  deadlock in scheduler expire reservation periodic task

Status in Cinder:
  Fix Released
Status in Cinder icehouse series:
  Fix Committed
Status in OpenStack Compute (Nova):
  Fix Committed
Status in OpenStack Compute (nova) icehouse series:
  New

Bug description:
  http://logs.openstack.org/54/105554/4/check/gate-tempest-dsvm-neutron-large-ops/45501af/logs/screen-n-sch.txt.gz?level=TRACE#_2014-07-30_16_26_20_158

  
  2014-07-30 16:26:20.158 17209 ERROR nova.openstack.common.periodic_task [-] 
Error during SchedulerManager._expire_reservations: (OperationalError) (1213, 
'Deadlock found when trying to get lock; try restarting transaction') 'UPDATE 
reservations SET updated_at=updated_at, deleted_at=%s, deleted=id WHERE 
reservations.deleted = %s AND reservations.expire < %s' 
(datetime.datetime(2014, 7, 30, 16, 26, 20, 152098), 0, datetime.datetime(2014, 
7, 30, 16, 26, 20, 149665))
  2014-07-30 16:26:20.158 17209 TRACE nova.openstack.common.periodic_task 
Traceback (most recent call last):
  2014-07-30 16:26:20.158 17209 TRACE nova.openstack.common.periodic_task   
File "/opt/stack/new/nova/nova/openstack/common/periodic_task.py", line 198, in 
run_periodic_tasks
  2014-07-30 16:26:20.158 17209 TRACE nova.openstack.common.periodic_task 
task(self, context)
  2014-07-30 16:26:20.158 17209 TRACE nova.openstack.common.periodic_task   
File "/opt/stack/new/nova/nova/scheduler/manager.py", line 157, in 
_expire_reservations
  2014-07-30 16:26:20.158 17209 TRACE nova.openstack.common.periodic_task 
QUOTAS.expire(context)
  2014-07-30 16:26:20.158 17209 TRACE nova.openstack.common.periodic_task   
File "/opt/stack/new/nova/nova/quota.py", line 1401, in expire
  2014-07-30 16:26:20.158 17209 TRACE nova.openstack.common.periodic_task 
self._driver.expire(context)
  2014-07-30 16:26:20.158 17209 TRACE nova.openstack.common.periodic_task   
File "/opt/stack/new/nova/nova/quota.py", line 651, in expire
  2014-07-30 16:26:20.158 17209 TRACE nova.openstack.common.periodic_task 
db.reservation_expire(context)
  2014-07-30 16:26:20.158 17209 TRACE nova.openstack.common.periodic_task   
File "/opt/stack/new/nova/nova/db/api.py", line 1173, in reservation_expire
  2014-07-30 16:26:20.158 17209 TRACE nova.openstack.common.periodic_task 
return IMPL.reservation_expire(context)
  2014-07-30 16:26:20.158 17209 TRACE nova.openstack.common.periodic_task   
File "/opt/stack/new/nova/nova/db/sqlalchemy/api.py", line 149, in wrapper
  2014-07-30 16:26:20.158 17209 TRACE nova.openstack.common.periodic_task 
return f(*args, **kwargs)
  2014-07-30 16:26:20.158 17209 TRACE nova.openstack.common.periodic_task   
File "/opt/stack/new/nova/nova/db/sqlalchemy/api.py", line 3394, in 
reservation_expire
  2014-07-30 16:26:20.158 17209 TRACE nova.openstack.common.periodic_task 
reservation_query.soft_delete(synchronize_session=False)
  2014-07-30 16:26:20.158 17209 TRACE nova.openstack.common.periodic_task   
File "/opt/stack/new/nova/nova/openstack/common/db/sqlalchemy/session.py", line 
694, in soft_delete
  2014-07-30 16:26:20.158 17209 TRACE nova.openstack.common.periodic_task 
synchronize_session=synchronize_session)
  2014-07-30 16:26:20.158 17209 TRACE nova.openstack.common.periodic_task   
File "/usr/lib/python2.7/dist-packages/sqlalchemy/orm/query.py", line 2690, in 
update
  2014-07-30 16:26:20.158 17209 TRACE nova.openstack.common.periodic_task 
update_op.exec_()
  2014-07-30 16:26:20.158 17209 TRACE nova.openstack.common.periodic_task   
File "/usr/lib/python2.7/dist-packages/sqlalchemy/orm/persistence.py", line 
816, in exec_
  2014-07-30 16:26:20.158 17209 TRACE nova.openstack.common.periodic_task 
self._do_exec()
  2014-07-30 16:26:20.158 17209 TRACE nova.openstack.common.periodic_task   
File "/usr/lib/python2.7/dist-packages/sqlalchemy/orm/persistence.py", line 
913, in _do_exec
  2014-07-30 16:26:20.158 17209 TRACE nova.openstack.common.periodic_task 
update_stmt, params=self.query._params)
  2014-07-30 16:26:20.158 17209 TRACE nova.openstack.common.periodic_task   
File "/opt/stack/new/nova/nova/openstack/common/db/sqlalchemy/session.py", line 
444, in _wrap
  2014-07-30 16:26:20.158 17209 TRACE nova.openstack.common.periodic_task 
_raise_if_deadlock_error(e, self.bind.dialect.name)
  2014-07-30 16:26:20.158 17209 TRACE nova.openstack.common.periodic_task   
File "/opt/stack/new/nova/nova/openstack/common/db/sqlalchemy/session.py", line 
427, in _raise_if_deadlock_error
  2014-07-30 16:26:20.158 17209 TRACE nova.openstack.common.periodic_task 
raise exception.DBDeadlock(operational_error)
  2014-07-30 16:26:20.158 17209 TRACE nova.openstack.common.periodic_task 
DBDeadlock: (OperationalError) (1213, 'D

[Yahoo-eng-team] [Bug 1354448] Re: The Hyper-V driver should raise a InstanceFaultRollback in case of resize down requests

2014-09-29 Thread Adam Gandelman
** Also affects: nova/icehouse
   Importance: Undecided
   Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1354448

Title:
  The Hyper-V driver should raise a InstanceFaultRollback in case of
  resize down requests

Status in OpenStack Compute (Nova):
  Fix Released
Status in OpenStack Compute (nova) icehouse series:
  New

Bug description:
  The Hyper-V driver does not support resize down and currently raises an
  exception if the user attempts it, causing the instance to go into the
  ERROR state.

  The driver should instead use the recently introduced instance fault
  rollback mechanism, "exception.InstanceFaultRollback", which will leave
  the instance in the ACTIVE state as expected.
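
  For illustration, a minimal sketch of the pattern described (a hypothetical
  helper, not the actual Hyper-V driver code; the exception names are from
  nova.exception as I understand them):

    from nova import exception


    def _check_resize_down(instance, flavor):
        # Wrapping the error in InstanceFaultRollback lets the compute
        # manager roll the migration back and keep the instance ACTIVE
        # instead of putting it into ERROR.
        if flavor['root_gb'] < instance['root_gb']:
            reason = "Hyper-V does not support resizing a disk down"
            raise exception.InstanceFaultRollback(
                exception.ResizeError(reason=reason))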

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1354448/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1353697] Re: Hyper-V agent raises UnsupportedRpcVersion: Specified RPC version, 1.1, not supported by this endpoint.

2014-09-29 Thread Adam Gandelman
** Also affects: neutron/icehouse
   Importance: Undecided
   Status: New

** Changed in: neutron/icehouse
   Importance: Undecided => High

** Changed in: neutron/icehouse
   Status: New => Fix Committed

** Changed in: neutron/icehouse
Milestone: None => 2014.1.3

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1353697

Title:
  Hyper-V agent raises UnsupportedRpcVersion: Specified RPC version,
  1.1, not supported by this endpoint.

Status in OpenStack Neutron (virtual network service):
  Fix Released
Status in neutron icehouse series:
  Fix Committed

Bug description:
  The Hyper-V agent raises:

  2014-08-06 10:42:37.096 2052 ERROR neutron.openstack.common.rpc.amqp 
[req-46340a1a-9143-45c9-b645-2612d41f20a6 None] Exception during message 
handling
  2014-08-06 10:42:37.096 2052 TRACE neutron.openstack.common.rpc.amqp 
Traceback (most recent call last):
  2014-08-06 10:42:37.096 2052 TRACE neutron.openstack.common.rpc.amqp   File 
"C:\Program Files (x86)\Cloudbase 
Solutions\OpenStack\Nova\Python27\lib\site-packages\neutron\openstack\common\rpc\amqp.py",
 line 462, in _process_data
  2014-08-06 10:42:37.096 2052 TRACE neutron.openstack.common.rpc.amqp 
**args)
  2014-08-06 10:42:37.096 2052 TRACE neutron.openstack.common.rpc.amqp   File 
"C:\Program Files (x86)\Cloudbase 
Solutions\OpenStack\Nova\Python27\lib\site-packages\neutron\openstack\common\rpc\dispatcher.py",
 line 178, in dispatch
  2014-08-06 10:42:37.096 2052 TRACE neutron.openstack.common.rpc.amqp 
raise rpc_common.UnsupportedRpcVersion(version=version)
  2014-08-06 10:42:37.096 2052 TRACE neutron.openstack.common.rpc.amqp 
UnsupportedRpcVersion: Specified RPC version, 1.1, not supported by this 
endpoint.
  2014-08-06 10:42:37.096 2052 TRACE neutron.openstack.common.rpc.amqp 

  The issue does not affect functionality, but it creates a lot of noise
  in the logs since the error is logged at each iteration.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1353697/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1352428] Re: HyperV "Shutting Down" state is not mapped

2014-09-29 Thread Adam Gandelman
** Also affects: nova/icehouse
   Importance: Undecided
   Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1352428

Title:
  HyperV "Shutting Down" state is not mapped

Status in OpenStack Compute (Nova):
  Fix Released
Status in OpenStack Compute (nova) icehouse series:
  New

Bug description:
  The method that gets VM-related information can fail if the VM is in an
  intermediary state such as "Shutting Down".
  The reason is that some of the Hyper-V specific VM states are not defined
  as possible states.

  This will result in a KeyError, as shown below:

  http://paste.openstack.org/show/90015/
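
  For illustration, a sketch of the mapping problem (the state names and
  mappings below are hypothetical placeholders, not the real Hyper-V driver
  constants):

    # If an intermediary state such as "Shutting Down" is missing from the
    # map, looking it up raises the KeyError shown above.
    POWER_STATE_MAP = {
        'Enabled': 'running',
        'Disabled': 'shutdown',
        'Paused': 'paused',
        # 'Shutting Down' was not listed, so lookups failed mid-shutdown.
        'Shutting Down': 'shutdown',
    }


    def translate_state(hyperv_state):
        # A plain dict lookup fails for unmapped states; the fix is to map
        # every state Hyper-V can report.
        return POWER_STATE_MAP[hyperv_state]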

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1352428/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1357063] Re: nova.virt.driver "Emitting event" log message in stable/icehouse doesn't show anything

2014-09-29 Thread Adam Gandelman
** Also affects: nova/icehouse
   Importance: Undecided
   Status: New

** Changed in: nova/icehouse
Milestone: None => 2014.1.3

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1357063

Title:
  nova.virt.driver "Emitting event" log message in stable/icehouse
  doesn't show anything

Status in OpenStack Compute (Nova):
  Confirmed
Status in OpenStack Compute (nova) icehouse series:
  New

Bug description:
  This is fixed on master with commit
  8c98b601f2db1f078d5f42ab94043d9939608f73 but is useless on
  stable/icehouse, here is an example snip from a stable/icehouse
  tempest run of what this looks like in the n-cpu log:

  2014-08-14 16:18:53.311 473 DEBUG nova.virt.driver [-] Emitting event
  emit_event /opt/stack/new/nova/nova/virt/driver.py:1207

  It would be really nice to use that information in trying to debug
  what's causing all of these hits for InstanceInfoCacheNotFound stack
  traces:

  
http://logstash.openstack.org/#eyJzZWFyY2giOiJtZXNzYWdlOlwiRXhjZXB0aW9uIGRpc3BhdGNoaW5nIGV2ZW50XCIgQU5EIG1lc3NhZ2U6XCJJbmZvIGNhY2hlIGZvciBpbnN0YW5jZVwiIEFORCBtZXNzYWdlOlwiY291bGQgbm90IGJlIGZvdW5kXCIgQU5EIHRhZ3M6XCJzY3JlZW4tbi1jcHUudHh0XCIgQU5EIE5PVCBidWlsZF9icmFuY2g6XCJtYXN0ZXJcIiIsImZpZWxkcyI6W10sIm9mZnNldCI6MCwidGltZWZyYW1lIjoiNjA0ODAwIiwiZ3JhcGhtb2RlIjoiY291bnQiLCJ0aW1lIjp7InVzZXJfaW50ZXJ2YWwiOjB9LCJzdGFtcCI6MTQwODA0NzMxMzM5Nn0=

  We should backport that repr fix to stable/icehouse for serviceability
  purposes.

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1357063/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1357102] Re: Big Switch: Multiple read calls to consistency DB fails

2014-09-29 Thread Adam Gandelman
** Also affects: neutron/icehouse
   Importance: Undecided
   Status: New

** Changed in: neutron/icehouse
   Status: New => Fix Committed

** Changed in: neutron/icehouse
Milestone: None => 2014.1.3

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1357102

Title:
  Big Switch: Multiple read calls to consistency DB fails

Status in OpenStack Neutron (virtual network service):
  Fix Committed
Status in neutron icehouse series:
  Fix Committed

Bug description:
  The Big Switch consistency DB throws an exception if read_for_update() is 
called multiple times without closing the transaction in between. This was 
originally because there was a DB lock in place and a single thread could 
deadlock if it tried twice. However, 
  there is no longer a point to this protection because the DB lock is gone and 
certain response failures result in the DB being read twice (the second time 
for a retry).

  2014-08-14 21:56:41.496 12939 ERROR neutron.plugins.ml2.managers 
[req-ee311173-b38a-481e-8900-d963c676b05f None] Mechanism driver 'bigswitch' 
failed in update_port_postcommit
  2014-08-14 21:56:41.496 12939 TRACE neutron.plugins.ml2.managers Traceback 
(most recent call last):
  2014-08-14 21:56:41.496 12939 TRACE neutron.plugins.ml2.managers   File 
"/usr/lib/python2.7/dist-packages/neutron/plugins/ml2/managers.py", line 168, 
in _call_on_drivers
  2014-08-14 21:56:41.496 12939 TRACE neutron.plugins.ml2.managers 
getattr(driver.obj, method_name)(context)
  2014-08-14 21:56:41.496 12939 TRACE neutron.plugins.ml2.managers   File 
"/usr/lib/python2.7/dist-packages/neutron/plugins/ml2/drivers/mech_bigswitch/driver.py",
 line 91, in update_port_postcommit
  2014-08-14 21:56:41.496 12939 TRACE neutron.plugins.ml2.managers 
port["network"]["id"], port)
  2014-08-14 21:56:41.496 12939 TRACE neutron.plugins.ml2.managers   File 
"/usr/lib/python2.7/dist-packages/neutron/plugins/bigswitch/servermanager.py", 
line 555, in rest_update_port
  2014-08-14 21:56:41.496 12939 TRACE neutron.plugins.ml2.managers 
self.rest_create_port(tenant_id, net_id, port)
  2014-08-14 21:56:41.496 12939 TRACE neutron.plugins.ml2.managers   File 
"/usr/lib/python2.7/dist-packages/neutron/plugins/bigswitch/servermanager.py", 
line 545, in rest_create_port
  2014-08-14 21:56:41.496 12939 TRACE neutron.plugins.ml2.managers 
self.rest_action('PUT', resource, data, errstr)
  2014-08-14 21:56:41.496 12939 TRACE neutron.plugins.ml2.managers   File 
"/usr/lib/python2.7/dist-packages/neutron/plugins/bigswitch/servermanager.py", 
line 476, in rest_action
  2014-08-14 21:56:41.496 12939 TRACE neutron.plugins.ml2.managers timeout)
  2014-08-14 21:56:41.496 12939 TRACE neutron.plugins.ml2.managers   File 
"/usr/lib/python2.7/dist-packages/neutron/openstack/common/lockutils.py", line 
249, in inner
  2014-08-14 21:56:41.496 12939 TRACE neutron.plugins.ml2.managers return 
f(*args, **kwargs)
  2014-08-14 21:56:41.496 12939 TRACE neutron.plugins.ml2.managers   File 
"/usr/lib/python2.7/dist-packages/neutron/plugins/bigswitch/servermanager.py", 
line 423, in rest_call
  2014-08-14 21:56:41.496 12939 TRACE neutron.plugins.ml2.managers 
hash_handler=hash_handler)
  2014-08-14 21:56:41.496 12939 TRACE neutron.plugins.ml2.managers   File 
"/usr/lib/python2.7/dist-packages/neutron/plugins/bigswitch/servermanager.py", 
line 139, in rest_call
  2014-08-14 21:56:41.496 12939 TRACE neutron.plugins.ml2.managers 
headers[HASH_MATCH_HEADER] = hash_handler.read_for_update()
  2014-08-14 21:56:41.496 12939 TRACE neutron.plugins.ml2.managers   File 
"/usr/lib/python2.7/dist-packages/neutron/plugins/bigswitch/db/consistency_db.py",
 line 56, in read_for_update
  2014-08-14 21:56:41.496 12939 TRACE neutron.plugins.ml2.managers raise 
MultipleReadForUpdateCalls()
  2014-08-14 21:56:41.496 12939 TRACE neutron.plugins.ml2.managers 
MultipleReadForUpdateCalls: Only one read_for_update call may be made at a time.
  2014-08-14 21:56:41.496 12939 TRACE neutron.plugins.ml2.managers

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1357102/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1357578] Re: Unit test: nova.tests.integrated.test_multiprocess_api.MultiprocessWSGITest.test_terminate_sigterm timing out in gate

2014-09-29 Thread Adam Gandelman
** Also affects: nova/icehouse
   Importance: Undecided
   Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1357578

Title:
  Unit test:
  
nova.tests.integrated.test_multiprocess_api.MultiprocessWSGITest.test_terminate_sigterm
  timing out in gate

Status in OpenStack Compute (Nova):
  Fix Committed
Status in OpenStack Compute (nova) icehouse series:
  New

Bug description:
  http://logs.openstack.org/62/114062/3/gate/gate-nova-python27/2536ea4/console.html

   FAIL:
  
nova.tests.integrated.test_multiprocess_api.MultiprocessWSGITest.test_terminate_sigterm

  2014-08-15 13:46:09.155 | INFO [nova.tests.integrated.api.client] Doing GET on /v2/openstack//flavors/detail
  2014-08-15 13:46:09.155 | INFO [nova.tests.integrated.test_multiprocess_api] 
sent launcher_process pid: 10564 signal: 15
  2014-08-15 13:46:09.155 | INFO [nova.tests.integrated.test_multiprocess_api] 
waiting on process 10566 to exit
  2014-08-15 13:46:09.155 | INFO [nova.wsgi] Stopping WSGI server.
  2014-08-15 13:46:09.155 | }}}
  2014-08-15 13:46:09.156 | 
  2014-08-15 13:46:09.156 | Traceback (most recent call last):
  2014-08-15 13:46:09.156 |   File 
"nova/tests/integrated/test_multiprocess_api.py", line 206, in 
test_terminate_sigterm
  2014-08-15 13:46:09.156 | self._terminate_with_signal(signal.SIGTERM)
  2014-08-15 13:46:09.156 |   File 
"nova/tests/integrated/test_multiprocess_api.py", line 194, in 
_terminate_with_signal
  2014-08-15 13:46:09.156 | self.wait_on_process_until_end(pid)
  2014-08-15 13:46:09.156 |   File 
"nova/tests/integrated/test_multiprocess_api.py", line 146, in 
wait_on_process_until_end
  2014-08-15 13:46:09.157 | time.sleep(0.1)
  2014-08-15 13:46:09.157 |   File 
"/home/jenkins/workspace/gate-nova-python27/.tox/py27/local/lib/python2.7/site-packages/eventlet/greenthread.py",
 line 31, in sleep
  2014-08-15 13:46:09.157 | hub.switch()
  2014-08-15 13:46:09.157 |   File 
"/home/jenkins/workspace/gate-nova-python27/.tox/py27/local/lib/python2.7/site-packages/eventlet/hubs/hub.py",
 line 287, in switch
  2014-08-15 13:46:09.157 | return self.greenlet.switch()
  2014-08-15 13:46:09.157 |   File 
"/home/jenkins/workspace/gate-nova-python27/.tox/py27/local/lib/python2.7/site-packages/eventlet/hubs/hub.py",
 line 339, in run
  2014-08-15 13:46:09.158 | self.wait(sleep_time)
  2014-08-15 13:46:09.158 |   File 
"/home/jenkins/workspace/gate-nova-python27/.tox/py27/local/lib/python2.7/site-packages/eventlet/hubs/poll.py",
 line 82, in wait
  2014-08-15 13:46:09.158 | sleep(seconds)
  2014-08-15 13:46:09.158 |   File 
"/home/jenkins/workspace/gate-nova-python27/.tox/py27/local/lib/python2.7/site-packages/fixtures/_fixtures/timeout.py",
 line 52, in signal_handler
  2014-08-15 13:46:09.158 | raise TimeoutException()
  2014-08-15 13:46:09.158 | TimeoutException

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1357578/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1362221] Re: VMs fail to start when Ceph is used as a backend for ephemeral drives

2014-09-29 Thread Adam Gandelman
** Also affects: nova/icehouse
   Importance: Undecided
   Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1362221

Title:
  VMs fail to start when Ceph is used as a backend for ephemeral drives

Status in OpenStack Compute (Nova):
  Fix Committed
Status in OpenStack Compute (nova) icehouse series:
  New

Bug description:
  Ceph has been chosen as the placement backend for the VMs' drives
  (libvirt.images_type == 'rbd').

  When a user creates a flavor that specifies:
     - root drive size >0
     - ephemeral drive size >0 (important)

  and tries to boot a VM, they get "No valid host was found" in the
  scheduler log:

  Error from last host: node-3.int.host.com (node node-3.int.host.com):
  [u'Traceback (most recent call last):\n',
   u'  File "/usr/lib/python2.6/site-packages/nova/compute/manager.py", line 1305, in _build_instance\n    set_access_ip=set_access_ip)\n',
   u'  File "/usr/lib/python2.6/site-packages/nova/compute/manager.py", line 393, in decorated_function\n    return function(self, context, *args, **kwargs)\n',
   u'  File "/usr/lib/python2.6/site-packages/nova/compute/manager.py", line 1717, in _spawn\n    LOG.exception(_(\'Instance failed to spawn\'), instance=instance)\n',
   u'  File "/usr/lib/python2.6/site-packages/nova/openstack/common/excutils.py", line 68, in __exit__\n    six.reraise(self.type_, self.value, self.tb)\n',
   u'  File "/usr/lib/python2.6/site-packages/nova/compute/manager.py", line 1714, in _spawn\n    block_device_info)\n',
   u'  File "/usr/lib/python2.6/site-packages/nova/virt/libvirt/driver.py", line 2259, in spawn\n    admin_pass=admin_password)\n',
   u'  File "/usr/lib/python2.6/site-packages/nova/virt/libvirt/driver.py", line 2648, in _create_image\n    ephemeral_size=ephemeral_gb)\n',
   u'  File "/usr/lib/python2.6/site-packages/nova/virt/libvirt/imagebackend.py", line 186, in cache\n    *args, **kwargs)\n',
   u'  File "/usr/lib/python2.6/site-packages/nova/virt/libvirt/imagebackend.py", line 587, in create_image\n    prepare_template(target=base, max_size=size, *args, **kwargs)\n',
   u'  File "/usr/lib/python2.6/site-packages/nova/openstack/common/lockutils.py", line 249, in inner\n    return f(*args, **kwargs)\n',
   u'  File "/usr/lib/python2.6/site-packages/nova/virt/libvirt/imagebackend.py", line 176, in fetch_func_sync\n    fetch_func(target=target, *args, **kwargs)\n',
   u'  File "/usr/lib/python2.6/site-packages/nova/virt/libvirt/driver.py", line 2458, in _create_ephemeral\n    disk.mkfs(os_type, fs_label, target, run_as_root=is_block_dev)\n',
   u'  File "/usr/lib/python2.6/site-packages/nova/virt/disk/api.py", line 117, in mkfs\n    utils.mkfs(default_fs, target, fs_label, run_as_root=run_as_root)\n',
   u'  File "/usr/lib/python2.6/site-packages/nova/utils.py", line 856, in mkfs\n    execute(*args, run_as_root=run_as_root)\n',
   u'  File "/usr/lib/python2.6/site-packages/nova/utils.py", line 165, in execute\n    return processutils.execute(*cmd, **kwargs)\n',
   u'  File "/usr/lib/python2.6/site-packages/nova/openstack/common/processutils.py", line 193, in execute\n    cmd=\' \'.join(cmd))\n',
   u"ProcessExecutionError: Unexpected error while running command.\nCommand: sudo nova-rootwrap /etc/nova/rootwrap.conf mkfs -t ext3 -F -L ephemeral0 /var/lib/nova/instances/_base/ephemeral_1_default\nExit code: 1\nStdout: ''\nStderr: 'mke2fs 1.41.12 (17-May-2010)\\nmkfs.ext3: No such file or directory while trying to determine filesystem size\\n'\n"]

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1362221/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1357476] Re: Timeout waiting for vif plugging callback for instance

2014-09-29 Thread Adam Gandelman
** Also affects: nova/icehouse
   Importance: Undecided
   Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1357476

Title:
  Timeout waiting for vif plugging callback for instance

Status in OpenStack Neutron (virtual network service):
  Confirmed
Status in OpenStack Compute (Nova):
  Confirmed
Status in OpenStack Compute (nova) icehouse series:
  New

Bug description:
  n-cpu times out while waiting for neutron.

  
  Logstash
  
  
http://logstash.openstack.org/#eyJzZWFyY2giOiIgbWVzc2FnZTogXCJUaW1lb3V0IHdhaXRpbmcgZm9yIHZpZiBwbHVnZ2luZyBjYWxsYmFjayBmb3IgaW5zdGFuY2VcIiBBTkQgdGFnczpcInNjcmVlbi1uLWNwdS50eHRcIiIsImZpZWxkcyI6W10sIm9mZnNldCI6MCwidGltZWZyYW1lIjoiNjA0ODAwIiwiZ3JhcGhtb2RlIjoiY291bnQiLCJ0aW1lIjp7InVzZXJfaW50ZXJ2YWwiOjB9LCJzdGFtcCI6MTQwODEyMjI1NjY2NiwibW9kZSI6IiIsImFuYWx5emVfZmllbGQiOiIifQ==

  message: "Timeout waiting for vif plugging callback for instance" AND
  tags:"screen-n-cpu.txt"

  
  Logs
  
  
http://logs.openstack.org/09/108909/4/gate/check-tempest-dsvm-neutron-full/628138b/logs/screen-n-cpu.txt.gz#_2014-08-13_21_14_53_453

  2014-08-13 21:14:53.453 WARNING nova.virt.libvirt.driver [req-
  0974eac5-f261-472e-a2c3-f96514e4131c ServerActionsTestXML-650848250
  ServerActionsTestXML-1011304525] Timeout waiting for vif plugging
  callback for instance 794ceb8c-a08b-4b02-bdcb-4ad5632f7744

  2014-08-13 21:14:55.408 ERROR nova.compute.manager 
[req-0974eac5-f261-472e-a2c3-f96514e4131c ServerActionsTestXML-650848250 
ServerActionsTestXML-1011304525] [instance: 
794ceb8c-a08b-4b02-bdcb-4ad5632f7744] Setting instance vm_state to ERROR
  2014-08-13 21:14:55.408 29002 TRACE nova.compute.manager [instance: 
794ceb8c-a08b-4b02-bdcb-4ad5632f7744] Traceback (most recent call last):
  2014-08-13 21:14:55.408 29002 TRACE nova.compute.manager [instance: 
794ceb8c-a08b-4b02-bdcb-4ad5632f7744]   File 
"/opt/stack/new/nova/nova/compute/manager.py", line 3714, in finish_resize
  2014-08-13 21:14:55.408 29002 TRACE nova.compute.manager [instance: 
794ceb8c-a08b-4b02-bdcb-4ad5632f7744] disk_info, image)
  2014-08-13 21:14:55.408 29002 TRACE nova.compute.manager [instance: 
794ceb8c-a08b-4b02-bdcb-4ad5632f7744]   File 
"/opt/stack/new/nova/nova/compute/manager.py", line 3682, in _finish_resize
  2014-08-13 21:14:55.408 29002 TRACE nova.compute.manager [instance: 
794ceb8c-a08b-4b02-bdcb-4ad5632f7744] old_instance_type, sys_meta)
  2014-08-13 21:14:55.408 29002 TRACE nova.compute.manager [instance: 
794ceb8c-a08b-4b02-bdcb-4ad5632f7744]   File 
"/opt/stack/new/nova/nova/openstack/common/excutils.py", line 82, in __exit__
  2014-08-13 21:14:55.408 29002 TRACE nova.compute.manager [instance: 
794ceb8c-a08b-4b02-bdcb-4ad5632f7744] six.reraise(self.type_, self.value, 
self.tb)
  2014-08-13 21:14:55.408 29002 TRACE nova.compute.manager [instance: 
794ceb8c-a08b-4b02-bdcb-4ad5632f7744]   File 
"/opt/stack/new/nova/nova/compute/manager.py", line 3677, in _finish_resize
  2014-08-13 21:14:55.408 29002 TRACE nova.compute.manager [instance: 
794ceb8c-a08b-4b02-bdcb-4ad5632f7744] block_device_info, power_on)
  2014-08-13 21:14:55.408 29002 TRACE nova.compute.manager [instance: 
794ceb8c-a08b-4b02-bdcb-4ad5632f7744]   File 
"/opt/stack/new/nova/nova/virt/libvirt/driver.py", line 5302, in 
finish_migration
  2014-08-13 21:14:55.408 29002 TRACE nova.compute.manager [instance: 
794ceb8c-a08b-4b02-bdcb-4ad5632f7744] block_device_info, power_on)
  2014-08-13 21:14:55.408 29002 TRACE nova.compute.manager [instance: 
794ceb8c-a08b-4b02-bdcb-4ad5632f7744]   File 
"/opt/stack/new/nova/nova/virt/libvirt/driver.py", line 3792, in 
_create_domain_and_network
  2014-08-13 21:14:55.408 29002 TRACE nova.compute.manager [instance: 
794ceb8c-a08b-4b02-bdcb-4ad5632f7744] raise 
exception.VirtualInterfaceCreateException()
  2014-08-13 21:14:55.408 29002 TRACE nova.compute.manager [instance: 
794ceb8c-a08b-4b02-bdcb-4ad5632f7744] VirtualInterfaceCreateException: Virtual 
Interface creation failed
  2014-08-13 21:14:55.408 29002 TRACE nova.compute.manager [instance: 
794ceb8c-a08b-4b02-bdcb-4ad5632f7744] 

  2014-08-13 21:14:56.138 ERROR oslo.messaging.rpc.dispatcher 
[req-0974eac5-f261-472e-a2c3-f96514e4131c ServerActionsTestXML-650848250 
ServerActionsTestXML-1011304525] Exception during message handling: Virtual 
Interface creation failed
  2014-08-13 21:14:56.138 29002 TRACE oslo.messaging.rpc.dispatcher Traceback 
(most recent call last):
  2014-08-13 21:14:56.138 29002 TRACE oslo.messaging.rpc.dispatcher   File 
"/usr/local/lib/python2.7/dist-packages/oslo/messaging/rpc/dispatcher.py", line 
134, in _dispatch_and_reply
  2014-08-13 21:14:56.138 29002 TRACE oslo.messaging.rpc.dispatcher 
incoming.message))
  2014-08-13 21:14:56.138 29002 TRACE oslo.messaging.rpc.dispatcher   File 
"/usr/local/lib/python2.7/dist-packages/oslo/messaging/rpc/dispatcher.py", line 
177, in _dispatch

[Yahoo-eng-team] [Bug 1357599] Re: race condition with neutron in nova migrate code

2014-09-29 Thread Adam Gandelman
** Also affects: nova/icehouse
   Importance: Undecided
   Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1357599

Title:
  race condition with neutron in nova migrate code

Status in OpenStack Compute (Nova):
  Fix Committed
Status in OpenStack Compute (nova) icehouse series:
  New

Bug description:
  The tempest test that does a resize on the instance from time to time
  fails with a neutron virtual interface timeout error. The reason why
  this is occurring is because resize_instance() calls:

  disk_info = self.driver.migrate_disk_and_power_off(
  context, instance, migration.dest_host,
  instance_type, network_info,
  block_device_info)

  which calls destroy(), which unplugs the VIFs. Then,

  self.driver.finish_migration(context, migration, instance,
   disk_info,
   network_info,
   image, resize_instance,
   block_device_info, power_on)

  is called, which expects a vif_plugged event. Since this happens on the
  same host, the neutron agent is not able to detect that the vif was
  unplugged and then plugged again because it happens so fast, so the
  event never arrives. To fix this we should check whether we are
  migrating to the same host; if we are, we should not expect to get an
  event.
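
  For illustration, a minimal sketch of that check (hypothetical helper, not
  the actual nova/compute/manager.py change):

    def _expect_vif_plugged_event(migration, using_neutron_events):
        # When resizing to the same host the unplug/replug happens too fast
        # for the agent to notice, so no vif_plugged event will arrive and
        # we should not wait for one.
        same_host = migration['source_compute'] == migration['dest_compute']
        return using_neutron_events and not same_host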

  8d1] Setting instance vm_state to ERROR
  2014-08-14 00:03:58.010 1276 TRACE nova.compute.manager [instance: 
dca468e4-d26f-4ae2-a522-7d02ef7c98d1] Traceback (most recent call last):
  2014-08-14 00:03:58.010 1276 TRACE nova.compute.manager [instance: 
dca468e4-d26f-4ae2-a522-7d02ef7c98d1]   File 
"/opt/stack/new/nova/nova/compute/manager.py", line 3714, in finish_resize
  2014-08-14 00:03:58.010 1276 TRACE nova.compute.manager [instance: 
dca468e4-d26f-4ae2-a522-7d02ef7c98d1] disk_info, image)
  2014-08-14 00:03:58.010 1276 TRACE nova.compute.manager [instance: 
dca468e4-d26f-4ae2-a522-7d02ef7c98d1]   File 
"/opt/stack/new/nova/nova/compute/manager.py", line 3682, in _finish_resize
  2014-08-14 00:03:58.010 1276 TRACE nova.compute.manager [instance: 
dca468e4-d26f-4ae2-a522-7d02ef7c98d1] old_instance_type, sys_meta)
  2014-08-14 00:03:58.010 1276 TRACE nova.compute.manager [instance: 
dca468e4-d26f-4ae2-a522-7d02ef7c98d1]   File 
"/opt/stack/new/nova/nova/openstack/common/excutils.py", line 82, in __exit__
  2014-08-14 00:03:58.010 1276 TRACE nova.compute.manager [instance: 
dca468e4-d26f-4ae2-a522-7d02ef7c98d1] six.reraise(self.type_, self.value, 
self.tb)
  2014-08-14 00:03:58.010 1276 TRACE nova.compute.manager [instance: 
dca468e4-d26f-4ae2-a522-7d02ef7c98d1]   File 
"/opt/stack/new/nova/nova/compute/manager.py", line 3677, in _finish_resize
  2014-08-14 00:03:58.010 1276 TRACE nova.compute.manager [instance: 
dca468e4-d26f-4ae2-a522-7d02ef7c98d1] block_device_info, power_on)
  2014-08-14 00:03:58.010 1276 TRACE nova.compute.manager [instance: 
dca468e4-d26f-4ae2-a522-7d02ef7c98d1]   File 
"/opt/stack/new/nova/nova/virt/libvirt/driver.py", line 5302, in 
finish_migration
  2014-08-14 00:03:58.010 1276 TRACE nova.compute.manager [instance: 
dca468e4-d26f-4ae2-a522-7d02ef7c98d1] block_device_info, power_on)
  2014-08-14 00:03:58.010 1276 TRACE nova.compute.manager [instance: 
dca468e4-d26f-4ae2-a522-7d02ef7c98d1]   File 
"/opt/stack/new/nova/nova/virt/libvirt/driver.py", line 3792, in 
_create_domain_and_network
  2014-08-14 00:03:58.010 1276 TRACE nova.compute.manager [instance: 
dca468e4-d26f-4ae2-a522-7d02ef7c98d1] raise 
exception.VirtualInterfaceCreateException()
  2014-08-14 00:03:58.010 1276 TRACE nova.compute.manager [instance: 
dca468e4-d26f-4ae2-a522-7d02ef7c98d1] VirtualInterfaceCreateException: Virtual 
Interface creation failed

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1357599/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1360817] Re: Hyper-V agent fails on Hyper-V 2008 R2 due to missing "remove_all_security_rules" method

2014-09-29 Thread Adam Gandelman
** Also affects: neutron/icehouse
   Importance: Undecided
   Status: New

** Changed in: neutron/icehouse
   Importance: Undecided => Medium

** Changed in: neutron/icehouse
   Status: New => Fix Committed

** Changed in: neutron/icehouse
Milestone: None => 2014.1.3

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1360817

Title:
  Hyper-V agent fails on Hyper-V 2008 R2 due to missing
  "remove_all_security_rules" method

Status in OpenStack Neutron (virtual network service):
  Fix Committed
Status in neutron icehouse series:
  Fix Committed

Bug description:
  A recent regression does not allow the Hyper-V agent to run
  successfully on Hyper-V 2008 R2, which is currently still a supported
  platform.

  The call generating the error is:

  
https://github.com/openstack/neutron/blob/771327adbe9e563506f98ca561de9ded4d987698/neutron/plugins/hyperv/agent/hyperv_neutron_agent.py#L392

  Error stack trace:

  http://paste.openstack.org/show/98471/

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1360817/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1358881] Re: jjsonschema 2.3.0 -> 2.4.0 upgrade breaking nova.tests.test_api_validation tests

2014-09-29 Thread Adam Gandelman
** Also affects: nova/icehouse
   Importance: Undecided
   Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1358881

Title:
  jjsonschema 2.3.0 -> 2.4.0 upgrade breaking
  nova.tests.test_api_validation tests

Status in OpenStack Compute (Nova):
  Fix Released
Status in OpenStack Compute (nova) icehouse series:
  New

Bug description:
  The following two failures appeared after upgrading jsonschema to
  2.4.0; downgrading to 2.3.0 returned the tests to passing.

  ==
  FAIL: 
nova.tests.test_api_validation.TcpUdpPortTestCase.test_validate_tcp_udp_port_fails
  --
  Traceback (most recent call last):
  _StringException: Empty attachments:
pythonlogging:''
stderr
stdout

  Traceback (most recent call last):
File "/home/dev/Desktop/nova-test/nova/tests/test_api_validation.py", line 
602, in test_validate_tcp_udp_port_fails
  expected_detail=detail)
File "/home/dev/Desktop/nova-test/nova/tests/test_api_validation.py", line 
31, in check_validation_error
  self.assertEqual(ex.kwargs, expected_kwargs)
File 
"/home/dev/Desktop/nova-test/.venv/local/lib/python2.7/site-packages/testtools/testcase.py",
 line 321, in assertEqual
  self.assertThat(observed, matcher, message)
File 
"/home/dev/Desktop/nova-test/.venv/local/lib/python2.7/site-packages/testtools/testcase.py",
 line 406, in assertThat
  raise mismatch_error
  MismatchError: !=:
  reference = {'code': 400,
   'detail': u'Invalid input for field/attribute foo. Value: 65536. 65536 is 
greater than the maximum of 65535'}
  actual= {'code': 400,
   'detail': 'Invalid input for field/attribute foo. Value: 65536. 65536.0 is 
greater than the maximum of 65535'}

  
  ==
  FAIL: 
nova.tests.test_api_validation.IntegerRangeTestCase.test_validate_integer_range_fails
  --
  Traceback (most recent call last):
  _StringException: Empty attachments:
stderr
stdout

  pythonlogging:'': {{{
  INFO [migrate.versioning.api] 215 -> 216... 
  INFO [migrate.versioning.api] done
  INFO [migrate.versioning.api] 216 -> 217... 
  INFO [migrate.versioning.api] done
  INFO [migrate.versioning.api] 217 -> 218... 
  INFO [migrate.versioning.api] done
  INFO [migrate.versioning.api] 218 -> 219... 
  INFO [migrate.versioning.api] done
  INFO [migrate.versioning.api] 219 -> 220... 
  INFO [migrate.versioning.api] done
  INFO [migrate.versioning.api] 220 -> 221... 
  INFO [migrate.versioning.api] done
  INFO [migrate.versioning.api] 221 -> 222... 
  INFO [migrate.versioning.api] done
  INFO [migrate.versioning.api] 222 -> 223... 
  INFO [migrate.versioning.api] done
  INFO [migrate.versioning.api] 223 -> 224... 
  INFO [migrate.versioning.api] done
  INFO [migrate.versioning.api] 224 -> 225... 
  INFO [migrate.versioning.api] done
  INFO [migrate.versioning.api] 225 -> 226... 
  INFO [migrate.versioning.api] done
  INFO [migrate.versioning.api] 226 -> 227... 
  INFO [migrate.versioning.api] done
  INFO [migrate.versioning.api] 227 -> 228... 
  INFO [migrate.versioning.api] done
  INFO [migrate.versioning.api] 228 -> 229... 
  INFO [migrate.versioning.api] done
  INFO [migrate.versioning.api] 229 -> 230... 
  INFO [migrate.versioning.api] done
  INFO [migrate.versioning.api] 230 -> 231... 
  INFO [migrate.versioning.api] done
  INFO [migrate.versioning.api] 231 -> 232... 
  INFO [migrate.versioning.api] done
  INFO [migrate.versioning.api] 232 -> 233... 
  INFO [migrate.versioning.api] done
  INFO [migrate.versioning.api] 233 -> 234... 
  INFO [migrate.versioning.api] done
  INFO [migrate.versioning.api] 234 -> 235... 
  INFO [migrate.versioning.api] done
  INFO [migrate.versioning.api] 235 -> 236... 
  INFO [migrate.versioning.api] done
  INFO [migrate.versioning.api] 236 -> 237... 
  INFO [migrate.versioning.api] done
  INFO [migrate.versioning.api] 237 -> 238... 
  INFO [migrate.versioning.api] done
  INFO [migrate.versioning.api] 238 -> 239... 
  INFO [migrate.versioning.api] done
  INFO [migrate.versioning.api] 239 -> 240... 
  INFO [migrate.versioning.api] done
  INFO [migrate.versioning.api] 240 -> 241... 
  INFO [migrate.versioning.api] done
  INFO [migrate.versioning.api] 241 -> 242... 
  INFO [migrate.versioning.api] done
  INFO [migrate.versioning.api] 242 -> 243... 
  INFO [migrate.versioning.api] done
  INFO [migrate.versioning.api] 243 -> 244... 
  INFO [migrate.versioning.api] done
  INFO [migrate.versioning.api] 244 -> 245... 
  INFO [migrate.versioning.api] done
  INFO [migrate.versioning.api] 245 -> 246... 
  INFO [migrate.versioning.api] done
  INFO [migrate.versioning.api] 246 -> 247... 
  INFO [migrate.versioni

[Yahoo-eng-team] [Bug 1365352] Re: metadata agent does not cache auth info

2014-09-29 Thread Adam Gandelman
** Also affects: neutron/icehouse
   Importance: Undecided
   Status: New

** Changed in: neutron/icehouse
   Importance: Undecided => High

** Changed in: neutron/icehouse
   Status: New => Fix Committed

** Changed in: neutron/icehouse
Milestone: None => 2014.1.3

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1365352

Title:
  metadata agent does not cache auth info

Status in OpenStack Neutron (virtual network service):
  Fix Committed
Status in neutron icehouse series:
  Fix Committed

Bug description:
  The metadata agent tried to cache auth info by means of
  "self.auth_info = qclient.get_auth_info()" in
  _get_instance_and_tenant_id(); however, this qclient is not the one
  actually used in the inner methods. In short, the metadata agent does
  not implement auth info caching correctly and still retrieves a new
  token from keystone on every request.
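
  For illustration, a simplified sketch of the caching idea (the helper and
  the conf attribute names are illustrative; this is not the agent's actual
  code):

    from neutronclient.v2_0 import client


    def build_client(conf, cached_auth_info):
        # Reuse the cached token and endpoint so the client does not go
        # back to keystone for a fresh token on every metadata request.
        return client.Client(
            username=conf.admin_user,
            password=conf.admin_password,
            tenant_name=conf.admin_tenant_name,
            auth_url=conf.auth_url,
            auth_strategy=conf.auth_strategy,
            region_name=conf.auth_region,
            token=cached_auth_info.get('auth_token'),
            endpoint_url=cached_auth_info.get('endpoint_url'))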

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1365352/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1357972] Re: boot from volume fails on Hyper-V if boot device is not vda

2014-09-29 Thread Adam Gandelman
** Also affects: nova/icehouse
   Importance: Undecided
   Status: New

** Changed in: nova/icehouse
   Importance: Undecided => Medium

** Changed in: nova/icehouse
   Status: New => Fix Committed

** Changed in: nova/icehouse
Milestone: None => 2014.1.3

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1357972

Title:
  boot from volume fails on Hyper-V if boot device is not vda

Status in OpenStack Compute (Nova):
  Fix Committed
Status in OpenStack Compute (nova) icehouse series:
  Fix Committed

Bug description:
  The Tempest test
  
"tempest.scenario.test_volume_boot_pattern.TestVolumeBootPatternV2.test_volume_boot_pattern"
  fails on Hyper-V.

  The cause is related to the fact that the root device is "sda" and not
  "vda".

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1357972/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1358719] Re: Live migration fails as get_instance_disk_info is not present in the compute driver base class

2014-09-29 Thread Adam Gandelman
** Also affects: nova/icehouse
   Importance: Undecided
   Status: New

** Changed in: nova/icehouse
   Importance: Undecided => Medium

** Changed in: nova/icehouse
   Status: New => Fix Committed

** Changed in: nova/icehouse
Milestone: None => 2014.1.3

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1358719

Title:
  Live migration fails as get_instance_disk_info is not present in the
  compute driver base class

Status in OpenStack Compute (Nova):
  Fix Released
Status in OpenStack Compute (nova) icehouse series:
  Fix Committed

Bug description:
  The "get_instance_disk_info" driver has been added to the libvirt
  compute driver in the following commit:

  
https://github.com/openstack/nova/commit/e4974769743d5967626c1f0415113683411a03a4

  This caused regression failures on drivers that do not implement it,
  e.g.:

  http://paste.openstack.org/show/97258/

  The method has been subsequently added to the base class which, but
  raising a NotImplementedError(), which still causes the regression:

  
https://github.com/openstack/nova/commit/2bed16c89356554a193a111d268a9587709ed2f7

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1358719/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1360394] Re: NSX: log request body to NSX as debug

2014-09-29 Thread Adam Gandelman
** Also affects: neutron/icehouse
   Importance: Undecided
   Status: New

** Changed in: neutron/icehouse
   Status: New => Fix Committed

** Changed in: neutron/icehouse
Milestone: None => 2014.1.3

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1360394

Title:
  NSX: log request body to NSX as debug

Status in OpenStack Neutron (virtual network service):
  Fix Released
Status in neutron icehouse series:
  Fix Committed

Bug description:
  Previously we never logged the request body sent to NSX. This makes
  issues hard to debug, since the body of the request that was made is not
  recorded anywhere. This patch adds the body to the log statement emitted
  when the request is issued.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1360394/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1366917] Re: neutron should not use neutronclients utils methods

2014-09-29 Thread Adam Gandelman
** Also affects: neutron/icehouse
   Importance: Undecided
   Status: New

** Changed in: neutron/icehouse
   Importance: Undecided => Low

** Changed in: neutron/icehouse
   Status: New => Fix Committed

** Changed in: neutron/icehouse
Milestone: None => 2014.1.3

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1366917

Title:
  neutron should not use neutronclients utils methods

Status in OpenStack Neutron (virtual network service):
  Fix Committed
Status in neutron icehouse series:
  Fix Committed

Bug description:
  2014-09-07 19:17:58.331 | Traceback (most recent call last):
  2014-09-07 19:17:58.331 |   File "/usr/local/bin/neutron-debug", line 6, in <module>
  2014-09-07 19:17:58.332 | from neutron.debug.shell import main
  2014-09-07 19:17:58.332 |   File "/opt/stack/new/neutron/neutron/debug/shell.py", line 29, in <module>
  2014-09-07 19:17:58.332 | 'probe-create': utils.import_class(
  2014-09-07 19:17:58.332 | AttributeError: 'module' object has no attribute 
'import_class'
  2014-09-07 19:17:58.375 | + exit_trap
  2014-09-07 19:17:58.375 | + local r=1

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1366917/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1366921] Re: NSX: create_port should return empty list instead of null for allowed-address-pair

2014-09-29 Thread Adam Gandelman
** Also affects: neutron/icehouse
   Importance: Undecided
   Status: New

** Changed in: neutron/icehouse
   Importance: Undecided => High

** Changed in: neutron/icehouse
   Status: New => Fix Committed

** Changed in: neutron/icehouse
Milestone: None => 2014.1.3

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1366921

Title:
  NSX: create_port should return empty list instead of null for allowed-
  address-pair

Status in OpenStack Neutron (virtual network service):
  Fix Committed
Status in neutron icehouse series:
  Fix Committed

Bug description:
  
  ft133.5: 
tempest.api.network.test_ports.PortsTestJSON.test_show_port[gate,smoke]_StringException:
 pythonlogging:'': {{{2014-09-07 18:53:43,165 17979 INFO 
[tempest.common.rest_client] Request (PortsTestJSON:test_show_port): 200 GET 
http://localhost:9696/v2.0/ports/2827a27a-dee1-4013-b90f-cf2aeeae5f4f 0.030s}}}

  Traceback (most recent call last):
File "tempest/api/network/test_ports.py", line 81, in test_show_port
  (port, excluded_keys=['extra_dhcp_opts']))
File 
"/opt/stack/tempest/.tox/smoke-serial/local/lib/python2.7/site-packages/testtools/testcase.py",
 line 433, in assertThat
  raise mismatch_error
  MismatchError: Only in actual:
{'binding:vnic_type': normal}
  Differences:
allowed_address_pairs: expected [], actual None

  Traceback (most recent call last):
  _StringException: Empty attachments:
stderr
stdout

  pythonlogging:'': {{{2014-09-07 18:53:43,165 17979 INFO
  [tempest.common.rest_client] Request (PortsTestJSON:test_show_port):
  200 GET http://localhost:9696/v2.0/ports/2827a27a-dee1-4013-b90f-
  cf2aeeae5f4f 0.030s}}}

  Traceback (most recent call last):
File "tempest/api/network/test_ports.py", line 81, in test_show_port
  (port, excluded_keys=['extra_dhcp_opts']))
File 
"/opt/stack/tempest/.tox/smoke-serial/local/lib/python2.7/site-packages/testtools/testcase.py",
 line 433, in assertThat
  raise mismatch_error
  MismatchError: Only in actual:
{'binding:vnic_type': normal}
  Differences:
allowed_address_pairs: expected [], actual None

  Traceback (most recent call last):
  _StringException: Empty attachments:
stderr
stdout

  pythonlogging:'': {{{2014-09-07 18:53:43,165 17979 INFO
  [tempest.common.rest_client] Request (PortsTestJSON:test_show_port):
  200 GET http://localhost:9696/v2.0/ports/2827a27a-dee1-4013-b90f-
  cf2aeeae5f4f 0.030s}}}

  Traceback (most recent call last):
File "tempest/api/network/test_ports.py", line 81, in test_show_port
  (port, excluded_keys=['extra_dhcp_opts']))
File 
"/opt/stack/tempest/.tox/smoke-serial/local/lib/python2.7/site-packages/testtools/testcase.py",
 line 433, in assertThat
  raise mismatch_error
  MismatchError: Only in actual:
{'binding:vnic_type': normal}
  Differences:
allowed_address_pairs: expected [], actual None
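
  For illustration, a minimal sketch of the expected normalization
  (hypothetical helper, not the actual NSX plugin code):

    def normalize_allowed_address_pairs(port):
        # The API contract expects an empty list, not None, when no pairs
        # are configured on the port.
        port['allowed_address_pairs'] = port.get('allowed_address_pairs') or []
        return port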

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1366921/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1293184] Re: Can't clear shared flag of unused network

2014-09-29 Thread Adam Gandelman
** Also affects: neutron/icehouse
   Importance: Undecided
   Status: New

** Changed in: neutron/icehouse
   Status: New => Fix Committed

** Changed in: neutron/icehouse
Milestone: None => 2014.1.3

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1293184

Title:
  Can't clear shared flag of unused network

Status in OpenStack Neutron (virtual network service):
  Fix Released
Status in neutron icehouse series:
  Fix Committed

Bug description:
  A network marked as external can be used as a gateway for tenant routers, 
even though it's not necessarily marked as shared.
  If the 'shared' attribute is changed from True to False for such a network 
you get an error:
  Unable to reconfigure sharing settings for network sharetest. Multiple 
tenants are using it

  This is clearly not the intention of the 'shared' field, so if there
  are only service ports on the network there is no reason to block
  changing it from shared to not shared.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1293184/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1302611] Re: policy.init called too many time for each API request

2014-09-29 Thread Adam Gandelman
** Also affects: neutron/icehouse
   Importance: Undecided
   Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1302611

Title:
  policy.init called too many time for each API request

Status in OpenStack Neutron (virtual network service):
  Fix Released
Status in neutron havana series:
  Fix Released
Status in neutron icehouse series:
  New

Bug description:
  policy.init() checks whether the rule cache is populated and valid,
  and if not reloads the policy cache from the policy.json file.

  As the current code runs init() each time a policy is checked or
  enforced, list operations will call init() several times (*).
  If policy.json is updated while a response is being generated, this will
  lead to a situation where some items are processed according to the old
  policies and others according to the new ones, which would be wrong.

  Also, init() checks the last update time of the policy file, and
  repeating this check multiple time is wasteful.

  A simple solution would be to explicitly call policy.init from
  api.v2.base.Controller in order to ensure the method is called only
  once per API request.


  (*) a  GET /ports operation returning 1600 ports calls policy.init()
  9606 times
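
  For illustration, a minimal sketch of the suggested approach (the
  request-handling function and its arguments are hypothetical; only
  policy.init() and policy.check() are real neutron APIs):

    from neutron import policy


    def handle_list_request(context, items, action):
        # Load/refresh the rules once for the whole request instead of once
        # per item, so a concurrent policy.json update cannot produce a
        # mixed old/new result.
        policy.init()
        return [item for item in items
                if policy.check(context, action, item)]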

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1302611/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1286412] Re: Add support for router and network scheduling in Cisco N1kv Plugin.

2014-09-29 Thread Adam Gandelman
** Also affects: neutron/icehouse
   Importance: Undecided
   Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1286412

Title:
  Add support for router and network scheduling in Cisco N1kv Plugin.

Status in OpenStack Neutron (virtual network service):
  Fix Released
Status in neutron icehouse series:
  New

Bug description:
  Added functionality to schedule routers and networks.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1286412/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1311758] Re: OpenDaylight ML2 Mechanism Driver does not handle authentication errors

2014-09-29 Thread Adam Gandelman
** Also affects: neutron/icehouse
   Importance: Undecided
   Status: New

** Changed in: neutron/icehouse
   Importance: Undecided => Medium

** Changed in: neutron/icehouse
   Status: New => Fix Committed

** Changed in: neutron/icehouse
Milestone: None => 2014.1.3

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1311758

Title:
  OpenDaylight ML2 Mechanism Driver does not handle authentication
  errors

Status in OpenStack Neutron (virtual network service):
  Fix Released
Status in neutron icehouse series:
  Fix Committed

Bug description:
  This behaviour was noticed when troubleshooting a misconfiguration.
  Authentication with ODL was failing and the exception was being ignored.

  In the "sync_resources" method of the ODL Mechanism Driver, HTTPError 
exceptions with a status code of 404 are handled but the exception is not 
re-raised if the status code is not 404. 
  It is preferable to re-raise this exception.

  In addition, it would be helpful if "obtain_auth_cookies" threw a more
  specific exception than HTTPError when authentication with the ODL
  controller fails.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1311758/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1316618] Re: add host to security group broken

2014-09-29 Thread Adam Gandelman
** Also affects: neutron/icehouse
   Importance: Undecided
   Status: New

** Changed in: neutron/icehouse
   Importance: Undecided => Low

** Changed in: neutron/icehouse
   Status: New => Fix Committed

** Changed in: neutron/icehouse
Milestone: None => 2014.1.3

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1316618

Title:
  add host to security group broken

Status in OpenStack Neutron (virtual network service):
  Fix Released
Status in neutron icehouse series:
  Fix Committed
Status in OpenStack Compute (Nova):
  Incomplete

Bug description:
  I am running nova/neutron forked from trunk around 12/30/2013. Neutron
  is configured with openvswitch plugin and security group enabled.

  How to reproduce the issue: create a security group SG1; add a rule to
  allow ingress from SG1 group to port 5000; add host A, B, and C to SG1
  in order.

  It seems that A can talk to B and C over port 5000, B can talk to C,
  but C can talk to neither of A and B. I confirmed that the iptables
  rules are incorrect for A and B. It seems to me that when A is added
  to the group, nothing changed since no other group member exists. When
  B and C were added to the group, A's ingress iptables rules were never
  updated.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1316618/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1317094] Re: neutron requires list amqplib dependency

2014-09-29 Thread Adam Gandelman
** Also affects: neutron/icehouse
   Importance: Undecided
   Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1317094

Title:
  neutron requires list amqplib dependency

Status in OpenStack Neutron (virtual network service):
  Fix Released
Status in neutron icehouse series:
  New

Bug description:
  Neutron does not use amqplib directly (only via oslo.messaging or
  kombu). kombu already depends on either amqp or amqplib, so the extra
  dep is not necessary.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1317094/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1325184] Re: add unit tests for the ODL MechanismDriver

2014-09-29 Thread Adam Gandelman
** Also affects: neutron/icehouse
   Importance: Undecided
   Status: New

** Changed in: neutron/icehouse
   Importance: Undecided => Medium

** Changed in: neutron/icehouse
   Status: New => Fix Committed

** Changed in: neutron/icehouse
Milestone: None => 2014.1.3

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1325184

Title:
  add unit tests for the ODL MechanismDriver

Status in OpenStack Neutron (virtual network service):
  Fix Committed
Status in neutron icehouse series:
  Fix Committed

Bug description:
  None of the operations (create, update or delete) are covered by unit
tests.
  With such tests, bug #1324450 about the delete operations would have been caught.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1325184/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1328181] Re: NSX: remove_router_interface might fail because of NAT rule mismatch

2014-09-29 Thread Adam Gandelman
** Also affects: neutron/icehouse
   Importance: Undecided
   Status: New

** Changed in: neutron/icehouse
   Importance: Undecided => Medium

** Changed in: neutron/icehouse
   Status: New => Fix Committed

** Changed in: neutron/icehouse
Milestone: None => 2014.1.3

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1328181

Title:
  NSX: remove_router_interface might fail because of NAT rule mismatch

Status in OpenStack Neutron (virtual network service):
  Fix Released
Status in neutron icehouse series:
  Fix Committed

Bug description:
  The remove_router_interface for the VMware NSX plugin expects a precise 
number of SNAT rules for a subnet.
  If the actual number of NAT rules differs from the expected one, an exception 
is raised.

  The reasons for this might be:
  - earlier failure in remove_router_interface
  - NSX API client tampering with NSX objects
  - etc.

  In any case, the remove_router_interface operation should succeed,
  removing every match for the NAT rule to be deleted from the NSX logical
  router.

  sample traceback: http://paste.openstack.org/show/83427/

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1328181/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1330490] Re: can't create security group rule by ip protocol when using postgresql

2014-09-29 Thread Adam Gandelman
** Also affects: neutron/icehouse
   Importance: Undecided
   Status: New

** Changed in: neutron/icehouse
   Status: New => Fix Committed

** Changed in: neutron/icehouse
Milestone: None => 2014.1.3

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1330490

Title:
  can't create security group rule by ip protocol when using postgresql

Status in OpenStack Neutron (virtual network service):
  Fix Released
Status in neutron icehouse series:
  Fix Committed

Bug description:
  When I try to create a rule in a security group using an IP protocol
  number, it fails if the DB in use is PostgreSQL.

  I can reproduce the problem in Havana, Icehouse and master.

  2014-06-16 08:41:07.009 15134 ERROR neutron.api.v2.resource 
[req-3d2d03a3-2d8a-4ad0-b41d-098aecd5ecb8 None] create failed
  2014-06-16 08:41:07.009 15134 TRACE neutron.api.v2.resource Traceback (most 
recent call last):
  2014-06-16 08:41:07.009 15134 TRACE neutron.api.v2.resource   File 
"/usr/lib/python2.7/dist-packages/neutron/api/v2/resource.py", line 87, in 
resource
  2014-06-16 08:41:07.009 15134 TRACE neutron.api.v2.resource result = 
method(request=request, **args)
  2014-06-16 08:41:07.009 15134 TRACE neutron.api.v2.resource   File 
"/usr/lib/python2.7/dist-packages/neutron/api/v2/base.py", line 419, in create
  2014-06-16 08:41:07.009 15134 TRACE neutron.api.v2.resource obj = 
obj_creator(request.context, **kwargs)
  2014-06-16 08:41:07.009 15134 TRACE neutron.api.v2.resource   File 
"/usr/lib/python2.7/dist-packages/neutron/db/securitygroups_rpc_base.py", line 
43, in create_security_group_rule
  2014-06-16 08:41:07.009 15134 TRACE neutron.api.v2.resource bulk_rule)[0]
  2014-06-16 08:41:07.009 15134 TRACE neutron.api.v2.resource   File 
"/usr/lib/python2.7/dist-packages/neutron/db/securitygroups_db.py", line 266, 
in create_security_group_rule_bulk_native
  2014-06-16 08:41:07.009 15134 TRACE neutron.api.v2.resource 
self._check_for_duplicate_rules(context, r)
  2014-06-16 08:41:07.009 15134 TRACE neutron.api.v2.resource   File 
"/usr/lib/python2.7/dist-packages/neutron/db/securitygroups_db.py", line 394, 
in _check_for_duplicate_rules
  2014-06-16 08:41:07.009 15134 TRACE neutron.api.v2.resource db_rules = 
self.get_security_group_rules(context, filters)
  2014-06-16 08:41:07.009 15134 TRACE neutron.api.v2.resource   File 
"/usr/lib/python2.7/dist-packages/neutron/db/securitygroups_db.py", line 421, 
in get_security_group_rules
  2014-06-16 08:41:07.009 15134 TRACE neutron.api.v2.resource 
page_reverse=page_reverse)
  2014-06-16 08:41:07.009 15134 TRACE neutron.api.v2.resource   File 
"/usr/lib/python2.7/dist-packages/neutron/db/db_base_plugin_v2.py", line 197, 
in _get_collection
  2014-06-16 08:41:07.009 15134 TRACE neutron.api.v2.resource items = 
[dict_func(c, fields) for c in query]
  2014-06-16 08:41:07.009 15134 TRACE neutron.api.v2.resource   File 
"/usr/lib/python2.7/dist-packages/sqlalchemy/orm/query.py", line 2353, in 
__iter__
  2014-06-16 08:41:07.009 15134 TRACE neutron.api.v2.resource return 
self._execute_and_instances(context)
  2014-06-16 08:41:07.009 15134 TRACE neutron.api.v2.resource   File 
"/usr/lib/python2.7/dist-packages/sqlalchemy/orm/query.py", line 2368, in 
_execute_and_instances
  2014-06-16 08:41:07.009 15134 TRACE neutron.api.v2.resource result = 
conn.execute(querycontext.statement, self._params)
  2014-06-16 08:41:07.009 15134 TRACE neutron.api.v2.resource   File 
"/usr/lib/python2.7/dist-packages/sqlalchemy/engine/base.py", line 662, in 
execute
  2014-06-16 08:41:07.009 15134 TRACE neutron.api.v2.resource params)
  2014-06-16 08:41:07.009 15134 TRACE neutron.api.v2.resource   File 
"/usr/lib/python2.7/dist-packages/sqlalchemy/engine/base.py", line 761, in 
_execute_clauseelement
  2014-06-16 08:41:07.009 15134 TRACE neutron.api.v2.resource compiled_sql, 
distilled_params
  2014-06-16 08:41:07.009 15134 TRACE neutron.api.v2.resource   File 
"/usr/lib/python2.7/dist-packages/sqlalchemy/engine/base.py", line 874, in 
_execute_context
  2014-06-16 08:41:07.009 15134 TRACE neutron.api.v2.resource context)
  2014-06-16 08:41:07.009 15134 TRACE neutron.api.v2.resource   File 
"/usr/lib/python2.7/dist-packages/sqlalchemy/engine/base.py", line 1024, in 
_handle_dbapi_exception
  2014-06-16 08:41:07.009 15134 TRACE neutron.api.v2.resource exc_info
  2014-06-16 08:41:07.009 15134 TRACE neutron.api.v2.resource   File 
"/usr/lib/python2.7/dist-packages/sqlalchemy/util/compat.py", line 196, in 
raise_from_cause
  2014-06-16 08:41:07.009 15134 TRACE neutron.api.v2.resource 
reraise(type(exception), exception, tb=exc_tb)
  2014-06-16 08:41:07.009 15134 TRACE neutron.api.v2.resource   File 
"/usr/lib/python2.7/dist-packages/sqlalchemy/engine/base.py", line 867, in 
_execute_context
  2014-06-16 08:41:07.009 15134 TRACE neutron.api.v2.resource context)
  2014-0

[Yahoo-eng-team] [Bug 1332713] Re: Cisco: Send network and subnet UUID during subnet create

2014-09-29 Thread Adam Gandelman
** Also affects: neutron/icehouse
   Importance: Undecided
   Status: New

** Changed in: neutron/icehouse
   Importance: Undecided => Low

** Changed in: neutron/icehouse
   Status: New => Fix Committed

** Changed in: neutron/icehouse
Milestone: None => 2014.1.3

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1332713

Title:
  Cisco: Send network and subnet UUID during subnet create

Status in OpenStack Neutron (virtual network service):
  Fix Released
Status in neutron icehouse series:
  Fix Committed

Bug description:
  n1kv client is not sending netSegmentName and id fields to the VSM
  (controller) in create_ip_pool

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1332713/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1338880] Re: Any user can set a network as external

2014-09-29 Thread Adam Gandelman
** Also affects: neutron/icehouse
   Importance: Undecided
   Status: New

** Changed in: neutron/icehouse
   Importance: Undecided => High

** Changed in: neutron/icehouse
   Status: New => Fix Committed

** Changed in: neutron/icehouse
Milestone: None => 2014.1.3

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1338880

Title:
  Any user can set a network as external

Status in OpenStack Neutron (virtual network service):
  Fix Released
Status in neutron icehouse series:
  Fix Committed

Bug description:
  Even though the default policy.json restrict the creation of external
  networks to admin_only, any user can update a network as external.

  I could verify this with the following test (PseudoPython):

  project: ProjectA
  user: ProjectMemberA has Member role on project ProjectA.

  with network(name="UpdateNetworkExternalRouter", tenant_id=ProjectA,
               router_external=False) as test_network:
      self.project_member_a_neutron_client.update_network(
          network=test_network, router_external=True)

  project_member_a_neutron_client encapsulates a python-neutronclient,
  and here is what the method does.

  def update_network(self, network, name=None, shared=None,
                     router_external=None):
      body = {
          'network': {
          }
      }
      if name is not None:
          body['network']['name'] = name
      if shared is not None:
          body['network']['shared'] = shared
      if router_external is not None:
          body['network']['router:external'] = router_external

      self.python_neutronclient.update_network(network=network.id,
                                               body=body)['network']

  
  The expected behaviour is that the operation should not be allowed, but a
user without admin privileges is able to perform such a change.

  Trying to add an "update_network:router:external": "rule:admin_only"
  policy did not work and broke other operations a regular user should
  be able to do.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1338880/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1336596] Re: Cisco N1k: Clear entries in n1kv specific tables on rollbacks

2014-09-29 Thread Adam Gandelman
** Also affects: neutron/icehouse
   Importance: Undecided
   Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1336596

Title:
  Cisco N1k: Clear entries in n1kv specific tables on rollbacks

Status in OpenStack Neutron (virtual network service):
  Fix Released
Status in neutron icehouse series:
  New

Bug description:
  During rollback operations, the resource is cleaned up from the neutron
database, but a few stale entries are left behind in the n1kv-specific tables.
  Vlan/VXLAN allocation tables are inconsistent during network rollbacks.
  VM-Network table is left inconsistent during port rollbacks.
  Explicitly clearing ProfileBinding table entry (during network profile 
rollbacks) is not required as delete_network_profile internally takes care of 
it.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1336596/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1348766] Re: Big Switch: hash shouldn't be updated on unsuccessful calls

2014-09-29 Thread Adam Gandelman
** Also affects: neutron/icehouse
   Importance: Undecided
   Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1348766

Title:
  Big Switch: hash shouldn't be updated on unsuccessful calls

Status in OpenStack Neutron (virtual network service):
  Fix Released
Status in neutron icehouse series:
  New

Bug description:
  The configuration hash db is updated on every response from the
  backend including errors that contain an empty hash. This is causing
  the hash to be wiped out if a standby controller is contacted first,
  which opens a narrow time window where the backend could become out of
  sync. It should only update the hash on successful REST calls.
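
  A minimal sketch of the intended behaviour (hypothetical names, not the
plugin's actual code), where the stored hash is only replaced on a successful
response:

  def record_consistency_hash(status_code, new_hash, hash_store):
      # Ignore error responses (and empty hashes) so a standby controller's
      # reply cannot wipe out the previously stored value.
      if 200 <= status_code < 300 and new_hash:
          hash_store["hash"] = new_hash
      return hash_store

  store = {"hash": "abc123"}
  record_consistency_hash(503, "", store)        # error: hash left untouched
  assert store["hash"] == "abc123"
  record_consistency_hash(200, "def456", store)  # success: hash updated
  assert store["hash"] == "def456"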

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1348766/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1352893] Re: ipv6 cannot be disabled for ovs agent

2014-09-29 Thread Adam Gandelman
** Also affects: neutron/icehouse
   Importance: Undecided
   Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1352893

Title:
  ipv6 cannot be disabled for ovs agent

Status in OpenStack Neutron (virtual network service):
  Fix Released
Status in neutron icehouse series:
  New

Bug description:
  If the ipv6 module is not loaded in the kernel, the ip6tables command
  doesn't work and the openvswitch-agent fails when processing ports:

  2014-08-05 15:20:57.089 3944 ERROR 
neutron.plugins.openvswitch.agent.ovs_neutron_agent [-] Error while processing 
VIF ports
  2014-08-05 15:20:57.089 3944 TRACE 
neutron.plugins.openvswitch.agent.ovs_neutron_agent Traceback (most recent call 
last):
  2014-08-05 15:20:57.089 3944 TRACE 
neutron.plugins.openvswitch.agent.ovs_neutron_agent   File 
"/usr/lib/python2.7/site-packages/neutron/plugins/openvswitch/agent/ovs_neutron_agent.py",
 line 1262, in rpc_loop
  2014-08-05 15:20:57.089 3944 TRACE 
neutron.plugins.openvswitch.agent.ovs_neutron_agent ovs_restarted)
  2014-08-05 15:20:57.089 3944 TRACE 
neutron.plugins.openvswitch.agent.ovs_neutron_agent   File 
"/usr/lib/python2.7/site-packages/neutron/plugins/openvswitch/agent/ovs_neutron_agent.py",
 line 1090, in process_network_ports
  2014-08-05 15:20:57.089 3944 TRACE 
neutron.plugins.openvswitch.agent.ovs_neutron_agent 
port_info.get('updated', set()))
  2014-08-05 15:20:57.089 3944 TRACE 
neutron.plugins.openvswitch.agent.ovs_neutron_agent   File 
"/usr/lib/python2.7/site-packages/neutron/agent/securitygroups_rpc.py", line 
247, in setup_port_filters
  2014-08-05 15:20:57.089 3944 TRACE 
neutron.plugins.openvswitch.agent.ovs_neutron_agent 
self.prepare_devices_filter(new_devices)
  2014-08-05 15:20:57.089 3944 TRACE 
neutron.plugins.openvswitch.agent.ovs_neutron_agent   File 
"/usr/lib/python2.7/site-packages/neutron/agent/securitygroups_rpc.py", line 
164, in prepare_devices_filter
  2014-08-05 15:20:57.089 3944 TRACE 
neutron.plugins.openvswitch.agent.ovs_neutron_agent 
self.firewall.prepare_port_filter(device)
  2014-08-05 15:20:57.089 3944 TRACE 
neutron.plugins.openvswitch.agent.ovs_neutron_agent   File 
"/usr/lib64/python2.7/contextlib.py", line 24, in __exit__
  2014-08-05 15:20:57.089 3944 TRACE 
neutron.plugins.openvswitch.agent.ovs_neutron_agent self.gen.next()
  2014-08-05 15:20:57.089 3944 TRACE 
neutron.plugins.openvswitch.agent.ovs_neutron_agent   File 
"/usr/lib/python2.7/site-packages/neutron/agent/firewall.py", line 108, in 
defer_apply
  2014-08-05 15:20:57.089 3944 TRACE 
neutron.plugins.openvswitch.agent.ovs_neutron_agent 
self.filter_defer_apply_off()
  2014-08-05 15:20:57.089 3944 TRACE 
neutron.plugins.openvswitch.agent.ovs_neutron_agent   File 
"/usr/lib/python2.7/site-packages/neutron/agent/linux/iptables_firewall.py", 
line 370, in filter_defer_apply_off
  2014-08-05 15:20:57.089 3944 TRACE 
neutron.plugins.openvswitch.agent.ovs_neutron_agent 
self.iptables.defer_apply_off()
  2014-08-05 15:20:57.089 3944 TRACE 
neutron.plugins.openvswitch.agent.ovs_neutron_agent   File 
"/usr/lib/python2.7/site-packages/neutron/agent/linux/iptables_manager.py", 
line 353, in defer_apply_off
  2014-08-05 15:20:57.089 3944 TRACE 
neutron.plugins.openvswitch.agent.ovs_neutron_agent self._apply()
  2014-08-05 15:20:57.089 3944 TRACE 
neutron.plugins.openvswitch.agent.ovs_neutron_agent   File 
"/usr/lib/python2.7/site-packages/neutron/agent/linux/iptables_manager.py", 
line 369, in _apply
  2014-08-05 15:20:57.089 3944 TRACE 
neutron.plugins.openvswitch.agent.ovs_neutron_agent return 
self._apply_synchronized()
  2014-08-05 15:20:57.089 3944 TRACE 
neutron.plugins.openvswitch.agent.ovs_neutron_agent   File 
"/usr/lib/python2.7/site-packages/neutron/agent/linux/iptables_manager.py", 
line 400, in _apply_synchronized
  2014-08-05 15:20:57.089 3944 TRACE 
neutron.plugins.openvswitch.agent.ovs_neutron_agent 
root_helper=self.root_helper)
  2014-08-05 15:20:57.089 3944 TRACE 
neutron.plugins.openvswitch.agent.ovs_neutron_agent   File 
"/usr/lib/python2.7/site-packages/neutron/agent/linux/utils.py", line 76, in 
execute
  2014-08-05 15:20:57.089 3944 TRACE 
neutron.plugins.openvswitch.agent.ovs_neutron_agent raise RuntimeError(m)
  2014-08-05 15:20:57.089 3944 TRACE 
neutron.plugins.openvswitch.agent.ovs_neutron_agent RuntimeError:
  2014-08-05 15:20:57.089 3944 TRACE 
neutron.plugins.openvswitch.agent.ovs_neutron_agent Command: ['sudo', 
'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip6tables-restore', '-c']
  2014-08-05 15:20:57.089 3944 TRACE 
neutron.plugins.openvswitch.agent.ovs_neutron_agent Exit code: 2
  2014-08-05 15:20:57.089 3944 TRACE 
neutron.plugins.openvswitch.agent.ovs_neutron_agent Stdout: ''
  2014-08-05 15:20:57.089 3944 TRACE 
neutron.plugins.openvswitch.agent.ovs_neutron_agent Stderr: "ip6tables-restore 
v1.4.21: ip6tables-restore: unable to

[Yahoo-eng-team] [Bug 1350326] Re: Migration 1fcfc149aca4_agents_unique_by_type_and_host is not applied to ml2 plugin

2014-09-29 Thread Adam Gandelman
** Also affects: neutron/icehouse
   Importance: Undecided
   Status: New

** Changed in: neutron/icehouse
   Importance: Undecided => Medium

** Changed in: neutron/icehouse
   Status: New => Fix Committed

** Changed in: neutron/icehouse
Milestone: None => 2014.1.3

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1350326

Title:
  Migration 1fcfc149aca4_agents_unique_by_type_and_host is not applied
  to ml2 plugin

Status in OpenStack Neutron (virtual network service):
  Fix Released
Status in neutron icehouse series:
  Fix Committed

Bug description:
  While it's no longer an issue on master, since migrations are now
  unconditional, it still makes sense to fix the migration and backport
  it to Icehouse.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1350326/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1357105] Re: Big Switch: servermanager should retry on 503 instead of failing immediately

2014-09-29 Thread Adam Gandelman
** Also affects: neutron/icehouse
   Importance: Undecided
   Status: New

** Changed in: neutron/icehouse
   Importance: Undecided => Low

** Changed in: neutron/icehouse
   Status: New => Fix Committed

** Changed in: neutron/icehouse
Milestone: None => 2014.1.3

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1357105

Title:
  Big Switch: servermanager should retry on 503 instead of failing
  immediately

Status in OpenStack Neutron (virtual network service):
  Fix Committed
Status in neutron icehouse series:
  Fix Committed

Bug description:
  When the backend controller returns a 503 service unavailable, the big
  switch server manager immediately counts the server request as failed.
  Instead it should retry a few times because a 503 occurs when there
  are locks in place for synchronization during upgrade, etc.
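
  A hedged sketch of the retry idea (illustrative only, hypothetical helper,
not the servermanager code):

  import time

  def rest_call_with_retry(call, retries=3, delay=1.0):
      # Treat 503 as transient (e.g. the backend holds a sync lock during an
      # upgrade) and retry a few times before counting the server as failed.
      for _ in range(retries):
          status = call()
          if status != 503:
              return status
          time.sleep(delay)
      return status

  responses = iter([503, 503, 200])
  assert rest_call_with_retry(lambda: next(responses), delay=0.0) == 200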

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1357105/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1360658] Re: Managing functional job hooks in the infra config repo is error prone

2014-09-29 Thread Adam Gandelman
** Also affects: neutron/icehouse
   Importance: Undecided
   Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1360658

Title:
  Managing functional job hooks in the infra config repo is error prone

Status in OpenStack Neutron (virtual network service):
  Fix Released
Status in neutron icehouse series:
  New

Bug description:
  The hook scripts that support Neutron's functional gate/check job are
  currently defined in openstack-infra/config
  (https://github.com/openstack-infra/config/blob/master/modules/openstack_project/files/jenkins_job_builder/config/neutron-functional.yaml).
  They are proving difficult to maintain
  there due to the inability to verify the scripts' functionality before
  merge.  This combined with an overloaded infra core team suggests
  defining the hook scripts in the neutron tree where the job config can
  call them (this strategy is already employed by other projects like
  solum and tripleo).

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1360658/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1357125] Re: Cisco N1kv plugin needs to send subtype on network profile creation

2014-09-29 Thread Adam Gandelman
** Also affects: neutron/icehouse
   Importance: Undecided
   Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1357125

Title:
  Cisco N1kv plugin needs to send subtype on network profile creation

Status in OpenStack Neutron (virtual network service):
  Fix Released
Status in neutron icehouse series:
  New

Bug description:
  The Cisco N1kv neutron plugin should also send the subtype for overlay
  networks when a network segment pool is created.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1357125/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1362480] Re: Datacenter moid should be a value not a tuple

2014-09-29 Thread Adam Gandelman
** Also affects: neutron/icehouse
   Importance: Undecided
   Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1362480

Title:
  Datacenter moid should be a value not a tuple

Status in OpenStack Neutron (virtual network service):
  Fix Released
Status in neutron icehouse series:
  New

Bug description:
  In edge_appliance_driver.py, a trailing comma is added when setting the
  datacenter moid, so the value stored for the datacenter moid is changed
  to the tuple type, which is wrong.

   if datacenter_moid:
       edge['datacenterMoid'] = datacenter_moid,  ===> Should remove the ','
   return edge
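
  A quick stand-alone illustration of the pitfall (plain Python, not the
driver code, with a hypothetical moid value): a trailing comma turns the
assignment into a one-element tuple.

  edge = {}
  datacenter_moid = "datacenter-21"            # hypothetical value
  edge['datacenterMoid'] = datacenter_moid,    # buggy: stores a tuple
  assert edge['datacenterMoid'] == ("datacenter-21",)
  edge['datacenterMoid'] = datacenter_moid     # fixed: stores the plain value
  assert edge['datacenterMoid'] == "datacenter-21"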

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1362480/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1358668] Re: Big Switch: keyerror on filtered get_ports call

2014-09-29 Thread Adam Gandelman
** Also affects: neutron/icehouse
   Importance: Undecided
   Status: New

** Changed in: neutron/icehouse
   Importance: Undecided => Medium

** Changed in: neutron/icehouse
   Status: New => Fix Committed

** Changed in: neutron/icehouse
Milestone: None => 2014.1.3

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1358668

Title:
  Big Switch: keyerror on filtered get_ports call

Status in OpenStack Neutron (virtual network service):
  Fix Committed
Status in neutron icehouse series:
  Fix Committed

Bug description:
  If get_ports is called in the Big Switch plugin without 'id' being one
  of the included fields, _extend_port_dict_binding will fail with the
  following error.

  Traceback (most recent call last):
File "neutron/tests/unit/bigswitch/test_restproxy_plugin.py", line 87, in 
test_get_ports_no_id
  context.get_admin_context(), fields=['name'])
File "neutron/plugins/bigswitch/plugin.py", line 715, in get_ports
  self._extend_port_dict_binding(context, port)
File "neutron/plugins/bigswitch/plugin.py", line 361, in 
_extend_port_dict_binding
  hostid = porttracker_db.get_port_hostid(context, port['id'])
  KeyError: 'id'
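
  A hedged sketch of a defensive variant (hypothetical helper names, not the
plugin code): tolerate port dicts that were already filtered down to the
requested fields instead of assuming 'id' is always present.

  def lookup_hostid(port_id):
      return "compute-%s" % port_id          # stand-in for the DB lookup

  def extend_port_dict_binding(port):
      port_id = port.get('id')               # may be absent when filtered
      if port_id is not None:
          port['binding:host_id'] = lookup_hostid(port_id)
      return port

  # No KeyError even when only 'name' was requested:
  assert extend_port_dict_binding({'name': 'p1'}) == {'name': 'p1'}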

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1358668/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1361545] Re: dhcp agent shouldn't spawn metadata-proxy for non-isolated networks

2014-09-29 Thread Adam Gandelman
** Also affects: neutron/icehouse
   Importance: Undecided
   Status: New

** Changed in: neutron/icehouse
   Importance: Undecided => Low

** Changed in: neutron/icehouse
   Status: New => Fix Committed

** Changed in: neutron/icehouse
Milestone: None => 2014.1.3

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1361545

Title:
  dhcp agent shouldn't spawn metadata-proxy for non-isolated networks

Status in OpenStack Neutron (virtual network service):
  Fix Released
Status in neutron icehouse series:
  Fix Committed

Bug description:
  The "enable_isolated_metadata = True" options tells DHCP agents that for each 
network under its care, a neutron-ns-metadata-proxy process should be spawned, 
regardless if it's isolated or not.
  This is fine for isolated networks (networks with no routers and no default 
gateways), but for networks which are connected to a router (for which the L3 
agent spawns a separate neutron-ns-metadata-proxy which is attached to the 
router's namespace), 2 different metadata proxies are spawned. For these 
networks, the static routes which are pushed to each instance, letting it know 
where to search for the metadata-proxy, is not pushed and the proxy spawned 
from the DHCP agent is left unused.

  The DHCP agent should know whether the network it handles is isolated or
  not, and for non-isolated networks no neutron-ns-metadata-proxy
  process should be spawned.
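
  A minimal sketch of that decision (hypothetical data model, not the agent
code): only spawn the proxy when at least one subnet on the network has no
router attached.

  def should_spawn_metadata_proxy(network):
      # Hypothetical model: a network is "isolated" if none of its subnets
      # is attached to a router.
      for subnet in network.get('subnets', []):
          if not subnet.get('router_ports'):
              return True    # isolated subnet -> DHCP agent serves metadata
      return False

  assert should_spawn_metadata_proxy({'subnets': [{'router_ports': []}]})
  assert not should_spawn_metadata_proxy({'subnets': [{'router_ports': ['r1']}]})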

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1361545/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1368251] Re: migrate_to_ml2 accessing boolean as int fails on postgresql

2014-09-29 Thread Adam Gandelman
** Also affects: neutron/icehouse
   Importance: Undecided
   Status: New

** Changed in: neutron/icehouse
   Status: New => Fix Committed

** Changed in: neutron/icehouse
Milestone: None => 2014.1.3

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1368251

Title:
  migrate_to_ml2 accessing boolean as int fails on postgresql

Status in OpenStack Neutron (virtual network service):
  Fix Committed
Status in neutron icehouse series:
  Fix Committed

Bug description:
  The "allocated" variable used in migrate_to_ml2 was defined to be a boolean 
type and in postgresql this type is enforced,
  while in mysql this just maps to tinyint and accepts both numbers and bools.

  Thus the migrate_to_ml2 script breaks on postgresql
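
  A small illustration of the portability point (SQLAlchemy, with a
hypothetical table definition, not the actual migration code): bind real
booleans rather than 0/1 so the statement is valid on PostgreSQL as well as
MySQL.

  import sqlalchemy as sa

  metadata = sa.MetaData()
  vlan_allocations = sa.Table(
      'vlan_allocations', metadata,
      sa.Column('vlan_id', sa.Integer),
      sa.Column('allocated', sa.Boolean))

  # Portable: a real boolean, so PostgreSQL's type check passes too.
  stmt = vlan_allocations.insert().values(vlan_id=100, allocated=False)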

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1368251/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1364696] Re: Big Switch: Request context is missing from backend requests

2014-09-29 Thread Adam Gandelman
** Also affects: neutron/icehouse
   Importance: Undecided
   Status: New

** Changed in: neutron/icehouse
   Importance: Undecided => Low

** Changed in: neutron/icehouse
   Status: New => Fix Committed

** Changed in: neutron/icehouse
Milestone: None => 2014.1.3

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1364696

Title:
  Big Switch: Request context is missing from backend requests

Status in OpenStack Neutron (virtual network service):
  Fix Committed
Status in neutron icehouse series:
  Fix Committed

Bug description:
  The request context that comes into Neutron is not included in the
  request to the backend. This makes it difficult to correlate events in
  the debug logs on the backend, such as which incoming Neutron request
  resulted in particular REST calls to the backend and whether admin
  privileges were used.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1364696/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1365961] Re: Dangerous iptables rule generated in case of protocol "any" and source-port/destination-port usage

2014-09-29 Thread Adam Gandelman
** Also affects: neutron/icehouse
   Importance: Undecided
   Status: New

** Changed in: neutron/icehouse
   Importance: Undecided => High

** Changed in: neutron/icehouse
   Status: New => Fix Committed

** Changed in: neutron/icehouse
Milestone: None => 2014.1.3

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1365961

Title:
  Dangerous iptables rule generated in case of protocol "any" and
  source-port/destination-port usage

Status in OpenStack Neutron (virtual network service):
  Fix Committed
Status in neutron icehouse series:
  Fix Committed
Status in OpenStack Security Advisories:
  Won't Fix
Status in OpenStack Security Notes:
  Fix Released

Bug description:
  Icehouse 2014.1.2, FWaas using iptables driver

  In order to allow DNS (TCP and UDP) requests, the following rule was defined:
  neutron firewall-rule-create --protocol any --destination-port 53 --action 
allow

  On L3agent namespace this has been translated in the following iptables rules:
  -A neutron-l3-agent-iv441c58eb2 -j ACCEPT
  -A neutron-l3-agent-ov441c58eb2 -j ACCEPT
  => there is no restriction on the destination port (53), contrary to what we
would expect!

  There are 2 solutions to handle this issue:

  1) Don't allow the user to create a rule specifying protocol "any" AND a
  source-port/destination-port.

  2) Generating the following rules (like some firewalls do):
  -A neutron-l3-agent-iv441c58eb2 -p tcp -m tcp --dport 53 -j ACCEPT
  -A neutron-l3-agent-iv441c58eb2 -p udp -m udp --dport 53 -j ACCEPT
  -A neutron-l3-agent-ov441c58eb2 -p tcp -m tcp --dport 53 -j ACCEPT
  -A neutron-l3-agent-ov441c58eb2 -p udp -m udp --dport 53 -j ACCEPT
  => TCP and UDP have been completed.

  The source code affected is located in
  neutron/services/firewall/drivers/linux/iptables_fwaas.py  (L268)

  def _port_arg(self, direction, protocol, port):
      if not (protocol in ['udp', 'tcp'] and port):
          return ''
      return '--%s %s' % (direction, port)

  => trunk code is affected too.
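
  A hedged sketch of solution (2) above (hypothetical helper, not the actual
fwaas driver code): when the protocol is "any" but a port is given, emit one
match per port-aware protocol instead of silently dropping the port filter.

  def port_args(direction, protocol, port):
      if not port:
          return ['']                    # no port filter requested
      protocols = [protocol] if protocol in ('tcp', 'udp') else ['tcp', 'udp']
      return ['-p %s -m %s --%s %s' % (p, p, direction, port)
              for p in protocols]

  assert port_args('dport', 'any', 53) == [
      '-p tcp -m tcp --dport 53', '-p udp -m udp --dport 53']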

  Note: This is not a security vulnerability in Neutron itself, but it is a
  real security vulnerability for applications living in the OpenStack
  cloud... That's why I tagged it as "security vulnerability".

  Regards,

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1365961/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1209343] Re: LDAP connection code does not provide ldap.set_option(ldap.OPT_X_TLS_CACERTFILE) for ldaps protocol

2014-09-29 Thread Adam Gandelman
** Also affects: keystone/icehouse
   Importance: Undecided
   Status: New

** Changed in: keystone/icehouse
   Importance: Undecided => Wishlist

** Changed in: keystone/icehouse
   Status: New => Fix Committed

** Changed in: keystone/icehouse
Milestone: None => 2014.1.3

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to Keystone.
https://bugs.launchpad.net/bugs/1209343

Title:
  LDAP connection code does not provide
  ldap.set_option(ldap.OPT_X_TLS_CACERTFILE) for ldaps protocol

Status in OpenStack Identity (Keystone):
  Fix Committed
Status in Keystone icehouse series:
  Fix Committed

Bug description:
  The HP Enterprise Directory LDAP servers require a ca certificate file
  for ldaps connections. Sample working Python code:

  ldap.set_option(ldap.OPT_X_TLS_CACERTFILE,
                  "d:/etc/ssl/certs/hpca2ssG2_ns.cer")
  ldap_client = ldap.initialize(host)
  ldap_client.protocol_version = ldap.VERSION3

  ldap_client.simple_bind_s(binduser, bindpw)

  filter = '(uid=mark.m*)'
  attrs = ['cn', 'mail', 'uid', 'hpStatus']

  r = ldap_client.search_s(base, scope, filter, attrs)

  for dn, entry in r:
      print 'dn=', repr(dn)

      for k in entry.keys():
          print '\t', k, '=', entry[k]

  The current H-2 "keystone/common/ldap/core.py" file only provides
  this ldap.set_option for TLS connections. I have attached a screenshot
  showing the change I had to make to core.py so that the
  "ldap.set_option(ldap.OPT_X_TLS_CACERTFILE, tls_cacertfile)" statement
  also gets executed for ldaps connections. Basically I pulled the
  set_option code out of the "if tls_cacertfile:" block.
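
  A rough sketch of the change being described (illustrative only, hypothetical
function name, not the actual core.py code): apply the CA certificate option
whenever one is configured, before initializing the connection, rather than
only on the TLS code path.

  import ldap

  def make_connection(url, tls_cacertfile=None, use_tls=False):
      if tls_cacertfile:
          # Needed for ldaps:// URLs as well as explicit StartTLS.
          ldap.set_option(ldap.OPT_X_TLS_CACERTFILE, tls_cacertfile)
      conn = ldap.initialize(url)
      conn.protocol_version = ldap.VERSION3
      if use_tls:
          conn.start_tls_s()
      return conn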

To manage notifications about this bug go to:
https://bugs.launchpad.net/keystone/+bug/1209343/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1306835] Re: V3 list users filter by email address throws exception

2014-09-29 Thread Adam Gandelman
** Also affects: keystone/icehouse
   Importance: Undecided
   Status: New

** Changed in: keystone/icehouse
   Importance: Undecided => Medium

** Changed in: keystone/icehouse
   Status: New => Fix Committed

** Changed in: keystone/icehouse
Milestone: None => 2014.1.3

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to Keystone.
https://bugs.launchpad.net/bugs/1306835

Title:
  V3 list users  filter by email address throws exception

Status in OpenStack Identity (Keystone):
  Fix Released
Status in Keystone icehouse series:
  Fix Committed
Status in OpenStack Manuals:
  In Progress

Bug description:
  V3 list_users filter by email throws an exception. There is no such
  attribute 'email'.

  keystone.common.wsgi): 2014-04-11 23:09:00,422 ERROR type object 'User' has 
no attribute 'email'
  Traceback (most recent call last):
File "/usr/lib/python2.7/dist-packages/keystone/common/wsgi.py", line 206, 
in __call__
  result = method(context, **params)
File "/usr/lib/python2.7/dist-packages/keystone/common/controller.py", line 
183, in wrapper
  return f(self, context, filters, **kwargs)
File "/usr/lib/python2.7/dist-packages/keystone/identity/controllers.py", 
line 284, in list_users
  hints=hints)
File "/usr/lib/python2.7/dist-packages/keystone/common/manager.py", line 
52, in wrapper
  return f(self, *args, **kwargs)
File "/usr/lib/python2.7/dist-packages/keystone/identity/core.py", line 
189, in wrapper
  return f(self, *args, **kwargs)
File "/usr/lib/python2.7/dist-packages/keystone/identity/core.py", line 
328, in list_users
  ref_list = driver.list_users(hints or driver_hints.Hints())
File "/usr/lib/python2.7/dist-packages/keystone/common/sql/core.py", line 
227, in wrapper
  return f(self, hints, *args, **kwargs)
File "/usr/lib/python2.7/dist-packages/keystone/identity/backends/sql.py", 
line 132, in list_users
  user_refs = sql.filter_limit_query(User, query, hints)
File "/usr/lib/python2.7/dist-packages/keystone/common/sql/core.py", line 
374, in filter_limit_query
  query = _filter(model, query, hints)
File "/usr/lib/python2.7/dist-packages/keystone/common/sql/core.py", line 
326, in _filter
  filter_dict = exact_filter(model, filter_, filter_dict, hints)
File "/usr/lib/python2.7/dist-packages/keystone/common/sql/core.py", line 
312, in exact_filter
  if isinstance(getattr(model, key).property.columns[0].type,
  AttributeError: type object 'User' has no attribute 'email'
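
  A hedged sketch of a defensive variant of the filter (hypothetical names,
not the keystone code): ignore filter keys that the mapped model does not
define instead of calling getattr() unconditionally.

  class User(object):
      id = None
      name = None            # note: no 'email' attribute

  def exact_filter(model, filters):
      return dict((k, v) for k, v in filters.items() if hasattr(model, k))

  # Unknown attribute is skipped rather than raising AttributeError:
  assert exact_filter(User, {'name': 'alice', 'email': 'a@b'}) == {'name': 'alice'}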

To manage notifications about this bug go to:
https://bugs.launchpad.net/keystone/+bug/1306835/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1313458] Re: v3 catalog not implemented for templated backend

2014-09-29 Thread Adam Gandelman
** Also affects: keystone/icehouse
   Importance: Undecided
   Status: New

** Changed in: keystone/icehouse
   Importance: Undecided => Wishlist

** Changed in: keystone/icehouse
   Status: New => Fix Committed

** Changed in: keystone/icehouse
Milestone: None => 2014.1.3

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to Keystone.
https://bugs.launchpad.net/bugs/1313458

Title:
  v3 catalog not implemented for templated backend

Status in OpenStack Identity (Keystone):
  Fix Released
Status in Keystone icehouse series:
  Fix Committed

Bug description:
  
  The templated backend didn't implement the method to get a v3 catalog. So you 
couldn't get a valid v3 token when the templated catalog backend was configured.

To manage notifications about this bug go to:
https://bugs.launchpad.net/keystone/+bug/1313458/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1348818] Re: Unittests do not succeed with random PYTHONHASHSEED value

2014-09-29 Thread Adam Gandelman
** Also affects: cinder/icehouse
   Importance: Undecided
   Status: New

** Changed in: cinder/icehouse
 Assignee: (unassigned) => Clark Boylan (cboylan)

** Changed in: cinder/icehouse
   Importance: Undecided => Medium

** Changed in: cinder/icehouse
   Status: New => Fix Committed

** Changed in: cinder/icehouse
Milestone: None => 2014.1.3

** Also affects: keystone/icehouse
   Importance: Undecided
   Status: New

** Changed in: keystone/icehouse
   Importance: Undecided => Medium

** Changed in: keystone/icehouse
   Status: New => Fix Committed

** Changed in: keystone/icehouse
Milestone: None => 2014.1.3

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1348818

Title:
  Unittests do not succeed with random PYTHONHASHSEED value

Status in OpenStack Key Management (Barbican):
  Confirmed
Status in OpenStack Telemetry (Ceilometer):
  In Progress
Status in Cinder:
  Fix Released
Status in Cinder icehouse series:
  Fix Committed
Status in Designate:
  Fix Released
Status in OpenStack Image Registry and Delivery Service (Glance):
  Fix Released
Status in Glance icehouse series:
  Fix Committed
Status in Orchestration API (Heat):
  In Progress
Status in OpenStack Dashboard (Horizon):
  In Progress
Status in OpenStack Dashboard (Horizon) icehouse series:
  New
Status in OpenStack Bare Metal Provisioning Service (Ironic):
  Fix Released
Status in OpenStack Identity (Keystone):
  In Progress
Status in Keystone icehouse series:
  Fix Committed
Status in OpenStack Neutron (virtual network service):
  In Progress
Status in OpenStack Compute (Nova):
  Triaged
Status in Python client library for Neutron:
  Fix Committed
Status in OpenStack Data Processing (Sahara, ex. Savanna):
  Fix Released
Status in Openstack Database (Trove):
  Fix Committed
Status in Web Services Made Easy:
  New

Bug description:
  New tox and python3.3 set a random PYTHONHASHSEED value by default.
  These projects should support this in their unittests so that we do
  not have to override the PYTHONHASHSEED value and potentially let bugs
  into these projects.

  To reproduce these failures:

  # install latest tox
  pip install --upgrade tox
  tox --version # should report 1.7.2 or greater
  cd $PROJECT_REPO
  # edit tox.ini to remove any PYTHONHASHSEED=0 lines
  tox -epy27

  Most of these failures appear to be related to dict entry ordering.
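
  A small illustration of the failure mode (plain Python, assuming the
interpreter versions of that era, where hash randomization changes set/dict
iteration order): tests should compare order-insensitively rather than relying
on hash order.

  tags = set(['net', 'subnet', 'port'])

  # Fragile: the joined string depends on iteration order, which can change
  # from run to run once PYTHONHASHSEED is randomized.
  fragile = ','.join(tags)

  # Robust: sort (or compare as a set) before asserting.
  assert sorted(tags) == ['net', 'port', 'subnet']
  assert tags == set(['net', 'subnet', 'port'])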

To manage notifications about this bug go to:
https://bugs.launchpad.net/barbican/+bug/1348818/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1295128] Re: Error getting keystone related informations when running keystone in httpd

2014-09-29 Thread Adam Gandelman
** Also affects: horizon/icehouse
   Importance: Undecided
   Status: New

** Changed in: horizon/icehouse
   Status: New => Fix Committed

** Changed in: horizon/icehouse
Milestone: None => 2014.1.3

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1295128

Title:
  Error getting keystone related informations when running keystone in
  httpd

Status in OpenStack Dashboard (Horizon):
  Fix Released
Status in OpenStack Dashboard (Horizon) icehouse series:
  Fix Committed

Bug description:
  1. Need to deploy keystone on apache: 
http://docs.openstack.org/developer/keystone/apache-httpd.html
  2. Update keystone endpoints to http://192.168.94.129/keystone/main/v2.0 and
http://192.168.94.129/keystone/main/v2.0
  3. Edit openstack_dashboard/local/local_settings.py, update
OPENSTACK_KEYSTONE_URL = "http://%s/keystone/main/v2.0" % OPENSTACK_HOST
  4. Visit dashboard, 
   * Error on dashboard: `Error: Unable to retrieve project list.`
   * Error in log:
  Not Found: Not Found (HTTP 404)
  Traceback (most recent call last):
File 
"/opt/stack/horizon/openstack_dashboard/dashboards/admin/overview/views.py", 
line 63, in get_data
  projects, has_more = api.keystone.tenant_list(self.request)
File "/opt/stack/horizon/openstack_dashboard/api/keystone.py", line 266, in 
tenant_list
  tenants = manager.list(limit, marker)
File "/opt/stack/python-keystoneclient/keystoneclient/v2_0/tenants.py", 
line 118, in list
  tenant_list = self._list("/tenants%s" % query, "tenants")
File "/opt/stack/python-keystoneclient/keystoneclient/base.py", line 106, 
in _list
  resp, body = self.client.get(url)
File "/opt/stack/python-keystoneclient/keystoneclient/httpclient.py", line 
578, in get
  return self._cs_request(url, 'GET', **kwargs)
File "/opt/stack/python-keystoneclient/keystoneclient/httpclient.py", line 
575, in _cs_request
  **kwargs)
File "/opt/stack/python-keystoneclient/keystoneclient/httpclient.py", line 
554, in request
  resp = super(HTTPClient, self).request(url, method, **kwargs)
File "/opt/stack/python-keystoneclient/keystoneclient/baseclient.py", line 
21, in request
  return self.session.request(url, method, **kwargs)
File "/opt/stack/python-keystoneclient/keystoneclient/session.py", line 
209, in request
  raise exceptions.from_response(resp, method, url)
  NotFound: Not Found (HTTP 404)

  
  But using the keystoneclient command line everything works fine..
  $ keystone  tenant-list
  +--++-+
  |id|name| enabled |
  +--++-+
  | 9542f4d212064b96addcfbca9fd530ee |   admin|   True  |
  | 5e317523a51745d1a65f4b166b85dd1b |demo|   True  |
  | 70058501677e4c2ea7cef31a7ddbd48d | invisible_to_admin |   True  |
  | 246ef23151354782aa75850cde8501e8 |  service   |   True  |
  +--++-+

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1295128/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1288859] Re: Load balancer can't choose proper port in multi-network configuration

2014-09-29 Thread Adam Gandelman
** Also affects: horizon/icehouse
   Importance: Undecided
   Status: New

** Changed in: horizon/icehouse
   Importance: Undecided => Medium

** Changed in: horizon/icehouse
   Status: New => Fix Committed

** Changed in: horizon/icehouse
Milestone: None => 2014.1.3

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1288859

Title:
  Load balancer can't choose proper port in multi-network configuration

Status in OpenStack Dashboard (Horizon):
  Fix Released
Status in OpenStack Dashboard (Horizon) icehouse series:
  Fix Committed
Status in OpenStack Neutron (virtual network service):
  Invalid

Bug description:
  If LBaaS functionality is enabled and instances have more than one
  network interface, horizon incorrectly chooses member ports to add to
  the LB pool.

  Steps to reproduce:

  0. nova, neutron with configured LBaaS functions, horizon.
  1. Create 1st network (e.g. net1)
  2. Create 2nd network (e.g. net2)
  3. Create a few (e.g. 6) instances with interfaces attached to both networks.
  4. Create LB pool
  5. Go to member page and click 'add members'
  6. Select all instances from step 3, click add

  Expected result:
  all selected interfaces will be in the same network.

  Actual result:
  Some interfaces are selected from net1, some from net2.

  And there is no way to plug an instance into the LB pool with the proper
  interface via horizon, because the add member dialog does not allow
  choosing the instance's port.

  Checked on Havana and icehouse-2.

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1288859/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1314145] Re: In Containers page, long container/object name can break the page.

2014-09-29 Thread Adam Gandelman
** Also affects: horizon/icehouse
   Importance: Undecided
   Status: New

** Changed in: horizon/icehouse
   Importance: Undecided => Medium

** Changed in: horizon/icehouse
   Status: New => Fix Committed

** Changed in: horizon/icehouse
Milestone: None => 2014.1.3

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1314145

Title:
  In Containers page, long container/object name can break the page.

Status in OpenStack Dashboard (Horizon):
  Fix Released
Status in OpenStack Dashboard (Horizon) icehouse series:
  Fix Committed

Bug description:
  In the containers page, if the name of a container is too long, the
  objects table is no longer visible and the table is out of the screen
  (see screenshot).

  Test with this container name :
  
"TESTTESTTESTTESTTESTTESTTESTTESTTESTTESTTESTTESTTESTTESTTESTTESTTESTTESTTESTTESTTESTTESTTESTTESTTESTTESTTESTTESTTESTTESTTESTTESTTESTTESTTESTTESTTESTTESTTESTTESTTESTTESTTESTTESTTESTTESTTESTTESTTESTTESTTESTTESTTESTTESTTESTTESTTESTTESTTESTTESTTESTTESTTESTTES"

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1314145/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1317016] Re: User are not allowed to delete object which the user created under Container

2014-09-29 Thread Adam Gandelman
** Also affects: horizon/icehouse
   Importance: Undecided
   Status: New

** Changed in: horizon/icehouse
   Status: New => Fix Committed

** Changed in: horizon/icehouse
Milestone: None => 2014.1.3

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1317016

Title:
  User are not allowed to  delete object which the user created under
  Container

Status in OpenStack Dashboard (Horizon):
  Fix Released
Status in OpenStack Dashboard (Horizon) icehouse series:
  Fix Committed

Bug description:
  Testing steps:
  1: create a pseudo-folder object pf1
  2: delete pf1

  Testing result:

  Error: You are not allowed to delete object: pf1

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1317016/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1347840] Re: Primary Project should stay selected after user added to new project

2014-09-29 Thread Adam Gandelman
** Also affects: horizon/icehouse
   Importance: Undecided
   Status: New

** Changed in: horizon/icehouse
   Importance: Undecided => Medium

** Changed in: horizon/icehouse
   Status: New => Fix Committed

** Changed in: horizon/icehouse
Milestone: None => 2014.1.3

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1347840

Title:
  Primary Project should stay selected after user added to new project

Status in OpenStack Dashboard (Horizon):
  Fix Released
Status in OpenStack Dashboard (Horizon) icehouse series:
  Fix Committed

Bug description:
  Prereq: multi domain enabled

  == Scenario ==
  1. Have a domain with 2 projects, p1 and p2.
  2. Create userA and set userA's primary project to p1.
  3. Update the project members of p2 and add userA as a member. Now userA is 
a member of both projects.
  4. Now go to edit the password for userA. You'll notice on the modal that 
the Primary Project isn't set; you have to *reselect* it before you can save. 
See the attached image.

  ==> The Primary Project should have stayed as p1.

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1347840/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1348818] Re: Unittests do not succeed with random PYTHONHASHSEED value

2014-09-29 Thread Adam Gandelman
** Also affects: glance/icehouse
   Importance: Undecided
   Status: New

** Changed in: glance/icehouse
   Importance: Undecided => Medium

** Changed in: glance/icehouse
   Status: New => Fix Committed

** Changed in: glance/icehouse
Milestone: None => 2014.1.3

** Also affects: horizon/icehouse
   Importance: Undecided
   Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1348818

Title:
  Unittests do not succeed with random PYTHONHASHSEED value

Status in OpenStack Key Management (Barbican):
  Confirmed
Status in OpenStack Telemetry (Ceilometer):
  In Progress
Status in Cinder:
  Fix Released
Status in Designate:
  Fix Released
Status in OpenStack Image Registry and Delivery Service (Glance):
  Fix Released
Status in Glance icehouse series:
  Fix Committed
Status in Orchestration API (Heat):
  In Progress
Status in OpenStack Dashboard (Horizon):
  In Progress
Status in OpenStack Dashboard (Horizon) icehouse series:
  New
Status in OpenStack Bare Metal Provisioning Service (Ironic):
  Fix Released
Status in OpenStack Identity (Keystone):
  In Progress
Status in OpenStack Neutron (virtual network service):
  In Progress
Status in OpenStack Compute (Nova):
  Triaged
Status in Python client library for Neutron:
  Fix Committed
Status in OpenStack Data Processing (Sahara, ex. Savanna):
  Fix Released
Status in Openstack Database (Trove):
  Fix Committed
Status in Web Services Made Easy:
  New

Bug description:
  New tox and python3.3 set a random PYTHONHASHSEED value by default.
  These projects should support this in their unittests so that we do
  not have to override the PYTHONHASHSEED value and potentially let bugs
  into these projects.

  To reproduce these failures:

  # install latest tox
  pip install --upgrade tox
  tox --version # should report 1.7.2 or greater
  cd $PROJECT_REPO
  # edit tox.ini to remove any PYTHONHASHSEED=0 lines
  tox -epy27

  Most of these failures appear to be related to dict entry ordering.
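
  As a minimal illustration (not code from any of the affected projects), a
  test that bakes hash-dependent iteration order into its expected value fails
  intermittently once PYTHONHASHSEED is randomized, while an order-independent
  comparison stays deterministic:

    import unittest


    class OrderingTest(unittest.TestCase):
        def test_serialized_tags(self):
            tags = {'web', 'db', 'cache'}     # unordered collection
            rendered = ','.join(tags)         # iteration order depends on the hash seed
            # Fragile: assertEqual(rendered, 'web,db,cache') passes or fails
            # depending on the seed.  Robust alternatives:
            self.assertEqual(set(rendered.split(',')), tags)
            self.assertEqual(','.join(sorted(tags)), 'cache,db,web')


    if __name__ == '__main__':
        unittest.main()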

To manage notifications about this bug go to:
https://bugs.launchpad.net/barbican/+bug/1348818/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1352919] Re: horizon/workflows/base.py contains add_error() which conflicts with Django 1.7 definition

2014-09-29 Thread Adam Gandelman
** Also affects: horizon/icehouse
   Importance: Undecided
   Status: New

** Changed in: horizon/icehouse
   Importance: Undecided => Wishlist

** Changed in: horizon/icehouse
   Status: New => Fix Committed

** Changed in: horizon/icehouse
Milestone: None => 2014.1.3

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1352919

Title:
  horizon/workflows/base.py contains add_error() which conflicts with
  Django 1.7 definition

Status in OpenStack Dashboard (Horizon):
  Fix Committed
Status in OpenStack Dashboard (Horizon) icehouse series:
  Fix Committed

Bug description:
  As per the subject, horizon/workflows/base.py contains a definition of
  add_error(). Unfortunately, this is now a method name used by Django 1.7;
  the two definitions conflict, which leads to unit test errors when running
  with Django 1.7 installed.
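
  The nature of the clash can be shown with a small, self-contained sketch
  (hypothetical class names; this is not Horizon's actual hierarchy): Django
  1.7's form machinery began calling self.add_error(field, error) internally,
  so a pre-existing subclass method with the same name but a different
  signature gets called with arguments it never expected.

    class FormBase(object):
        """Stands in for django.forms.Form in this sketch."""

        def full_clean(self):
            # Django >= 1.7 reports validation problems via add_error().
            self.add_error(None, "something went wrong")


    class WorkflowAction(FormBase):
        # Pre-1.7 helper that happens to reuse the name add_error().
        def add_error(self, message):
            print("workflow error: %s" % message)


    try:
        WorkflowAction().full_clean()
    except TypeError as exc:
        print(exc)  # the signature mismatch that surfaces as test errors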

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1352919/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1372416] Re: Test failures due to removed ClientException from Ceilometer client

2014-09-29 Thread Adam Gandelman
** Also affects: horizon/icehouse
   Importance: Undecided
   Status: New

** Changed in: horizon/icehouse
   Importance: Undecided => Critical

** Changed in: horizon/icehouse
   Status: New => Fix Committed

** Changed in: horizon/icehouse
Milestone: None => 2014.1.3

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1372416

Title:
  Test failures due to removed ClientException from Ceilometer client

Status in OpenStack Dashboard (Horizon):
  Fix Committed
Status in OpenStack Dashboard (Horizon) icehouse series:
  Fix Committed

Bug description:
  The deprecated ClientException was removed from Ceilometer client: 
  
https://github.com/openstack/python-ceilometerclient/commit/09ad1ed7a3109a936f0e1bc9cbc904292607d70c

  However, we are still referencing it in Horizon: 
  
https://github.com/openstack/horizon/blob/master/openstack_dashboard/test/test_data/exceptions.py#L76

  It should be replaced with HTTPException.
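
  A minimal sketch of the swap in Horizon's test exception fixtures (the
  module path for the client's exceptions is an assumption based on that
  era's python-ceilometerclient, not re-verified):

    from ceilometerclient import exc as ceilometer_exceptions

    # Before (broken once the deprecated class was removed upstream):
    #     ceilometer_exception = ceilometer_exceptions.ClientException
    # After:
    ceilometer_exception = ceilometer_exceptions.HTTPException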

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1372416/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1236868] Re: image status set to killed even if has been deleted

2014-09-29 Thread Adam Gandelman
** Also affects: glance/icehouse
   Importance: Undecided
   Status: New

** Changed in: glance/icehouse
   Importance: Undecided => Medium

** Changed in: glance/icehouse
   Status: New => Fix Committed

** Changed in: glance/icehouse
Milestone: None => 2014.1.3

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to Glance.
https://bugs.launchpad.net/bugs/1236868

Title:
  image status set to killed even if has been deleted

Status in OpenStack Image Registry and Delivery Service (Glance):
  Fix Released
Status in Glance icehouse series:
  Fix Committed

Bug description:
  This error occurs with the following sequence of steps:

  1. Upload data to an image e.g. cinder upload-to-image
  2. image status is set to 'saving' as data is uploaded
  3. delete image before upload is complete
  4. image status goes to 'deleted' and image is deleted from backend store
  5. fail the upload
  6. image status then goes to 'killed' when it should stay as 'deleted' (see the sketch below)
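
  A minimal sketch of such a guard (illustrative only; the function and
  repository names here are assumptions, not Glance's actual upload code):

    def safe_kill(image_repo, image_id):
        """Mark a failed upload 'killed' unless the image is already gone."""
        image = image_repo.get(image_id)
        if image.status == 'deleted':
            # The image was deleted mid-upload; leave the terminal state alone.
            return image
        image.status = 'killed'
        image_repo.save(image)
        return image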

To manage notifications about this bug go to:
https://bugs.launchpad.net/glance/+bug/1236868/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1357462] Re: glance cannot find store for scheme mware_datastore

2014-09-29 Thread Adam Gandelman
** Also affects: glance/icehouse
   Importance: Undecided
   Status: New

** Changed in: glance/icehouse
   Importance: Undecided => High

** Changed in: glance/icehouse
   Status: New => Fix Committed

** Changed in: glance/icehouse
Milestone: None => 2014.1.3

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to Glance.
https://bugs.launchpad.net/bugs/1357462

Title:
  glance cannot find store for scheme mware_datastore

Status in OpenStack Image Registry and Delivery Service (Glance):
  Fix Released
Status in Glance icehouse series:
  Fix Committed

Bug description:
   I have python-glance-2014.1.2-1.el7ost.noarch

  when configuring

  default_store=vmware_datastore
  known_stores = glance.store.vmware_datastore.Store
  vmware_server_host = 10.34.69.76
  vmware_server_username=root
  vmware_server_password=qum5net
  vmware_datacenter_path="New Datacenter"
  vmware_datastore_name=shared

  glance-api doesn't seem to come up at all.
  glance image-list
  Error communicating with http://172.16.40.9:9292 [Errno 111] Connection 
refused

  there seems to be nothing interesting in the logs. After changing to
  the

default_store=file

glance image-create --disk-format vmdk --container-format bare
  --copy-from
  'http://str-02.rhev/OpenStack/cirros-0.3.1-x86_64-disk.vmdk'
  --name cirros-0.3.1-x86_64-disk.vmdk --is-public true --property
  vmware_disktype="sparse" --property vmware_adaptertype="ide"
  --property vmware_ostype="ubuntu64Guest" --name prdel --store
  vmware_datastore

  or

glance image-create --disk-format vmdk --container-format bare
  --file 'cirros-0.3.1-x86_64-disk.vmdk' --name
  cirros-0.3.1-x86_64-disk.vmdk --is-public true --property
  vmware_disktype="sparse" --property vmware_adaptertype="ide"
  --property vmware_ostype="ubuntu64Guest" --name prdel --store
  vmware_datastore

  the image remains in queued state

  I can see log lines
  2014-08-15 12:38:55.885 24732 DEBUG glance.store [-] Registering store  with schemes ('vsphere',) create_stores /usr/lib/python2.7/site-packages/glance/store/__init__.py:208
  2014-08-15 12:39:54.119 24764 DEBUG glance.api.v1.images [-] Store for scheme vmware_datastore not found get_store_or_400 /usr/lib/python2.7/site-packages/glance/api/v1/images.py:1057
  2014-08-15 12:43:31.408 24764 DEBUG glance.api.v1.images [eac2ff8d-d55a-4e2c-8006-95beef8a0d7b caffabe3f56e4e5cb5cbeb040224fe69 77e18ad8a31e4de2ab26f52fb15b3cc1 - - -] Store for scheme vmware_datastore not found get_store_or_400 /usr/lib/python2.7/site-packages/glance/api/v1/images.py:1057

  so it looks like there is an inconsistency in the scheme that should be
  used. After hardcoding

STORE_SCHEME = 'vmware_datastore'

  in the

/usr/lib/python2.7/site-packages/glance/store/vmware_datastore.py

  the behaviour changed, but did not improve very much:

glance image-create --disk-format vmdk --container-format bare --file 
'cirros-0.3.1-x86_64-disk.vmdk' --name cirros-0.3.1-x86_64-disk.vmdk 
--is-public true --property vmware_disktype="sparse" --property 
vmware_adaptertype="ide" --property vmware_ostype="ubuntu64Guest" --name 
prdel --store vmware_datastore
  400 Bad Request
  Store for image_id not found: 7edc22ae-f229-4f21-8f7d-fa19a03410be
  (HTTP 400)
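
  The mismatch can be illustrated with a small sketch (hypothetical code, not
  Glance's actual store registry): stores are looked up by the URI scheme they
  register, and the VMware store registers 'vsphere', so a lookup keyed on the
  configured name 'vmware_datastore' finds nothing.

    class VMwareDataStore(object):
        """Stand-in for glance.store.vmware_datastore.Store in this sketch."""

        @staticmethod
        def get_schemes():
            return ('vsphere',)


    # Build the scheme-to-store map the way a registry would.
    SCHEME_TO_STORE = {}
    for store_cls in (VMwareDataStore,):
        for scheme in store_cls.get_schemes():
            SCHEME_TO_STORE[scheme] = store_cls


    def get_store_or_400(scheme):
        if scheme not in SCHEME_TO_STORE:
            raise LookupError("Store for scheme %s not found" % scheme)
        return SCHEME_TO_STORE[scheme]


    get_store_or_400('vsphere')              # resolves the store
    try:
        get_store_or_400('vmware_datastore')
    except LookupError as exc:
        print(exc)                           # matches the message in the logs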

To manage notifications about this bug go to:
https://bugs.launchpad.net/glance/+bug/1357462/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1341954] Re: suds client subject to cache poisoning by local attacker

2014-09-29 Thread Adam Gandelman
** Also affects: cinder/icehouse
   Importance: Undecided
   Status: New

** Changed in: cinder/icehouse
   Status: New => Fix Committed

** Changed in: cinder/icehouse
Milestone: None => 2014.1.3

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1341954

Title:
  suds client subject to cache poisoning by local attacker

Status in Cinder:
  Fix Released
Status in Cinder havana series:
  Fix Released
Status in Cinder icehouse series:
  Fix Committed
Status in Gantt:
  New
Status in OpenStack Compute (Nova):
  Fix Committed
Status in Oslo VMware library for OpenStack projects:
  Fix Released
Status in OpenStack Security Advisories:
  Won't Fix
Status in OpenStack Security Notes:
  New

Bug description:
  
  The suds project appears to be largely unmaintained upstream. The default 
cache implementation stores pickled objects to a predictable path in /tmp. This 
can be used by a local attacker to redirect SOAP requests via symlinks or run a 
privilege escalation / code execution attack via a pickle exploit. 

  cinder/requirements.txt:suds>=0.4
  gantt/requirements.txt:suds>=0.4
  nova/requirements.txt:suds>=0.4
  oslo.vmware/requirements.txt:suds>=0.4

  
  The details are available here - 
  https://bugzilla.redhat.com/show_bug.cgi?id=978696
  (CVE-2013-2217)

  Although this is an unlikely attack vector, steps should be taken to
  prevent this behaviour. Potential ways to fix this are explicitly
  setting the cache location to a directory created via
  tempfile.mkdtemp(), disabling the cache (client.set_options(cache=None)),
  or using a custom cache implementation that doesn't load / store pickled
  objects from an insecure location.
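
  A short sketch of the first two mitigations (the WSDL URL is a placeholder;
  the suds calls are the standard ones, but treat this as illustrative rather
  than the fix any particular project merged):

    import tempfile

    from suds.cache import ObjectCache
    from suds.client import Client

    WSDL_URL = 'https://example.com/service?wsdl'  # placeholder endpoint

    # Option 1: disable the on-disk cache entirely
    # (equivalent to client.set_options(cache=None)).
    client = Client(WSDL_URL, cache=None)

    # Option 2: keep caching, but in a fresh private (mode 0700) directory
    # created with tempfile.mkdtemp() instead of the predictable /tmp path.
    private_dir = tempfile.mkdtemp(prefix='suds-cache-')
    client = Client(WSDL_URL, cache=ObjectCache(location=private_dir, days=1))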

To manage notifications about this bug go to:
https://bugs.launchpad.net/cinder/+bug/1341954/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1348720] Re: Missing index for expire_reservations

2014-09-29 Thread Adam Gandelman
** Also affects: cinder/icehouse
   Importance: Undecided
   Status: New

** Changed in: cinder/icehouse
   Importance: Undecided => High

** Changed in: cinder/icehouse
   Status: New => Fix Committed

** Changed in: cinder/icehouse
Milestone: None => 2014.1.3

** Changed in: cinder/icehouse
 Assignee: (unassigned) => Vish Ishaya (vishvananda)

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1348720

Title:
  Missing index for expire_reservations

Status in Cinder:
  Fix Released
Status in Cinder icehouse series:
  Fix Committed
Status in OpenStack Compute (Nova):
  Fix Released

Bug description:
  While investigating some database performance problems, we discovered
  that there is no index on deleted for the reservations table. When
  this table gets large, the expire_reservations code will do a full
  table scan and take multiple seconds to complete. Because the expire
  runs as a periodic task, it can slow down the master database significantly
  and cause nova or cinder to become extremely slow.

  > EXPLAIN UPDATE reservations SET updated_at=updated_at, deleted_at='2014-07-24 22:26:17', deleted=id WHERE reservations.deleted = 0 AND reservations.expire < '2014-07-24 22:26:11';
  +----+-------------+--------------+-------+---------------+---------+---------+------+--------+------------------------------+
  | id | select_type | table        | type  | possible_keys | key     | key_len | ref  | rows   | Extra                        |
  +----+-------------+--------------+-------+---------------+---------+---------+------+--------+------------------------------+
  |  1 | SIMPLE      | reservations | index | NULL          | PRIMARY | 4       | NULL | 950366 | Using where; Using temporary |
  +----+-------------+--------------+-------+---------------+---------+---------+------+--------+------------------------------+

  An index on (deleted, expire) would be the most efficient.
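
  A sketch of the kind of schema migration that adds such a composite index
  (sqlalchemy-migrate style, as used by nova/cinder at the time; the index
  name here is an assumption, not necessarily the one in the merged fix):

    from sqlalchemy import Index, MetaData, Table


    def upgrade(migrate_engine):
        meta = MetaData(bind=migrate_engine)
        reservations = Table('reservations', meta, autoload=True)
        Index('reservations_deleted_expire_idx',
              reservations.c.deleted, reservations.c.expire).create(migrate_engine)


    def downgrade(migrate_engine):
        meta = MetaData(bind=migrate_engine)
        reservations = Table('reservations', meta, autoload=True)
        Index('reservations_deleted_expire_idx',
              reservations.c.deleted, reservations.c.expire).drop(migrate_engine)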

To manage notifications about this bug go to:
https://bugs.launchpad.net/cinder/+bug/1348720/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1350466] Re: deadlock in scheduler expire reservation periodic task

2014-09-29 Thread Adam Gandelman
** Also affects: cinder/icehouse
   Importance: Undecided
   Status: New

** Changed in: cinder/icehouse
   Importance: Undecided => High

** Changed in: cinder/icehouse
   Status: New => Fix Committed

** Changed in: cinder/icehouse
Milestone: None => 2014.1.3

** Changed in: cinder/icehouse
 Assignee: (unassigned) => Vish Ishaya (vishvananda)

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1350466

Title:
  deadlock in scheduler expire reservation periodic task

Status in Cinder:
  Fix Released
Status in Cinder icehouse series:
  Fix Committed
Status in OpenStack Compute (Nova):
  Fix Committed

Bug description:
  http://logs.openstack.org/54/105554/4/check/gate-tempest-dsvm-neutron-
  large-
  ops/45501af/logs/screen-n-sch.txt.gz?level=TRACE#_2014-07-30_16_26_20_158

  
  2014-07-30 16:26:20.158 17209 ERROR nova.openstack.common.periodic_task [-] Error during SchedulerManager._expire_reservations: (OperationalError) (1213, 'Deadlock found when trying to get lock; try restarting transaction') 'UPDATE reservations SET updated_at=updated_at, deleted_at=%s, deleted=id WHERE reservations.deleted = %s AND reservations.expire < %s' (datetime.datetime(2014, 7, 30, 16, 26, 20, 152098), 0, datetime.datetime(2014, 7, 30, 16, 26, 20, 149665))
  2014-07-30 16:26:20.158 17209 TRACE nova.openstack.common.periodic_task Traceback (most recent call last):
  2014-07-30 16:26:20.158 17209 TRACE nova.openstack.common.periodic_task   File "/opt/stack/new/nova/nova/openstack/common/periodic_task.py", line 198, in run_periodic_tasks
  2014-07-30 16:26:20.158 17209 TRACE nova.openstack.common.periodic_task     task(self, context)
  2014-07-30 16:26:20.158 17209 TRACE nova.openstack.common.periodic_task   File "/opt/stack/new/nova/nova/scheduler/manager.py", line 157, in _expire_reservations
  2014-07-30 16:26:20.158 17209 TRACE nova.openstack.common.periodic_task     QUOTAS.expire(context)
  2014-07-30 16:26:20.158 17209 TRACE nova.openstack.common.periodic_task   File "/opt/stack/new/nova/nova/quota.py", line 1401, in expire
  2014-07-30 16:26:20.158 17209 TRACE nova.openstack.common.periodic_task     self._driver.expire(context)
  2014-07-30 16:26:20.158 17209 TRACE nova.openstack.common.periodic_task   File "/opt/stack/new/nova/nova/quota.py", line 651, in expire
  2014-07-30 16:26:20.158 17209 TRACE nova.openstack.common.periodic_task     db.reservation_expire(context)
  2014-07-30 16:26:20.158 17209 TRACE nova.openstack.common.periodic_task   File "/opt/stack/new/nova/nova/db/api.py", line 1173, in reservation_expire
  2014-07-30 16:26:20.158 17209 TRACE nova.openstack.common.periodic_task     return IMPL.reservation_expire(context)
  2014-07-30 16:26:20.158 17209 TRACE nova.openstack.common.periodic_task   File "/opt/stack/new/nova/nova/db/sqlalchemy/api.py", line 149, in wrapper
  2014-07-30 16:26:20.158 17209 TRACE nova.openstack.common.periodic_task     return f(*args, **kwargs)
  2014-07-30 16:26:20.158 17209 TRACE nova.openstack.common.periodic_task   File "/opt/stack/new/nova/nova/db/sqlalchemy/api.py", line 3394, in reservation_expire
  2014-07-30 16:26:20.158 17209 TRACE nova.openstack.common.periodic_task     reservation_query.soft_delete(synchronize_session=False)
  2014-07-30 16:26:20.158 17209 TRACE nova.openstack.common.periodic_task   File "/opt/stack/new/nova/nova/openstack/common/db/sqlalchemy/session.py", line 694, in soft_delete
  2014-07-30 16:26:20.158 17209 TRACE nova.openstack.common.periodic_task     synchronize_session=synchronize_session)
  2014-07-30 16:26:20.158 17209 TRACE nova.openstack.common.periodic_task   File "/usr/lib/python2.7/dist-packages/sqlalchemy/orm/query.py", line 2690, in update
  2014-07-30 16:26:20.158 17209 TRACE nova.openstack.common.periodic_task     update_op.exec_()
  2014-07-30 16:26:20.158 17209 TRACE nova.openstack.common.periodic_task   File "/usr/lib/python2.7/dist-packages/sqlalchemy/orm/persistence.py", line 816, in exec_
  2014-07-30 16:26:20.158 17209 TRACE nova.openstack.common.periodic_task     self._do_exec()
  2014-07-30 16:26:20.158 17209 TRACE nova.openstack.common.periodic_task   File "/usr/lib/python2.7/dist-packages/sqlalchemy/orm/persistence.py", line 913, in _do_exec
  2014-07-30 16:26:20.158 17209 TRACE nova.openstack.common.periodic_task     update_stmt, params=self.query._params)
  2014-07-30 16:26:20.158 17209 TRACE nova.openstack.common.periodic_task   File "/opt/stack/new/nova/nova/openstack/common/db/sqlalchemy/session.py", line 444, in _wrap
  2014-07-30 16:26:20.158 17209 TRACE nova.openstack.common.periodic_task     _raise_if_deadlock_error(e, self.bind.dialect.name)
  2014-07-30 16:26:20.158 17209 TRACE nova.openstack.common.periodic_task   File "/opt/stack/new/nova/nova/openstack/common/db/sqlalchemy/session.py", line 427, in _raise_if_deadlock_error
  2014-07-

[Yahoo-eng-team] [Bug 1375432] [NEW] Duplicate entry in gitignore

2014-09-29 Thread Matthew Treinish
Public bug reported:

The .gitignore file for nova contains the line for the sample config
file, etc/nova/nova.conf.sample, twice.

** Affects: nova
 Importance: Low
 Assignee: Matthew Treinish (treinish)
 Status: In Progress

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1375432

Title:
  Duplicate entry in gitignore

Status in OpenStack Compute (Nova):
  In Progress

Bug description:
  The .gitignore file for nova contains the line for the sample config
  file, etc/nova/nova.conf.sample, twice.

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1375432/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp

