[Yahoo-eng-team] [Bug 1294973] [NEW] Better error message is required when glance is dead while capturing

2014-03-20 Thread Manjunath
Public bug reported:

1. Bring down the Glance services:
Stopping openstack-glance-api: [  OK  ]
Stopping openstack-glance-registry:[  OK  ]
[root@nimbuswrkl665 images]#
2. Invoke the createImage operation for the server.

manjunath@manjunath-ThinkPad-T420:~/Desktop$ curl -k -i -X POST -H "X-Auth-Token: 6928aacccede46d7921c57fb827cd9be" -H "Content-Type: application/json" -d '{"createImage": {"name": "vm1_failed", "metadata": {}}}' https://nimbuswrkl665.rtp.stglabs.ibm.com/powervc/openstack/compute/v2/dd73d8ab4ea7498e8d103206e37c65b5/servers/93476dc9-c70e-4b73-af3e-1309fecdaf95/action

Response:

HTTP/1.1 500 Internal Server Error
Date: Mon, 10 Mar 2014 14:34:27 GMT
Content-Type: application/json; charset=UTF-8
X-Compute-Request-Id: req-75d8138c-1d32-4bb7-888f-7aa48bdd2011
Cache-control: no-cache
Pragma: no-cache
Content-Length: 128
Connection: close

{"computeFault": {"message": "The server has either erred or is
incapable of performing the requested operation.", "code":
500}}manjunath@manjunath-ThinkPad-T420:~/Desktop$


The message needs to be improved: a clear message is present in api.log, but it
is not passed through to the REST response (one possible mapping is sketched
below).
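
One possible mapping, as a minimal sketch: catch the image-service connection
failure at the API layer and return a 503 with the real cause instead of the
blanket 500 computeFault. GlanceConnectionFailed here is a local stand-in for
the nova exception seen in the log, and snapshot_fn is any callable performing
the snapshot; both are illustrative, not nova's actual code.

import webob.exc


class GlanceConnectionFailed(Exception):
    """Stand-in for the exception nova logs below."""


def create_image(snapshot_fn):
    try:
        return snapshot_fn()
    except GlanceConnectionFailed as exc:
        # Surface the real cause to the REST caller instead of a 500.
        raise webob.exc.HTTPServiceUnavailable(
            explanation="Image service unavailable: %s" % exc)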

2014-03-10 10:34:27.130 4713 ERROR nova.image.glance [req-75d8138c-1d32-4bb7-888f-7aa48bdd2011 0 dd73d8ab4ea7498e8d103206e37c65b5] Error contacting glance server '9.37.74.232:9292' for 'get', retrying.
2014-03-10 10:34:27.130 4713 TRACE nova.image.glance Traceback (most recent call last):
2014-03-10 10:34:27.130 4713 TRACE nova.image.glance   File "/usr/lib/python2.6/site-packages/nova/image/glance.py", line 211, in call
2014-03-10 10:34:27.130 4713 TRACE nova.image.glance     return getattr(client.images, method)(*args, **kwargs)
2014-03-10 10:34:27.130 4713 TRACE nova.image.glance   File "/usr/lib/python2.6/site-packages/glanceclient/v1/images.py", line 114, in get
2014-03-10 10:34:27.130 4713 TRACE nova.image.glance     % urllib.quote(str(image_id)))
2014-03-10 10:34:27.130 4713 TRACE nova.image.glance   File "/usr/lib/python2.6/site-packages/glanceclient/common/http.py", line 289, in raw_request
2014-03-10 10:34:27.130 4713 TRACE nova.image.glance     return self._http_request(url, method, **kwargs)
2014-03-10 10:34:27.130 4713 TRACE nova.image.glance   File "/usr/lib/python2.6/site-packages/glanceclient/common/http.py", line 235, in _http_request
2014-03-10 10:34:27.130 4713 TRACE nova.image.glance     raise exc.CommunicationError(message=message)
2014-03-10 10:34:27.130 4713 TRACE nova.image.glance CommunicationError: Error communicating with http://9.37.74.232:9292 [Errno 111] ECONNREFUSED
2014-03-10 10:34:27.130 4713 TRACE nova.image.glance
2014-03-10 10:34:27.240 4713 WARNING nova.compute.utils [req-75d8138c-1d32-4bb7-888f-7aa48bdd2011 0 dd73d8ab4ea7498e8d103206e37c65b5] [instance: 93476dc9-c70e-4b73-af3e-1309fecdaf95] NV-6BF9597 Can't access image 36f6d470-3a3f-4113-8724-b018280c8f27: NV-BAD1189 Connection to glance host 9.37.74.232:9292 failed: Error communicating with http://9.37.74.232:9292 [Errno 111] ECONNREFUSED
2014-03-10 10:34:27.244 4713 ERROR nova.image.glance [req-75d8138c-1d32-4bb7-888f-7aa48bdd2011 0 dd73d8ab4ea7498e8d103206e37c65b5] Error contacting glance server '9.37.74.232:9292' for 'create', retrying.
2014-03-10 10:34:27.244 4713 TRACE nova.image.glance Traceback (most recent call last):
2014-03-10 10:34:27.244 4713 TRACE nova.image.glance   File "/usr/lib/python2.6/site-packages/nova/image/glance.py", line 211, in call
2014-03-10 10:34:27.244 4713 TRACE nova.image.glance     return getattr(client.images, method)(*args, **kwargs)
2014-03-10 10:34:27.244 4713 TRACE nova.image.glance   File "/usr/lib/python2.6/site-packages/glanceclient/v1/images.py", line 253, in create
2014-03-10 10:34:27.244 4713 TRACE nova.image.glance     'POST', '/v1/images', headers=hdrs, body=image_data)
2014-03-10 10:34:27.244 4713 TRACE nova.image.glance   File "/usr/lib/python2.6/site-packages/glanceclient/common/http.py", line 289, in raw_request
2014-03-10 10:34:27.244 4713 TRACE nova.image.glance     return self._http_request(url, method, **kwargs)
2014-03-10 10:34:27.244 4713 TRACE nova.image.glance   File "/usr/lib/python2.6/site-packages/glanceclient/common/http.py", line 235, in _http_request
2014-03-10 10:34:27.244 4713 TRACE nova.image.glance     raise exc.CommunicationError(message=message)
2014-03-10 10:34:27.244 4713 TRACE nova.image.glance CommunicationError: Error communicating with http://9.37.74.232:9292 [Errno 111] ECONNREFUSED
2014-03-10 10:34:27.244 4713 TRACE nova.image.glance
2014-03-10 10:34:27.255 4713 ERROR nova.api.openstack [req-75d8138c-1d32-4bb7-888f-7aa48bdd2011 0 dd73d8ab4ea7498e8d103206e37c65b5] NV-A68A08C Caught error: NV-BAD1189 Connection to glance host 9.37.74.232:9292 failed: Error communicating with http://9.37.74.232:9292 [Errno 111] ECONNREFUSED
2014-03-10 10:34:27.255 4713 TRACE nova.api.openstack Traceback (most recent call last):
2014-03-10 10:34:27.255 4713 TRACE

[Yahoo-eng-team] [Bug 1294974] [NEW] Can't easily stop a transaction from yielding

2014-03-20 Thread Kevin Benton
Public bug reported:

Many of the current plugins (e.g. ML2) operate under the assumption that
the greenthread will not cooperatively yield during a transaction. When
this assumption is broken, there is a risk of a mysql/eventlet deadlock
(illustrated in the sketch below).
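
A minimal, self-contained illustration of the hazard (an illustration only,
not Neutron code): txn_a yields while holding a lock that stands in for a
MySQL row lock, and txn_b then blocks the whole native thread on that lock,
exactly as a blocking MySQLdb call would, so txn_a can never resume and the
process deadlocks.

import threading

import eventlet

row_lock = threading.Lock()  # stands in for a MySQL row lock


def txn_a():
    with row_lock:
        eventlet.sleep(0)  # cooperative yield inside the "transaction"


def txn_b():
    with row_lock:  # blocks the OS thread; the hub never runs again
        pass


if __name__ == "__main__":
    a = eventlet.spawn(txn_a)
    b = eventlet.spawn(txn_b)
    a.wait()  # never returns: the process is deadlocked
    b.wait()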

** Affects: neutron
 Importance: Undecided
 Assignee: Kevin Benton (kevinbenton)
 Status: In Progress

** Changed in: neutron
 Assignee: (unassigned) => Kevin Benton (kevinbenton)

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1294974

Title:
  Can't easily stop a transaction from yielding

Status in OpenStack Neutron (virtual network service):
  In Progress

Bug description:
  Many of the current plugins (e.g. ML2) operate under the assumption
  that the greenthread will not cooperatively yield during a
  transaction. When this assumption is broken, there is a risk of a
  mysql/eventlet deadlock.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1294974/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1294166] Re: XML test_list_ports_binding_ext_attr does not consider plugin-specific attributes

2014-03-20 Thread Akihiro Motoki
(Woops,... comment posting failed)

Thanks Ken'ichi,

It is not a tempest bug but a neutron bug (a specific plugin bug).
In the XML specification, if a tag name contains a colon, the part before the
colon is interpreted as a namespace prefix.
In this case it is just an attribute name and is not intended to define a
namespace.
This field is specific to the NEC plugin, and it seems better to rename the
attribute so that it does not contain a colon.
Ideally the XML serializer in Neutron should detect (or escape) these errors,
but for now renaming looks like a good solution (see the illustration below).
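
A quick standalone illustration of the colon problem, using Python's stdlib
parser rather than Neutron's serializer:

import xml.etree.ElementTree as ET

# A tag whose "binding:" prefix is undeclared is not well-formed XML:
# the part before the colon is treated as a namespace prefix.
try:
    ET.fromstring('<port><binding:profile/></port>')
except ET.ParseError as err:
    print(err)  # e.g. "unbound prefix: line 1, column 6"

# Declaring the prefix makes it parse, but then the element lives in a
# namespace, which is not what a plain attribute name intends:
root = ET.fromstring(
    '<port xmlns:binding="http://example/binding"><binding:profile/></port>')
print(root[0].tag)  # {http://example/binding}profile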


** Changed in: tempest
   Status: New => Invalid

** Also affects: neutron
   Importance: Undecided
   Status: New

** Changed in: neutron
   Importance: Undecided => Low

** Summary changed:

- XML test_list_ports_binding_ext_attr does not consider plugin-specific 
attributes
+ nec plugin fails with XML test_list_ports_binding_ext_attr tempest tests

** Changed in: neutron
 Assignee: (unassigned) => Akihiro Motoki (amotoki)

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1294166

Title:
  nec plugin fails with XML test_list_ports_binding_ext_attr tempest
  tests

Status in OpenStack Neutron (virtual network service):
  New
Status in Tempest:
  Invalid

Bug description:
  commit 1ec6e18007691c92fc27235c677d11b0fe1c1f6b in tempest adds a
  validation for the neutron port binding extension, but the check is too
  strict. The binding:profile attribute in the port binding extension is
  designed to allow plugin-specific fields, but the check assumes the
  binding:profile attribute as implemented in the ML2 plugin. This leads to
  test failures with neutron plugins that have different keys in
  binding:profile, such as the NEC plugin or the Mellanox plugin (though the
  Mellanox plugin does not run the related test now).

  The failed test is test_list_ports_binding_ext_attr. It only affects
  XML.

tempest.api.network.test_ports.PortsAdminExtendedAttrsIpV6TestXML 
test_list_ports_binding_ext_attr
tempest.api.network.test_ports.PortsAdminExtendedAttrsTestXML 
test_list_ports_binding_ext_attr

  Tempest tests should allow a plugin-specific binding:profile attribute. If
  not, they should provide an option to disable these tests.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1294166/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1295165] [NEW] lbaas is unreliable in the gate

2014-03-20 Thread Sean Dague
Public bug reported:

the lbaas service is quite unreliable in the gate and is causing a large
number of gate resets.

http://logs.openstack.org/78/81778/1/check/check-tempest-dsvm-
neutron/0a85d7c/console.html#_2014-03-20_13_43_58_324

TRACE and ERROR logs for this run are here:

http://logs.openstack.org/78/81778/1/check/check-tempest-dsvm-
neutron/0a85d7c/logs/screen-q-lbaas.txt.gz?level=TRACE

regardless, the fact that lbaas is making lots of blind network
namespace calls that are failing (and exploding) seems bad.

** Affects: neutron
 Importance: Critical
 Status: New

** Changed in: neutron
   Importance: Undecided => Critical

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1295165

Title:
  lbaas is unreliable in the gate

Status in OpenStack Neutron (virtual network service):
  New

Bug description:
  the lbaas service is quite unreliable in the gate and is causing a large
  number of gate resets.

  http://logs.openstack.org/78/81778/1/check/check-tempest-dsvm-
  neutron/0a85d7c/console.html#_2014-03-20_13_43_58_324

  TRACE and ERROR logs for this run are here:

  http://logs.openstack.org/78/81778/1/check/check-tempest-dsvm-
  neutron/0a85d7c/logs/screen-q-lbaas.txt.gz?level=TRACE

  regardless, the fact that lbaas is making lots of blind network
  namespace calls that are failing (and exploding) seems bad.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1295165/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1295187] [NEW] Fix typo in lbaas agent exception message

2014-03-20 Thread Eugene Nikanorov
Public bug reported:

Fix typo in lbaas agent exception message

** Affects: neutron
 Importance: Wishlist
 Assignee: Eugene Nikanorov (enikanorov)
 Status: In Progress


** Tags: low-hanging-fruit

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1295187

Title:
  Fix typo in lbaas agent exception message

Status in OpenStack Neutron (virtual network service):
  In Progress

Bug description:
  Fix typo in lbaas agent exception message

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1295187/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1295186] [NEW] Horizon change password fails when Keystone+LDAP.

2014-03-20 Thread Tzach Shefi
Public bug reported:

Description of problem:

When using Keystone+LDAP, setting/changing a password in Horizon fails with
"Error: Unable to change password."

Version-Release number of selected component (if applicable):
 RHEL 6.5
 openstack-dashboard-2013.2.2-1.el6ost.noarch
 openstack-keystone-2013.2.2-1.el6ost.noarch
 
How reproducible:
Every time


Steps to Reproduce:
1. Build setup
2. Configure keystone to use LDAP
3. Login with user to Horizon, click settings, change password

Actual results:
Can't change password - "Error: Unable to change password."

Expected results:
As a user I'd prefer any of the options below rather than a non-informative
"Error: Unable to change password.":

* If possible, make Horizon/Keystone update LDAP and actually change the
password.
* If it is not possible to update LDAP, then notify the user that LDAP
authentication is in use and the password must be changed in LDAP, or gray out
the change-password option while LDAP is in use (a sketch of this option
follows).
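
A hedged sketch of the second option (illustrative names, not Horizon's
actual form code): catch the backend rejection and raise a specific,
user-readable error instead of the generic one. BackendReadOnly stands in
for whatever error keystone returns for a read-only LDAP backend.

from django.core.exceptions import ValidationError


class BackendReadOnly(Exception):
    """Stand-in for the error keystone returns for a read-only backend."""


def change_password(update_fn, user_id, old_password, new_password):
    try:
        update_fn(user_id, old_password, new_password)
    except BackendReadOnly:
        raise ValidationError(
            "Passwords are managed by LDAP and cannot be changed here; "
            "please change your password in LDAP.")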

** Affects: horizon
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1295186

Title:
  Horizon change password fails when Keystone+LDAP.

Status in OpenStack Dashboard (Horizon):
  New

Bug description:
  Description of problem:

  When using Keystone+LDAP, setting/changing a password in Horizon fails with
  "Error: Unable to change password."

  Version-Release number of selected component (if applicable):
   RHEL 6.5
   openstack-dashboard-2013.2.2-1.el6ost.noarch
   openstack-keystone-2013.2.2-1.el6ost.noarch
   
  How reproducible:
  Every time

  
  Steps to Reproduce:
  1. Build setup
  2. Configure keystone to use LDAP
  3. Login with user to Horizon, click settings, change password

  Actual results:
  Can't change password - "Error: Unable to change password."

  Expected results:
  As a user I'd prefer any of the options below rather than a non-informative
  "Error: Unable to change password.":

  * If possible, make Horizon/Keystone update LDAP and actually change the
  password.
  * If it is not possible to update LDAP, then notify the user that LDAP
  authentication is in use and the password must be changed in LDAP, or gray
  out the change-password option while LDAP is in use.

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1295186/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1274317] Re: heal_instance_info_cache_interval config is not effective

2014-03-20 Thread Matthew Gilliard
** Changed in: nova
   Status: In Progress => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1274317

Title:
  heal_instance_info_cache_interval config is not effective

Status in OpenStack Compute (Nova):
  Fix Released

Bug description:
  There is a configuration item in /etc/nova/nova.conf that controls how
  often the instance info should be updated. By default the value is 60
  seconds. However, the current implementation only uses that value to keep
  the task from running too often; configuring a different value in nova.conf
  has no impact on how often the task is actually executed.

  If I change the code in /usr/lib/python2.6/site-
  packages/nova/compute/manager.py to pass the spacing parameter, the
  configured value takes effect. Please fix this bug.

  @periodic_task.periodic_task(spacing=CONF.heal_instance_info_cache_interval)
  def _heal_instance_info_cache(self, context):
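
  For reference, a simplified, self-contained sketch of the spacing mechanic
  (this mirrors the idea behind the periodic_task decorator, not nova's exact
  code): with spacing set, the task runs only when at least that many seconds
  have elapsed since its last run; with the default it runs on every tick.

  import time


  def periodic_task(spacing=0):
      def decorator(fn):
          fn._last_run = 0.0

          def run_if_due(*args, **kwargs):
              now = time.time()
              if now - fn._last_run >= spacing:
                  fn._last_run = now
                  return fn(*args, **kwargs)
          return run_if_due
      return decorator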

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1274317/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1295165] Re: lbaas is unreliable in the gate

2014-03-20 Thread Eugene Nikanorov
** Project changed: neutron => tempest

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1295165

Title:
  lbaas is unreliable in the gate

Status in Tempest:
  Confirmed

Bug description:
  the lbaas service is quite unreliable in the gate and is causing a large
  number of gate resets.

  http://logs.openstack.org/78/81778/1/check/check-tempest-dsvm-
  neutron/0a85d7c/console.html#_2014-03-20_13_43_58_324

  TRACE and ERROR logs for this run are here:

  http://logs.openstack.org/78/81778/1/check/check-tempest-dsvm-
  neutron/0a85d7c/logs/screen-q-lbaas.txt.gz?level=TRACE

  regardless, the fact that lbaas is making lots of blind network
  namespace calls that are failing (and exploding) seems bad.

To manage notifications about this bug go to:
https://bugs.launchpad.net/tempest/+bug/1295165/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1295212] [NEW] Revoke token intermittently dumps stacktrace - Icehouse M3

2014-03-20 Thread Haneef Ali
Public bug reported:

Revoking a token intermittently dumps a stack trace. I don't see a remove
method on the RevokeTree object. Maybe I'm missing something.

(keystone.common.wsgi): 2014-03-20 03:17:55,054 ERROR 'RevokeTree' object has no attribute 'remove'
Traceback (most recent call last):
  File "/usr/lib/python2.7/dist-packages/keystone/common/wsgi.py", line 205, in __call__
    result = method(context, **params)
  File "/usr/lib/python2.7/dist-packages/keystone/auth/controllers.py", line 316, in authenticate_for_token
    self.authenticate(context, auth_info, auth_context)
  File "/usr/lib/python2.7/dist-packages/keystone/auth/controllers.py", line 416, in authenticate
    auth_context)
  File "/usr/lib/python2.7/dist-packages/keystone/auth/plugins/token.py", line 39, in authenticate
    response = self.provider.validate_token(token_id)
  File "/usr/lib/python2.7/dist-packages/keystone/token/provider.py", line 118, in validate_token
    self._is_valid_token(token)
  File "/usr/lib/python2.7/dist-packages/keystone/token/provider.py", line 227, in _is_valid_token
    self.check_revocation(token)
  File "/usr/lib/python2.7/dist-packages/keystone/token/provider.py", line 156, in check_revocation
    return self.check_revocation_v3(token)
  File "/usr/lib/python2.7/dist-packages/keystone/token/provider.py", line 149, in check_revocation_v3
    self.revoke_api.check_token(token_values)
  File "/usr/lib/python2.7/dist-packages/keystone/contrib/revoke/core.py", line 190, in check_token
    self._cache.synchronize_revoke_map(self.driver)
  File "/usr/lib/python2.7/dist-packages/keystone/contrib/revoke/core.py", line 79, in synchronize_revoke_map
    self.revoke_map.remove(e)
AttributeError: 'RevokeTree' object has no attribute 'remove'
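
One defensive shape for the synchronize step, as a sketch only (every name
here is an illustrative stand-in, not keystone's actual fix): take the
incremental removal path only when the tree supports it, and otherwise
rebuild the tree from the driver's current events.

def synchronize_revoke_map(cache, driver, tree_factory):
    events = driver.list_events()  # hypothetical driver call
    if hasattr(cache.revoke_map, 'remove'):
        for stale in cache.stale_events(events):  # hypothetical helper
            cache.revoke_map.remove(stale)
    else:
        # Fall back to rebuilding the tree from the current events.
        cache.revoke_map = tree_factory(events)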

** Affects: keystone
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to Keystone.
https://bugs.launchpad.net/bugs/1295212

Title:
  Revoke token intermittently dumps stacktrace - Icehouse M3

Status in OpenStack Identity (Keystone):
  New

Bug description:
  Revoking a token intermittently dumps a stack trace. I don't see a remove
  method on the RevokeTree object. Maybe I'm missing something.

  (keystone.common.wsgi): 2014-03-20 03:17:55,054 ERROR 'RevokeTree' object has no attribute 'remove'
  Traceback (most recent call last):
    File "/usr/lib/python2.7/dist-packages/keystone/common/wsgi.py", line 205, in __call__
      result = method(context, **params)
    File "/usr/lib/python2.7/dist-packages/keystone/auth/controllers.py", line 316, in authenticate_for_token
      self.authenticate(context, auth_info, auth_context)
    File "/usr/lib/python2.7/dist-packages/keystone/auth/controllers.py", line 416, in authenticate
      auth_context)
    File "/usr/lib/python2.7/dist-packages/keystone/auth/plugins/token.py", line 39, in authenticate
      response = self.provider.validate_token(token_id)
    File "/usr/lib/python2.7/dist-packages/keystone/token/provider.py", line 118, in validate_token
      self._is_valid_token(token)
    File "/usr/lib/python2.7/dist-packages/keystone/token/provider.py", line 227, in _is_valid_token
      self.check_revocation(token)
    File "/usr/lib/python2.7/dist-packages/keystone/token/provider.py", line 156, in check_revocation
      return self.check_revocation_v3(token)
    File "/usr/lib/python2.7/dist-packages/keystone/token/provider.py", line 149, in check_revocation_v3
      self.revoke_api.check_token(token_values)
    File "/usr/lib/python2.7/dist-packages/keystone/contrib/revoke/core.py", line 190, in check_token
      self._cache.synchronize_revoke_map(self.driver)
    File "/usr/lib/python2.7/dist-packages/keystone/contrib/revoke/core.py", line 79, in synchronize_revoke_map
      self.revoke_map.remove(e)
  AttributeError: 'RevokeTree' object has no attribute 'remove'

To manage notifications about this bug go to:
https://bugs.launchpad.net/keystone/+bug/1295212/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1295214] [NEW] LbaaS agent scheduling error exposes implementation

2014-03-20 Thread Eugene Nikanorov
Public bug reported:

Right now when there is no active lbaas agent for HAProxy driver (or
other agent-based driver), the following error is returned to the
client:

No eligible loadbalancer agent found for pool pool_id

We need to return some more generic error to the user, skipping the
notion of agent.

Also, pool will remain in PENDING_CREATE state, while it probably should
move to an ERROR state.
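
A minimal sketch of both points (illustrative, not the actual Neutron patch):
translate the scheduling failure into a driver-neutral message and move the
pool out of PENDING_CREATE into ERROR. NoEligibleBackend and the two callables
are stand-ins.

class NoEligibleBackend(Exception):
    pass


def create_pool(schedule_fn, update_status_fn, pool):
    try:
        schedule_fn(pool)  # may raise the agent-scheduling error
    except Exception:
        update_status_fn(pool['id'], 'ERROR')  # don't strand PENDING_CREATE
        # The user-facing text avoids the notion of an agent.
        raise NoEligibleBackend(
            "No eligible backend for pool %s" % pool['id'])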

** Affects: neutron
 Importance: Medium
 Assignee: Eugene Nikanorov (enikanorov)
 Status: New


** Tags: lbaas

** Description changed:

  Right now when there is no active lbaas agent for HAProxy driver (or
  other agent-based driver), the following error is returned to the
  client:
  
  No eligible loadbalancer agent found for pool pool_id
  
  We need to return some more generic error to the user, skipping the
  notion of agent.
+ 
+ Also, pool will remain in PENDING_CREATE state, while it probably should
+ move to an ERROR state.

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1295214

Title:
  LbaaS agent scheduling error exposes implementation

Status in OpenStack Neutron (virtual network service):
  New

Bug description:
  Right now, when there is no active lbaas agent for the HAProxy driver (or
  another agent-based driver), the following error is returned to the
  client:

  "No eligible loadbalancer agent found for pool pool_id"

  We need to return a more generic error to the user, omitting the
  notion of an agent.

  Also, the pool will remain in PENDING_CREATE state, while it probably
  should move to an ERROR state.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1295214/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1295223] [NEW] seedfrom cmdline option broken in 0.7.5

2014-03-20 Thread Scott Moser
Public bug reported:

the ability to pass in 'seedfrom' on the kernel command line to the
nocloud datasource was broken by the vendor-data changes.

The fix is simple.


To recreate, just boot an image with 'ds=nocloud-net;seedfrom=http://.'
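
A standalone sketch of what the NoCloud datasource parses from the kernel
command line (simplified; cloud-init's real parser handles more keys, and
the seedfrom URL below is a made-up example):

def parse_nocloud_cmdline(cmdline):
    for tok in cmdline.split():
        if tok.startswith('ds=nocloud'):
            parts = tok.split(';')
            name = parts[0][len('ds='):]  # 'nocloud' or 'nocloud-net'
            kv = dict(p.split('=', 1) for p in parts[1:] if '=' in p)
            return name, kv.get('seedfrom')
    return None, None


print(parse_nocloud_cmdline(
    'ro ds=nocloud-net;seedfrom=http://169.254.1.1/ quiet'))
# ('nocloud-net', 'http://169.254.1.1/')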

** Affects: cloud-init
 Importance: Medium
 Status: Fix Committed

** Affects: cloud-init (Ubuntu)
 Importance: High
 Status: In Progress

** Also affects: cloud-init (Ubuntu)
   Importance: Undecided
   Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to cloud-init.
https://bugs.launchpad.net/bugs/1295223

Title:
  seedfrom cmdline option broken in 0.7.5

Status in Init scripts for use on cloud images:
  Fix Committed
Status in “cloud-init” package in Ubuntu:
  In Progress

Bug description:
  the ability to pass in 'seedfrom' on the kernel command line to the
  nocloud datasource was broken by the vendor-data changes.

  The fix is simple.


  To recreate, just boot an image with 'ds=nocloud-net;seedfrom=http://.'

To manage notifications about this bug go to:
https://bugs.launchpad.net/cloud-init/+bug/1295223/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1295212] Re: Revoke token intermittently dumps stacktrace - Icehouse M3

2014-03-20 Thread Haneef Ali
Looks like this was fixed upstream on 3/8 by Morgan.

** Changed in: keystone
   Status: New => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to Keystone.
https://bugs.launchpad.net/bugs/1295212

Title:
  Revoke token intermittently dumps stacktrace - Icehouse M3

Status in OpenStack Identity (Keystone):
  Fix Released

Bug description:
  Revoking a token intermittently dumps a stack trace. I don't see a remove
  method on the RevokeTree object. Maybe I'm missing something.

  (keystone.common.wsgi): 2014-03-20 03:17:55,054 ERROR 'RevokeTree' object has no attribute 'remove'
  Traceback (most recent call last):
    File "/usr/lib/python2.7/dist-packages/keystone/common/wsgi.py", line 205, in __call__
      result = method(context, **params)
    File "/usr/lib/python2.7/dist-packages/keystone/auth/controllers.py", line 316, in authenticate_for_token
      self.authenticate(context, auth_info, auth_context)
    File "/usr/lib/python2.7/dist-packages/keystone/auth/controllers.py", line 416, in authenticate
      auth_context)
    File "/usr/lib/python2.7/dist-packages/keystone/auth/plugins/token.py", line 39, in authenticate
      response = self.provider.validate_token(token_id)
    File "/usr/lib/python2.7/dist-packages/keystone/token/provider.py", line 118, in validate_token
      self._is_valid_token(token)
    File "/usr/lib/python2.7/dist-packages/keystone/token/provider.py", line 227, in _is_valid_token
      self.check_revocation(token)
    File "/usr/lib/python2.7/dist-packages/keystone/token/provider.py", line 156, in check_revocation
      return self.check_revocation_v3(token)
    File "/usr/lib/python2.7/dist-packages/keystone/token/provider.py", line 149, in check_revocation_v3
      self.revoke_api.check_token(token_values)
    File "/usr/lib/python2.7/dist-packages/keystone/contrib/revoke/core.py", line 190, in check_token
      self._cache.synchronize_revoke_map(self.driver)
    File "/usr/lib/python2.7/dist-packages/keystone/contrib/revoke/core.py", line 79, in synchronize_revoke_map
      self.revoke_map.remove(e)
  AttributeError: 'RevokeTree' object has no attribute 'remove'

To manage notifications about this bug go to:
https://bugs.launchpad.net/keystone/+bug/1295212/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1295223] Re: seedfrom cmdline option broken in 0.7.5

2014-03-20 Thread Launchpad Bug Tracker
This bug was fixed in the package cloud-init - 0.7.5~bzr970-0ubuntu1

---
cloud-init (0.7.5~bzr970-0ubuntu1) trusty; urgency=medium

  * New upstream snapshot.
* fix NoCloud and seedfrom on the kernel command line (LP: #1295223)
 -- Scott Moser smo...@ubuntu.com   Thu, 20 Mar 2014 12:35:58 -0400

** Changed in: cloud-init (Ubuntu)
   Status: In Progress => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to cloud-init.
https://bugs.launchpad.net/bugs/1295223

Title:
  seedfrom cmdline option broken in 0.7.5

Status in Init scripts for use on cloud images:
  Fix Committed
Status in “cloud-init” package in Ubuntu:
  Fix Released

Bug description:
  the ability to pass in 'seedfrom' on the kernel command line to the
  nocloud datasource was broken by the vendor-data changes.

  The fix is simple.


  To recreate, just boot an image with 'ds=nocloud-net;seedfrom=http://.'

To manage notifications about this bug go to:
https://bugs.launchpad.net/cloud-init/+bug/1295223/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1295261] [NEW] test_v3_os_revoke.OSRevokeTests: invalid event issued_before time; Too early

2014-03-20 Thread Dolph Mathews
Public bug reported:

This occurred in a gate run (lost the link for the moment):

FAIL: 
keystone.tests.test_v3_os_revoke.OSRevokeTests.test_disabled_project_in_list
tags: worker-1
--
pythonlogging:'': {{{
Adding cache-proxy 'keystone.tests.test_cache.CacheIsolatingProxy' to backend.
Callback: `keystone.contrib.revoke.core.Manager._trust_callback` subscribed to 
event `identity.OS-TRUST:trust.deleted`.
Callback: `keystone.contrib.revoke.core.Manager._consumer_callback` subscribed 
to event `identity.OS-OAUTH1:consumer.deleted`.
Callback: `keystone.contrib.revoke.core.Manager._access_token_callback` 
subscribed to event `identity.OS-OAUTH1:access_token.deleted`.
Callback: `keystone.contrib.revoke.core.Manager._role_callback` subscribed to 
event `identity.role.deleted`.
Callback: `keystone.contrib.revoke.core.Manager._user_callback` subscribed to 
event `identity.user.deleted`.
Callback: `keystone.contrib.revoke.core.Manager._user_callback` subscribed to 
event `identity.user.disabled`.
Callback: `keystone.contrib.revoke.core.Manager._project_callback` subscribed 
to event `identity.project.deleted`.
Callback: `keystone.contrib.revoke.core.Manager._project_callback` subscribed 
to event `identity.project.disabled`.
Callback: `keystone.contrib.revoke.core.Manager._domain_callback` subscribed to 
event `identity.domain.disabled`.
CACHE_SET: Key: 'bed0a2f296a94d3598098bca72c3c2e3bef836b9' Value: 
({'enabled': True, 'id': 'a1c7e9c7c24e4ed0992b6c1c71a715df', 'name': 
'e0c8292cff164d0cba7eb93692fa65fc', 'description': 
'46be6831b7934ae086f4e2237b00e73c'}, {'v': 1, 'ct': 1395322845.632957})
CACHE_SET: Key: '60cee77d1bb7192469a3c40b86e5b33e2fd7ac19' Value: 
({'enabled': True, 'id': 'a1c7e9c7c24e4ed0992b6c1c71a715df', 'name': 
'e0c8292cff164d0cba7eb93692fa65fc', 'description': 
'46be6831b7934ae086f4e2237b00e73c'}, {'v': 1, 'ct': 1395322845.633679})
found extension EntryPoint.parse('qpid = 
oslo.messaging._drivers.impl_qpid:QpidDriver')
found extension EntryPoint.parse('zmq = 
oslo.messaging._drivers.impl_zmq:ZmqDriver')
found extension EntryPoint.parse('kombu = 
oslo.messaging._drivers.impl_rabbit:RabbitDriver')
found extension EntryPoint.parse('rabbit = 
oslo.messaging._drivers.impl_rabbit:RabbitDriver')
found extension EntryPoint.parse('fake = 
oslo.messaging._drivers.impl_fake:FakeDriver')
found extension EntryPoint.parse('log = 
oslo.messaging.notify._impl_log:LogDriver')
found extension EntryPoint.parse('messagingv2 = 
oslo.messaging.notify._impl_messaging:MessagingV2Driver')
found extension EntryPoint.parse('noop = 
oslo.messaging.notify._impl_noop:NoOpDriver')
found extension EntryPoint.parse('routing = 
oslo.messaging.notify._impl_routing:RoutingDriver')
found extension EntryPoint.parse('test = 
oslo.messaging.notify._impl_test:TestDriver')
found extension EntryPoint.parse('messaging = 
oslo.messaging.notify._impl_messaging:MessagingDriver')
CACHE_SET: Key: '5ec78fa245b6d4094510876ae4afc7435c60cbf4' Value: 
({'description': '8c27b380e06d4af8be1f3b3fa8916a13', 'enabled': True, 'id': 
'07746a4e1979445182c96eba082d593b', 'name': '7fc4100921d84f05ac2738e7d25c3574', 
'domain_id': 'a1c7e9c7c24e4ed0992b6c1c71a715df'}, {'v': 1, 'ct': 
1395322845.641914})
CACHE_SET: Key: 'e35808a841485cda6d2b42cb870bfb11261b0e46' Value: 
({'description': '8c27b380e06d4af8be1f3b3fa8916a13', 'enabled': True, 'id': 
'07746a4e1979445182c96eba082d593b', 'name': '7fc4100921d84f05ac2738e7d25c3574', 
'domain_id': 'a1c7e9c7c24e4ed0992b6c1c71a715df'}, {'v': 1, 'ct': 
1395322845.642477})
CACHE_GET: Key: 'bed0a2f296a94d3598098bca72c3c2e3bef836b9' Value: 
({'enabled': True, 'id': 'a1c7e9c7c24e4ed0992b6c1c71a715df', 'name': 
'e0c8292cff164d0cba7eb93692fa65fc', 'description': 
'46be6831b7934ae086f4e2237b00e73c'}, {'v': 1, 'ct': 1395322845.632957})
CACHE_SET: Key: '935a80a8b7b81bc94c1c17864dd103a9fb93a015' Value: 
({'description': '7bc81fd73cf04d6a82760ebcbc3ffaa2', 'enabled': True, 'id': 
'678d7e87fb794c3e941adf5294b13ea6', 'name': 'd8debaadff324df9870055e2ea07ea4b', 
'domain_id': 'default'}, {'v': 1, 'ct': 1395322845.717836})
CACHE_SET: Key: '2ecc52c74d45fbd9344d1d5453f7669bccafbf3a' Value: 
({'description': '7bc81fd73cf04d6a82760ebcbc3ffaa2', 'enabled': True, 'id': 
'678d7e87fb794c3e941adf5294b13ea6', 'name': 'd8debaadff324df9870055e2ea07ea4b', 
'domain_id': 'default'}, {'v': 1, 'ct': 1395322845.718511})
CACHE_GET: Key: 'e45f4dc1a9bd1a59610ed5aa0db40470f719a2c3' Value: <dogpile.cache.api.NoValue object at 0x647be10>
NeedRegenerationException
no value, waiting for create lock
value creation lock <dogpile.cache.region._LockWrapper object at 0x89c2e10> acquired
CACHE_GET: Key: 'e45f4dc1a9bd1a59610ed5aa0db40470f719a2c3' Value: <dogpile.cache.api.NoValue object at 0x7cda9d0>
Calling creation function
CACHE_SET: Key: 'e45f4dc1a9bd1a59610ed5aa0db40470f719a2c3' Value: ({'enabled': True, 'id': u'default', 'name': u'Default', u'description': u'Owns users and tenants (i.e. projects)

[Yahoo-eng-team] [Bug 1251792] Re: infinite recursion when deleting an instance with no network interfaces

2014-03-20 Thread Alan Pevec
stable/havana was mistakenly marked as released while merged patch 58471 only 
had Related-bug: 1251792
https://review.openstack.org/57042 needs to be backported to fix it in Havana.

** Changed in: nova/havana
Milestone: 2013.2.1 => None

** Changed in: nova/havana
   Status: Fix Released => New

** Changed in: nova/havana
 Assignee: Armando Migliaccio (armando-migliaccio) => (unassigned)

** Changed in: nova/havana
   Status: New => Confirmed

** Changed in: nova/havana
 Assignee: (unassigned) => Aaron Rosen (arosen)

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1251792

Title:
  infinite recursion when deleting an instance with no network
  interfaces

Status in OpenStack Compute (Nova):
  Fix Released
Status in OpenStack Compute (nova) havana series:
  Confirmed

Bug description:
  In some situations when an instance has no network information (a
  phrase that I'm using loosely), deleting the instance results in
  infinite recursion. The stack looks like this:

  2013-11-15 18:50:28.995 DEBUG nova.network.neutronv2.api [req-28f48294-0877-4f09-bcc1-7595dbd4c15a demo demo]   File "/usr/lib/python2.7/dist-packages/eventlet/greenpool.py", line 80, in _spawn_n_impl
      func(*args, **kwargs)
    File "/opt/stack/nova/nova/openstack/common/rpc/amqp.py", line 461, in _process_data
      **args)
    File "/opt/stack/nova/nova/openstack/common/rpc/dispatcher.py", line 172, in dispatch
      result = getattr(proxyobj, method)(ctxt, **kwargs)
    File "/opt/stack/nova/nova/compute/manager.py", line 354, in decorated_function
      return function(self, context, *args, **kwargs)
    File "/opt/stack/nova/nova/exception.py", line 73, in wrapped
      return f(self, context, *args, **kw)
    File "/opt/stack/nova/nova/compute/manager.py", line 230, in decorated_function
      return function(self, context, *args, **kwargs)
    File "/opt/stack/nova/nova/compute/manager.py", line 295, in decorated_function
      function(self, context, *args, **kwargs)
    File "/opt/stack/nova/nova/compute/manager.py", line 259, in decorated_function
      return function(self, context, *args, **kwargs)
    File "/opt/stack/nova/nova/compute/manager.py", line 1984, in terminate_instance
      do_terminate_instance(instance, bdms)
    File "/opt/stack/nova/nova/openstack/common/lockutils.py", line 248, in inner
      return f(*args, **kwargs)
    File "/opt/stack/nova/nova/compute/manager.py", line 1976, in do_terminate_instance
      reservations=reservations)
    File "/opt/stack/nova/nova/hooks.py", line 105, in inner
      rv = f(*args, **kwargs)
    File "/opt/stack/nova/nova/compute/manager.py", line 1919, in _delete_instance
      self._shutdown_instance(context, db_inst, bdms)
    File "/opt/stack/nova/nova/compute/manager.py", line 1829, in _shutdown_instance
      network_info = self._get_instance_nw_info(context, instance)
    File "/opt/stack/nova/nova/compute/manager.py", line 868, in _get_instance_nw_info
      instance)
    File "/opt/stack/nova/nova/network/neutronv2/api.py", line 449, in get_instance_nw_info
      result = self._get_instance_nw_info(context, instance, networks)
    File "/opt/stack/nova/nova/network/api.py", line 64, in wrapper
      nw_info=res)

  RECURSION STARTS HERE

    File "/opt/stack/nova/nova/network/api.py", line 77, in update_instance_cache_with_nw_info
      nw_info = api._get_instance_nw_info(context, instance)
    File "/opt/stack/nova/nova/network/api.py", line 64, in wrapper
      nw_info=res)

  ... REPEATS AD NAUSEUM ...

    File "/opt/stack/nova/nova/network/api.py", line 77, in update_instance_cache_with_nw_info
      nw_info = api._get_instance_nw_info(context, instance)
    File "/opt/stack/nova/nova/network/api.py", line 64, in wrapper
      nw_info=res)
    File "/opt/stack/nova/nova/network/api.py", line 77, in update_instance_cache_with_nw_info
      nw_info = api._get_instance_nw_info(context, instance)
    File "/opt/stack/nova/nova/network/api.py", line 49, in wrapper
      res = f(self, context, *args, **kwargs)
    File "/opt/stack/nova/nova/network/neutronv2/api.py", line 459, in _get_instance_nw_info
      LOG.debug('%s', ''.join(traceback.format_stack()))

  Here's a step-by-step explanation of how the infinite recursion
  arises:

  1. somebody calls nova.network.neutronv2.api.API.get_instance_nw_info

  2. in the above call, the network info is successfully retrieved as
  result = self._get_instance_nw_info(context, instance, networks)

  3. however, since the instance has no network information, result is
  the empty list (i.e., [])

  4. the result is put in the cache by calling
  nova.network.api.update_instance_cache_with_nw_info

  5. update_instance_cache_with_nw_info is supposed to add the result to
  the cache, but due to a bug in update_instance_cache_with_nw_info, it
  recursively calls api.get_instance_nw_info, which brings us back to
  

[Yahoo-eng-team] [Bug 1295280] [NEW] Dependency Injection always injects optional after import

2014-03-20 Thread Morgan Fainberg
Public bug reported:

Dependency Injection is currently always loading optional dependencies
because they are registered in a _factories list on import, and this list is
never cleared. While production won't see this issue in practice, for test
cases this is a broken mechanic: tests cannot clear the _factories list
because import only occurs once (a minimal sketch follows).
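
A self-contained sketch of the mechanic (names mirror the description above,
not keystone's exact code): the registry fills at import time and never
un-fills, so tests would need an explicit reset hook like the one shown.

_factories = {}


def register_factory(name, factory):
    _factories[name] = factory  # runs at import time, never un-runs


def reset_factories():
    """What tests need: clear state that a re-import cannot re-create."""
    _factories.clear()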

** Affects: keystone
 Importance: Low
 Status: Triaged

** Changed in: keystone
   Importance: Undecided => Medium

** Changed in: keystone
   Importance: Medium => Low

** Changed in: keystone
   Status: New => Confirmed

** Changed in: keystone
   Status: Confirmed => Triaged

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to Keystone.
https://bugs.launchpad.net/bugs/1295280

Title:
  Dependency Injection always injects optional after import

Status in OpenStack Identity (Keystone):
  Triaged

Bug description:
  Dependency Injection is currently always loading optional dependencies
  because they are registered in a _factories list on import, and this list
  is never cleared. While production won't see this issue in practice, for
  test cases this is a broken mechanic: tests cannot clear the _factories
  list because import only occurs once.

To manage notifications about this bug go to:
https://bugs.launchpad.net/keystone/+bug/1295280/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1294409] Re: Create User form fields are pre-filled

2014-03-20 Thread Cindy Lu
Hi Facundo,

Indeed, I no longer see the problem today.  Thanks for looking into it
though. :)

** Changed in: horizon
   Status: New => Invalid

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1294409

Title:
  Create User form fields are pre-filled

Status in OpenStack Dashboard (Horizon):
  Invalid

Bug description:
  When I go to Admin > Users > Create User, I see that the Email and
  Password field are pre-filled and highlighted in yellow.  This
  behavior is unexpected.

  Please see attached image.

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1294409/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1295281] [NEW] py26 tests take too long to run

2014-03-20 Thread Mark McClain
Public bug reported:

The py26 unit tests can take over an hour to execute on 4-core machines.
They should be refactored to avoid duplicate tests.

** Affects: neutron
 Importance: Medium
 Assignee: Mark McClain (markmcclain)
 Status: Triaged

** Changed in: neutron
   Importance: Critical => Medium

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1295281

Title:
  py26 tests take too long to run

Status in OpenStack Neutron (virtual network service):
  Triaged

Bug description:
  The py26 unit tests can take over an hour to execute on 4-core machines.
  They should be refactored to avoid duplicate tests.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1295281/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1293794] Re: memcached_servers timeout causes poor API response time

2014-03-20 Thread sahid
Actually, after some investigation, it looks like we use version 1.48,
and this version accepts a 'socket_timeout' param in the client
constructor.

We can add an option to configure it (sketched below).
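
A sketch of the proposed knob, based on the observation above that
python-memcached 1.48 accepts socket_timeout in its constructor; the option
name and the one-second value are illustrative assumptions.

import memcache

memcached_servers = ['192.168.50.11:11211', '192.168.50.12:11211']
memcache_socket_timeout = 1  # seconds; hypothetical config option

client = memcache.Client(memcached_servers,
                         socket_timeout=memcache_socket_timeout)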

** Changed in: nova
   Importance: Undecided => Wishlist

** Changed in: nova
   Status: New => Confirmed

** Changed in: nova
 Assignee: (unassigned) => sahid (sahid-ferdjaoui)

** Changed in: nova
   Status: Confirmed => Invalid

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1293794

Title:
  memcached_servers timeout causes poor API response time

Status in OpenStack Compute (Nova):
  Invalid

Bug description:
  In nova.conf, when configured for HA by setting the memcached_servers
  parameter to several memcached servers in the nova API cluster, e.g.:

  memcached_servers=192.168.50.11:11211,192.168.50.12:11211,192.168.50.13:11211

  If there are memcached servers on this list that are down, the time it
  takes to complete Nova API requests increases from < 1 second to 3-6
  seconds.

  It seems to me that Nova should protect itself from such performance
  degradation in an HA scenario.

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1293794/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1105445] Re: Instance 'start' will fail after hypervisor reboot

2014-03-20 Thread Thang Pham
** Changed in: nova
   Status: Confirmed => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1105445

Title:
  Instance 'start' will fail after hypervisor reboot

Status in OpenStack Compute (Nova):
  Fix Released

Bug description:
  Simple to reproduce this with libvirt.  Let's assume there's a single
  VM running on the hypervisor in question.  Gracefully shut down the
  instance via the guest OS and wait for the power state to reflect that
  it is shut down.  Then reboot the hypervisor and attempt to call
  'nova start <uuid>'.  This operation will fail, because the VIFs,
  block devices, etc. are all missing.

  This comes down to us calling _create_domain() rather than
  _create_domain_and_network() within the libvirt driver.  The compute
  manager needs to pass more information into driver.power_on so that
  we can rebuild the VIFs and block connections - basically, the same
  treatment we have already given driver.resume(), driver.reboot(), etc.
  (a sketch of the shape of that change follows).
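
  A hedged sketch of the shape of that change (signatures simplified, not
  nova's exact ones): power_on receives enough context to re-plug VIFs and
  reconnect volumes, mirroring resume() and reboot().

  class LibvirtDriverSketch(object):
      """Illustrative stand-in for nova's libvirt driver."""

      def _create_domain_and_network(self, context, instance,
                                     network_info, block_device_info):
          raise NotImplementedError  # real driver re-plugs VIFs/volumes here

      def power_on(self, context, instance, network_info,
                   block_device_info=None):
          # Rebuild VIFs and block connections, not just the domain.
          self._create_domain_and_network(context, instance,
                                          network_info, block_device_info)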

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1105445/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1293407] Re: tempest is missing api_schema file

2014-03-20 Thread Joe Gordon
this doesn't sound like a nova bug; perhaps it's a tempest issue?

** Also affects: tempest
   Importance: Undecided
   Status: New

** No longer affects: nova

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1293407

Title:
  tempest is missing api_schema file

Status in Tempest:
  New

Bug description:
  There's no such file in my system or git repo:
  /opt/stack/tempest/tempest/api/compute/api_schema

  Error Message

  No such file /opt/stack/tempest/tempest/api/compute/api_schema
  -------------------- >> begin captured logging << --------------------
  tempest.test: DEBUG: Open schema file:
  /opt/stack/tempest/etc/schemas/compute/flavors/flavors_list.json
  tempest.test: DEBUG: {u'url': u'flavors/detail', u'http-method':
  u'GET', u'name': u'list-flavors-with-detail', u'json-schema':
  {u'type': u'object', u'properties': {u'minRam': {u'type': u'integer',
  u'results': {u'gen_none': 400, u'gen_string': 400}}, u'minDisk':
  {u'type': u'integer', u'results': {u'gen_none': 400, u'gen_string':
  400}
  --------------------- >> end captured logging << ---------------------

  Stacktrace

  Traceback (most recent call last):
    File "/usr/lib64/python2.6/unittest.py", line 278, in run
      testMethod()
    File "/usr/lib/python2.6/site-packages/nose-1.1.2-py2.6.egg/nose/failure.py", line 39, in runTest
      raise self.exc_class(self.exc_val)
  OSError: No such file /opt/stack/tempest/tempest/api/compute/api_schema
  -------------------- >> begin captured logging << --------------------
  tempest.test: DEBUG: Open schema file: 
/opt/stack/tempest/etc/schemas/compute/flavors/flavors_list.json
  tempest.test: DEBUG: {u'url': u'flavors/detail', u'http-method': u'GET', 
u'name': u'list-flavors-with-detail', u'json-schema': {u'type': u'object', 
u'properties': {u'minRam': {u'type': u'integer', u'results': {u'gen_none': 400, 
u'gen_string': 400}}, u'minDisk': {u'type': u'integer', u'results': 
{u'gen_none': 400, u'gen_string': 400}
  --------------------- >> end captured logging << ---------------------

To manage notifications about this bug go to:
https://bugs.launchpad.net/tempest/+bug/1293407/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1261510] Re: Instance fails to spawn in tempest tests

2014-03-20 Thread Alan Pevec
2013.1.5 is Grizzly, so I've removed it and added a Havana task.

** Changed in: neutron
Milestone: 2013.1.5 => None

** Also affects: neutron/havana
   Importance: Undecided
   Status: New

** Changed in: neutron/havana
   Importance: Undecided => High

** Changed in: neutron/havana
   Status: New => Triaged

** Changed in: neutron/havana
 Assignee: (unassigned) => Mark McClain (markmcclain)

** Changed in: neutron
   Status: Triaged => Incomplete

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1261510

Title:
  Instance fails to spawn in tempest tests

Status in OpenStack Neutron (virtual network service):
  Incomplete
Status in neutron havana series:
  Triaged
Status in OpenStack Compute (Nova):
  Invalid

Bug description:
  This happened only 3 times in the past 12 hours, so nothing to worry
  about so far.

  The logstash query for the exact failure in [1] is available at [2].
  I am also seeing more "Timed out waiting for thing" errors (not the same
  condition as bug 1254890, which affects the large_ops job and is due to the
  chatty nova/neutron interface). The logstash query for this is at [3] (13
  hits in the past 12 hours). I think they might have the same root cause.

  
  [1] 
http://logs.openstack.org/22/62322/2/check/check-tempest-dsvm-neutron-isolated/cce7146
  [2] 
http://logstash.openstack.org/#eyJzZWFyY2giOiJcImZhaWxlZCB0byByZWFjaCBBQ1RJVkUgc3RhdHVzXCIgQU5EICBcIkN1cnJlbnQgc3RhdHVzOiBCVUlMRFwiIEFORCBcIkN1cnJlbnQgdGFzayBzdGF0ZTogc3Bhd25pbmdcIiIsImZpZWxkcyI6W10sIm9mZnNldCI6MCwidGltZWZyYW1lIjoiNDMyMDAiLCJncmFwaG1vZGUiOiJjb3VudCIsInRpbWUiOnsidXNlcl9pbnRlcnZhbCI6MH0sInN0YW1wIjoxMzg3MjIzNzQ0Mjk2fQ==
  [3] 
http://logstash.openstack.org/#eyJzZWFyY2giOiJtZXNzYWdlOlwiRGV0YWlsczogVGltZWQgb3V0IHdhaXRpbmcgZm9yIHRoaW5nXCIiLCJmaWVsZHMiOltdLCJvZmZzZXQiOjAsInRpbWVmcmFtZSI6IjQzMjAwIiwiZ3JhcGhtb2RlIjoiY291bnQiLCJ0aW1lIjp7InVzZXJfaW50ZXJ2YWwiOjB9LCJzdGFtcCI6MTM4NzIyMzg2Mjg1MH0=

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1261510/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1200231] Re: Nova test suite breakage.

2014-03-20 Thread Alan Pevec
** Also affects: nova/grizzly
   Importance: Undecided
   Status: New

** Changed in: nova/grizzly
 Assignee: (unassigned) => Xavier Queralt (xqueralt)

** Changed in: nova/grizzly
   Importance: Undecided => High

** Changed in: nova/grizzly
   Status: New => Fix Committed

** Changed in: nova/grizzly
Milestone: None => 2013.1.5

** Tags removed: in-stable-grizzly

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1200231

Title:
  Nova test suite breakage.

Status in OpenStack Compute (Nova):
  Fix Released
Status in OpenStack Compute (nova) grizzly series:
  Fix Committed

Bug description:
  FAIL: nova.tests.test_quota.QuotaIntegrationTestCase.test_too_many_addresses
  tags: worker-5
  --
  Empty attachments:
stderr
stdout

  pythonlogging:'': {{{
  Loading network driver 'nova.network.linux_net'
  Starting network node (version 2013.2)
  Quota exceeded for admin, tried to allocate floating IP
  }}}

  Traceback (most recent call last):
    File "/tmp/buildd/nova-2013.2.a1884.gb14f9cd/nova/tests/test_quota.py", line 130, in test_too_many_addresses
      db.floating_ip_destroy(context.get_admin_context(), address)
    File "/tmp/buildd/nova-2013.2.a1884.gb14f9cd/nova/db/api.py", line 288, in floating_ip_destroy
      return IMPL.floating_ip_destroy(context, address)
    File "/tmp/buildd/nova-2013.2.a1884.gb14f9cd/nova/db/sqlalchemy/api.py", line 120, in wrapper
      return f(*args, **kwargs)
    File "/tmp/buildd/nova-2013.2.a1884.gb14f9cd/nova/db/sqlalchemy/api.py", line 790, in floating_ip_destroy
      filter_by(address=address).\
    File "/usr/lib/python2.7/dist-packages/sqlalchemy/orm/query.py", line 1245, in filter_by
      for key, value in kwargs.iteritems()]
    File "/usr/lib/python2.7/dist-packages/sqlalchemy/sql/operators.py", line 278, in __eq__
      return self.operate(eq, other)
    File "/usr/lib/python2.7/dist-packages/sqlalchemy/orm/attributes.py", line 252, in operate
      return op(self.comparator, *other, **kwargs)
    File "/usr/lib/python2.7/dist-packages/sqlalchemy/sql/operators.py", line 278, in __eq__
      return self.operate(eq, other)
    File "/usr/lib/python2.7/dist-packages/sqlalchemy/orm/properties.py", line 212, in operate
      return op(self.__clause_element__(), *other, **kwargs)
    File "/usr/lib/python2.7/dist-packages/sqlalchemy/sql/util.py", line 490, in __eq__
      return self.__element.__class__.__eq__(self, other)
    File "/usr/lib/python2.7/dist-packages/sqlalchemy/sql/operators.py", line 278, in __eq__
      return self.operate(eq, other)
    File "/usr/lib/python2.7/dist-packages/sqlalchemy/sql/expression.py", line 2300, in operate
      return op(self.comparator, *other, **kwargs)
    File "/usr/lib/python2.7/dist-packages/sqlalchemy/util/langhelpers.py", line 612, in __get__
      obj.__dict__[self.__name__] = result = self.fget(obj)
    File "/usr/lib/python2.7/dist-packages/sqlalchemy/sql/expression.py", line 2286, in comparator
      return self.type.comparator_factory(self)
    File "/usr/lib/python2.7/dist-packages/sqlalchemy/types.py", line 629, in comparator_factory
      {})
  TypeError: Cannot create a consistent method resolution
  order (MRO) for bases TDComparator, Comparator

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1200231/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1218878] Re: GroupAffinityFilter and GroupAntiAffinityFilter filters are broken

2014-03-20 Thread Alan Pevec
** Also affects: nova/grizzly
   Importance: Undecided
   Status: New

** Tags removed: in-stable-grizzly

** Changed in: nova/grizzly
   Status: New => Fix Committed

** Changed in: nova/grizzly
   Importance: Undecided => High

** Changed in: nova/grizzly
 Assignee: (unassigned) => Yaguang Tang (heut2008)

** Changed in: nova/grizzly
Milestone: None => 2013.1.5

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1218878

Title:
  GroupAffinityFilter and GroupAntiAffinityFilter filters are broken

Status in OpenStack Compute (Nova):
  Fix Released
Status in OpenStack Compute (nova) grizzly series:
  Fix Committed

Bug description:
  My test environment has 2 compute nodes: compute1 and compute3. First, I launch 1 instance (not tied to any group) on each node:
  $ nova boot --flavor m1.tiny --image cirros-0.3.1-x86_64-uec --key-name local --availability-zone nova:compute1 vm-compute1-nogroup
  $ nova boot --flavor m1.tiny --image cirros-0.3.1-x86_64-uec --key-name local --availability-zone nova:compute3 vm-compute3-nogroup

  So far so good, everything's active:
  $ nova list
  +--------------------------------------+---------------------+--------+------------+-------------+------------------+
  | ID                                   | Name                | Status | Task State | Power State | Networks         |
  +--------------------------------------+---------------------+--------+------------+-------------+------------------+
  | 3a465024-85e7-4e80-99a9-ccef3a4f41d5 | vm-compute1-nogroup | ACTIVE | None       | Running     | private=10.0.0.3 |
  | c838e0c4-3b4f-4030-b2a2-b21305c0f3ea | vm-compute3-nogroup | ACTIVE | None       | Running     | private=10.0.0.4 |
  +--------------------------------------+---------------------+--------+------------+-------------+------------------+

  Then I try to launch one instance in group 'foo' but it fails:
  $ nova boot --flavor m1.tiny --image cirros-0.3.1-x86_64-uec --key-name local --availability-zone nova:compute3 vm-compute3-nogroup
  $ nova list
  +--------------------------------------+---------------------+--------+------------+-------------+------------------+
  | ID                                   | Name                | Status | Task State | Power State | Networks         |
  +--------------------------------------+---------------------+--------+------------+-------------+------------------+
  | 3a465024-85e7-4e80-99a9-ccef3a4f41d5 | vm-compute1-nogroup | ACTIVE | None       | Running     | private=10.0.0.3 |
  | c838e0c4-3b4f-4030-b2a2-b21305c0f3ea | vm-compute3-nogroup | ACTIVE | None       | Running     | private=10.0.0.4 |
  | 743fa564-f38f-4f44-9913-d8adcae955a0 | vm1-foo             | ERROR  | None       | NOSTATE     |                  |
  +--------------------------------------+---------------------+--------+------------+-------------+------------------+

  I've pasted the scheduler logs [1] and my nova.conf file [2]. As you
  will see, the log message is there but it looks like group_hosts() [3]
  is returning all my hosts instead of only the ones that run instances
  from the group.

  [1] http://paste.openstack.org/show/45672/
  [2] http://paste.openstack.org/show/45671/
  [3] 
https://github.com/openstack/nova/blob/60a91f475a352e5e86bbd07b510cb32874110fef/nova/scheduler/driver.py#L137
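
  For illustration, a minimal sketch of the behaviour expected from
  group_hosts(): return only the hosts that already run an instance from
  the given group (hypothetical data model, not the actual driver code):

    # Each instance is assumed to be a dict carrying its host and the
    # names of the scheduler groups it belongs to.
    def group_hosts(instances, group_name):
        # Only hosts running at least one instance from this group count;
        # the bug reported here is that every host was returned instead.
        return set(inst['host'] for inst in instances
                   if group_name in inst.get('groups', ()))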

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1218878/+subscriptions



[Yahoo-eng-team] [Bug 1244355] Re: needs to use add-apt-repository for cloud-archive:string

2014-03-20 Thread Launchpad Bug Tracker
This bug was fixed in the package cloud-init - 0.6.3-0ubuntu1.11

---
cloud-init (0.6.3-0ubuntu1.11) precise-proposed; urgency=low

  * support apt-add-archive with 'cloud-archive:' format.  (LP: #1244355)
 -- jrw...@xmtp.net (Jay R. Wren)   Thu, 30 Jan 2014 20:57:09 +

** Changed in: cloud-init (Ubuntu Precise)
   Status: Fix Committed => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to cloud-init.
https://bugs.launchpad.net/bugs/1244355

Title:
  needs to use add-apt-repository for cloud-archive:string

Status in Init scripts for use on cloud images:
  Fix Released
Status in “cloud-init” package in Ubuntu:
  Fix Released
Status in “cloud-init” source package in Precise:
  Fix Released

Bug description:
  we added support for cloud-archive:tools and cloud-archive:havana and
  such.

  cloud-init needs to know that it can call 'add-apt-repository' if the
  input is 'cloud-archive:'
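
  A minimal sketch of the dispatch being asked for (illustrative only,
  not the actual cloud-init patch):

    import subprocess

    def add_source(source):
        # 'cloud-archive:havana' and friends should be handed to
        # add-apt-repository, the same way 'ppa:' sources already are.
        if source.startswith(('ppa:', 'cloud-archive:')):
            subprocess.check_call(['add-apt-repository', '--yes', source])
        else:
            write_sources_list_entry(source)  # hypothetical fallback helper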

To manage notifications about this bug go to:
https://bugs.launchpad.net/cloud-init/+bug/1244355/+subscriptions



[Yahoo-eng-team] [Bug 1270654] Re: test_different_fname_concurrency flakey fail

2014-03-20 Thread Alan Pevec
** Also affects: nova/grizzly
   Importance: Undecided
   Status: New

** Changed in: nova/grizzly
   Status: New => Fix Committed

** Changed in: nova/grizzly
Milestone: None => 2013.1.5

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1270654

Title:
  test_different_fname_concurrency flakey fail

Status in OpenStack Compute (Nova):
  Fix Released
Status in OpenStack Compute (nova) grizzly series:
  Fix Committed
Status in OpenStack Compute (nova) havana series:
  Fix Released

Bug description:
  Looks like test_different_fname_concurrency has an intermittent fail

  ft1.9289: nova.tests.virt.libvirt.test_libvirt.CacheConcurrencyTestCase.test_different_fname_concurrency_StringException:
  Empty attachments:
    pythonlogging:''
    stderr
    stdout

  Traceback (most recent call last):
    File "nova/tests/virt/libvirt/test_libvirt.py", line 319, in test_different_fname_concurrency
      self.assertTrue(done2.ready())
    File "/usr/lib/python2.7/unittest/case.py", line 420, in assertTrue
      raise self.failureException(msg)
  AssertionError: False is not true

  Full logs here: http://logs.openstack.org/91/58191/4/check/gate-nova-python27/413d398/testr_results.html.gz
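
  The race, schematically: the test asserts that a second greenthread has
  finished without synchronizing with it first. A wait before the
  assertion (a sketch against eventlet's Event API, not necessarily the
  merged fix) removes the timing dependency:

    done2.wait()                    # block until the event fires
    self.assertTrue(done2.ready())  # now guaranteed to pass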

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1270654/+subscriptions



[Yahoo-eng-team] [Bug 1242597] Re: [OSSA 2013-032] Keystone trust circumvention through EC2-style tokens (CVE-2013-6391)

2014-03-20 Thread Alan Pevec
** Tags removed: in-stable-grizzly

** Also affects: keystone/grizzly
   Importance: Undecided
   Status: New

** Changed in: keystone/grizzly
   Status: New => Fix Committed

** Changed in: keystone/grizzly
Milestone: None => 2013.1.5

** Changed in: keystone/grizzly
   Importance: Undecided => Critical

** Changed in: keystone/grizzly
 Assignee: (unassigned) => Dolph Mathews (dolph)

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to Keystone.
https://bugs.launchpad.net/bugs/1242597

Title:
  [OSSA 2013-032] Keystone trust circumvention through EC2-style tokens
  (CVE-2013-6391)

Status in OpenStack Identity (Keystone):
  Fix Released
Status in Keystone grizzly series:
  Fix Committed
Status in Keystone havana series:
  Fix Released
Status in OpenStack Security Advisories:
  Fix Released

Bug description:
  So I finally got around to investigating the scenario I mentioned in
  https://review.openstack.org/#/c/40444/, and unfortunately it seems
  that the ec2tokens API does indeed provide a way to circumvent the
  role delegation provided by trusts, and obtain all the roles of the
  trustor user, not just those explicitly delegated.

  Steps to reproduce:
  - Trustor creates a trust delegating a subset of roles
  - Trustee gets a token scoped to that trust
  - Trustee creates an ec2-keypair
  - Trustee makes a request to the ec2tokens API, to validate a signature created with the keypair
  - ec2tokens API returns a new token, which is not scoped to the trust and enables access to all the trustor's roles.

  I can provide some test code which demonstrates the issue.
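
  Schematically, the fix has to preserve the trust scope when re-issuing
  a token from an EC2 credential (a sketch with hypothetical helper
  names, not Keystone's actual code):

    def authenticate_ec2(credential):
        trust_id = credential.get('trust_id')
        if trust_id:
            # Keep the trust scope and the delegated role subset;
            # never fall back to the trustor's full role set.
            return issue_token(scope='trust', trust_id=trust_id)
        return issue_token(scope='project',
                           project_id=credential['project_id'])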

To manage notifications about this bug go to:
https://bugs.launchpad.net/keystone/+bug/1242597/+subscriptions



[Yahoo-eng-team] [Bug 1177830] Re: [OSSA 2013-012] Unchecked qcow2 root disk sizes

2014-03-20 Thread Alan Pevec
** Changed in: nova/grizzly
   Status: Fix Released => Fix Committed

** Changed in: nova/grizzly
Milestone: 2013.1.2 => 2013.1.5

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1177830

Title:
  [OSSA 2013-012] Unchecked qcow2 root disk sizes

Status in OpenStack Compute (Nova):
  Fix Released
Status in OpenStack Compute (nova) folsom series:
  Fix Committed
Status in OpenStack Compute (nova) grizzly series:
  Fix Committed
Status in OpenStack Compute (nova) havana series:
  Fix Released
Status in OpenStack Security Advisories:
  Fix Released

Bug description:
  Currently there's no check on the root disk raw sizes. A user can
  create qcow2 images with any size and upload it to glance and spawn
  instances off this file. The raw backing file created in the compute
  node will be small at first due to it being a sparse file, but will
  grow as data is written to it. This can cause the following issues.

  1. Bypass storage quota restrictions
  2. Overrun compute host disk space

  This was reproduced in Devstack using recent trunk d7e4692.
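
  The mitigation, schematically (assumed approach: validate the image's
  virtual size against the flavor before any backing file is created;
  hypothetical names, not the exact patch):

    def check_root_disk_size(virtual_size_bytes, flavor_root_gb):
        # qcow2 headers declare the virtual (fully inflated) size, so it
        # can be checked before the sparse backing file starts growing.
        if flavor_root_gb and virtual_size_bytes > flavor_root_gb * 1024 ** 3:
            raise ValueError('image larger than flavor root disk')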

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1177830/+subscriptions



[Yahoo-eng-team] [Bug 1207064] Re: VMWare : Disabling linked clone does not cache images on the datastore

2014-03-20 Thread Alan Pevec
** Also affects: nova/grizzly
   Importance: Undecided
   Status: New

** Changed in: nova/grizzly
   Status: New => Fix Committed

** Changed in: nova/grizzly
Milestone: None => 2013.1.5

** Changed in: nova/grizzly
   Importance: Undecided => Medium

** Changed in: nova/grizzly
 Assignee: (unassigned) => Gary Kotton (garyk)

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1207064

Title:
  VMWare : Disabling linked clone does not cache images on the datastore

Status in OpenStack Compute (Nova):
  Fix Released
Status in OpenStack Compute (nova) grizzly series:
  Fix Committed
Status in The OpenStack VMwareAPI subTeam:
  Fix Released

Bug description:
  Comment from @dims on code review -
  https://review.openstack.org/#/c/37819/

  Shawn, I was going through the code again for this change and
  realized that when *not* using linked clone we don't use the vmware_base
  dir at all, which means we do a fresh transfer of the image from glance
  to the hypervisor every time. Would it be better to still have the vmdk
  in vmware_base and do a CopyDatastoreFile_Task or CopyVirtualDisk_Task to
  copy it over into the new directory from vmware_base for each new
  guest? In other words, we always cache vmdk(s) in vmware_base to skip
  the network transfer (and hence take the network transfer hit just once,
  the first time an image is needed on a hypervisor).

  Response from Shawn:
  I was dealing with a test-related issue that I finally resolved by
  re-introducing some bad code; comments are inline with the code. Can you
  see your way clear to treating that change as out of scope for this patch?
  That feels like a separate issue from introducing new controls. I would
  prefer to open a separate bug for this, since the feature to turn *off* the
  linked-clone strategy was already present (as a global setting).
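
  The caching flow dims describes, schematically (hypothetical helper
  names, not the driver's actual code):

    import os

    def get_root_vmdk(image_id, base_dir, guest_dir):
        cached = os.path.join(base_dir, image_id + '.vmdk')
        if not datastore_file_exists(cached):
            fetch_image_from_glance(image_id, cached)  # network hit, once
        copy_virtual_disk(cached, guest_dir)           # datastore-local copy
        return guest_dir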

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1207064/+subscriptions



[Yahoo-eng-team] [Bug 1160309] Re: Nova API floating IP error code inconsistent between Nova-Net and Quantum

2014-03-20 Thread Alan Pevec
** Also affects: nova/grizzly
   Importance: Undecided
   Status: New

** Changed in: nova/grizzly
   Status: New => Fix Committed

** Changed in: nova/grizzly
Milestone: None => 2013.1.5

** Changed in: nova/grizzly
   Importance: Undecided => High

** Changed in: nova/grizzly
 Assignee: (unassigned) => Ionuț Arțăriși (mapleoin)

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1160309

Title:
  Nova API floating IP error code inconsistent between Nova-Net and
  Quantum

Status in OpenStack Compute (Nova):
  Fix Released
Status in OpenStack Compute (nova) grizzly series:
  Fix Committed
Status in Tempest:
  Fix Released

Bug description:
  If you ask for details of a floating IP address (GET .../os-floating-ips/<id>)
  that is not allocated to you, then on a system with Nova-networking the
  error code is 404 itemNotFound, whereas on a system with Quantum the
  error code is 500 computeFault.

  The Nova Floating IP API code (api/openstack/compute/contrib/floating_ips.py)
  traps the NotFound exception raised by Nova-Net, but the quantum networking
  code raises a QuantumClientException.

  Not clear to me if the network/quantumv2/api code can just trap that
  exception in this case and translate it to NotFound, or if we need a
  separate exception from the quantum client.
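
  One possible shape for that translation (a sketch of the first option,
  not the merged fix; it assumes the client exception carries an HTTP
  status code):

    try:
        fip = quantum.show_floatingip(fip_id)
    except QuantumClientException as e:
        if getattr(e, 'status_code', None) == 404:
            raise exception.FloatingIpNotFound(id=fip_id)
        raise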


  Devstack with Nova-Net:

  $ curl -k -i http://10.2.1.79:8774/v2/7ac11f64dbf84c548f4161cf408b9799/os-floating-ips/1 -X GET -H "X-Auth-Project-Id: demo" -H "User-Agent: python-novaclient" -H "Accept: application/json" -H "X-Auth-Token: "
  HTTP/1.1 200 OK
  X-Compute-Request-Id: req-c16fdbbe-dcda-4c3b-be46-d70a4fdade5d
  Content-Type: application/json
  Content-Length: 103
  Date: Tue, 26 Mar 2013 10:45:04 GMT

  {"floating_ip": {"instance_id": null, "ip": "172.24.4.225", "fixed_ip": null, "id": 1, "pool": "nova"}}

  $ nova floating-ip-delete 172.24.4.225

  $ curl -k -i http://10.2.1.79:8774/v2/7ac11f64dbf84c548f4161cf408b9799/os-floating-ips/1 -X GET -H "X-Auth-Project-Id: demo" -H "User-Agent: python-novaclient" -H "Accept: application/json" -H "X-Auth-Token: ..."
  HTTP/1.1 404 Not Found
  Content-Length: 76
  Content-Type: application/json; charset=UTF-8
  X-Compute-Request-Id: req-61125f73-8989-4f00-9799-2d22e0ec4d51
  Date: Tue, 26 Mar 2013 10:45:28 GMT

  {"itemNotFound": {"message": "Floating ip not found for id 1", "code": 404}}
  ubuntu@server-1357841265-az-3-region-a-geo-1:/mnt/devstack$


  DevStack with Quantum:

  $ curl -k -i http://10.2.2.114:8774/v2/18b18e535c6149b0bf71a42b46f2ab39/os-floating-ips/c7a3a81e-28c8-4b15-94f4-6ca55e9c437b -X GET -H "X-Auth-Project-Id: demo" -H "User-Agent: python-novaclient" -H "Accept: application/json" -H "X-Auth-Token: ..."
  HTTP/1.1 200 OK
  X-Compute-Request-Id: req-77b52904-6cd9-402d-93a2-124cfdcc86b2
  Content-Type: application/json
  Content-Length: 180
  Date: Tue, 26 Mar 2013 10:36:16 GMT

  {"floating_ip": {"instance_id": "09ffe9c9-0138-4f2f-b11b-c92e8d099b63", "ip": "172.24.4.227", "fixed_ip": "10.0.0.5", "id": "c7a3a81e-28c8-4b15-94f4-6ca55e9c437b", "pool": "nova"}}

  $ nova floating-ip-delete 172.24.4.227

  $ curl -k -i http://10.2.2.114:8774/v2/18b18e535c6149b0bf71a42b46f2ab39/os-floating-ips/c7a3a81e-28c8-4b15-94f4-6ca55e9c437b -X GET -H "X-Auth-Project-Id: demo" -H "User-Agent: python-novaclient" -H "Accept: application/json" -H "X-Auth-Token: ..."

  HTTP/1.1 500 Internal Server Error
  Content-Length: 128
  Content-Type: application/json; charset=UTF-8
  X-Compute-Request-Id: req-720eb948-ae3a-4837-ab95-958d70132aa5
  Date: Tue, 26 Mar 2013 10:39:09 GMT

  {"computeFault": {"message": "The server has either erred or is incapable of performing the requested operation.", "code": 500}}


  From the API log:
  2013-03-25 19:11:00.377 DEBUG nova.api.openstack.wsgi [req-eda934a2-549d-4954-99b9-9dac74df01db 64090786631639 40099433467163] Calling method <bound method FloatingIPController.show of <nova.api.openstack.compute.contrib.floating_ips.FloatingIPController object at 0x45c4ed0>> _process_stack /usr/lib/python2.7/dist-packages/nova/api/openstack/wsgi.py:924
  2013-03-25 19:11:00.510 ERROR nova.api.openstack [req-eda934a2-549d-4954-99b9-9dac74df01db 64090786631639 40099433467163] Caught error: Floating IP 8e9a5dfb-90f5-4fce-a82b-d814fe461d7b could not be found
  2013-03-25 19:11:00.510 65276 TRACE nova.api.openstack Traceback (most recent call last):
  2013-03-25 19:11:00.510 65276 TRACE nova.api.openstack   File "/usr/lib/python2.7/dist-packages/nova/api/openstack/__init__.py", line 81, in __call__
  2013-03-25 19:11:00.510 65276 TRACE nova.api.openstack     return req.get_response(self.application)
  2013-03-25 19:11:00.510 65276 TRACE nova.api.openstack   File "/usr/lib/python2.7/dist-packages/webob/request.py", line 1296, in send
  2013-03-25 19:11:00.510 65276 TRACE nova.api.openstack     application, catch_exc_info=False)

[Yahoo-eng-team] [Bug 1171284] Re: A network can't be disassociated from a project

2014-03-20 Thread Alan Pevec
** Also affects: nova/grizzly
   Importance: Undecided
   Status: New

** Changed in: nova/grizzly
   Status: New => Fix Committed

** Changed in: nova/grizzly
Milestone: None => 2013.1.5

** Changed in: nova/grizzly
   Importance: Undecided => Medium

** Changed in: nova/grizzly
 Assignee: (unassigned) => Matt Riedemann (mriedem)

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1171284

Title:
  A network can't be disassociated from a project

Status in OpenStack Compute (Nova):
  Fix Released
Status in OpenStack Compute (nova) grizzly series:
  Fix Committed

Bug description:
  In vlan mode, when I tried to disassociate a network from a project
  with "nova network-disassociate", I got the following error:

  $ nova network-disassociate f40cf324-15ee-42be-8e1d-b590675aafcc
  ERROR: The server has either erred or is incapable of performing the requested operation. (HTTP 500) (Request-ID: req-cd1c726b-d06e-49d8-a948-24e2e453439a)

  2013-04-22 02:37:53 ERROR [nova.api.openstack] Caught error: 'project'
  Traceback (most recent call last):
    File "/opt/stack/nova/nova/api/openstack/__init__.py", line 81, in __call__
      return req.get_response(self.application)
    File "/usr/local/lib/python2.7/dist-packages/webob/request.py", line 1296, in send
      application, catch_exc_info=False)
    File "/usr/local/lib/python2.7/dist-packages/webob/request.py", line 1260, in call_application
      app_iter = application(self.environ, start_response)
    File "/usr/local/lib/python2.7/dist-packages/webob/dec.py", line 144, in __call__
      return resp(environ, start_response)
    File "/opt/stack/python-keystoneclient/keystoneclient/middleware/auth_token.py", line 451, in __call__
      return self.app(env, start_response)
    File "/usr/local/lib/python2.7/dist-packages/webob/dec.py", line 144, in __call__
      return resp(environ, start_response)
    File "/usr/local/lib/python2.7/dist-packages/webob/dec.py", line 144, in __call__
      return resp(environ, start_response)
    File "/usr/local/lib/python2.7/dist-packages/webob/dec.py", line 144, in __call__
      return resp(environ, start_response)
    File "/usr/local/lib/python2.7/dist-packages/routes/middleware.py", line 131, in __call__
      response = self.app(environ, start_response)
    File "/usr/local/lib/python2.7/dist-packages/webob/dec.py", line 144, in __call__
      return resp(environ, start_response)
    File "/usr/local/lib/python2.7/dist-packages/webob/dec.py", line 130, in __call__
      resp = self.call_func(req, *args, **self.kwargs)
    File "/usr/local/lib/python2.7/dist-packages/webob/dec.py", line 195, in call_func
      return self.func(req, *args, **kwargs)
    File "/opt/stack/nova/nova/api/openstack/wsgi.py", line 899, in __call__
      content_type, body, accept)
    File "/opt/stack/nova/nova/api/openstack/wsgi.py", line 951, in _process_stack
      action_result = self.dispatch(meth, request, action_args)
    File "/opt/stack/nova/nova/api/openstack/wsgi.py", line 1030, in dispatch
      return method(req=request, **action_args)
    File "/opt/stack/nova/nova/api/openstack/compute/contrib/os_networks.py", line 77, in _disassociate_host_and_project
      self.network_api.associate(context, id, host=None, project=None)
    File "/opt/stack/nova/nova/network/api.py", line 90, in wrapped
      return func(self, context, *args, **kwargs)
    File "/opt/stack/nova/nova/network/api.py", line 366, in associate
      project = associations['project']
  KeyError: 'project'
  2013-04-22 02:37:53 INFO [nova.api.openstack] http://192.168.1.100:8774/v2/c619271b17564eed8fbb17570492d2d3/os-networks/f40cf324-15ee-42be-8e1d-b590675aafcc/action returned with HTTP 500
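
  The traceback points at an unconditional dict lookup; a sketch of the
  kind of guard needed in nova/network/api.py (illustrative, not the
  merged fix):

    # 'associations' only contains the keys the caller asked to change,
    # so a bare associations['project'] raises KeyError on disassociate.
    if 'project' in associations:
        project = associations['project']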

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1171284/+subscriptions



[Yahoo-eng-team] [Bug 1171226] Re: VMwareVCDriver: Sparse disk copy error on spawn

2014-03-20 Thread Alan Pevec
** Also affects: nova/grizzly
   Importance: Undecided
   Status: New

** Changed in: nova/grizzly
   Status: New => Fix Committed

** Changed in: nova/grizzly
Milestone: None => 2013.1.5

** Changed in: nova/grizzly
   Importance: Undecided => High

** Changed in: nova/grizzly
 Assignee: (unassigned) => Vui Lam (vui)

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1171226

Title:
  VMwareVCDriver: Sparse disk copy error on spawn

Status in OpenStack Compute (Nova):
  Fix Released
Status in OpenStack Compute (nova) grizzly series:
  Fix Committed
Status in The OpenStack VMwareAPI subTeam:
  Fix Committed

Bug description:
  Not sure if this is a real bug, or just a case of inadequate
  documentation combining with bad error reporting.  I get an exception
  (below) when booting a VM.  The exception happens after glance is done
  streaming the disk image to VC (i.e., I see the image in the
  vmware_source folder in the DataSource) and it prevents the VM from
  actually booting.

  I tried two different ways of adding the image to glance (both as
  'ovf' and as 'bare'), neither of which seemed to make a difference:

  glance add name=Ubuntu-ovf disk_format=vmdk container_format=ovf is_public=true vmware_adaptertype=lsiLogic vmware_ostype=ubuntuGuest vmware_disktype=sparse < ~/ubuntu12.04-sparse.vmdk

  glance add name=Ubuntu-bare disk_format=vmdk container_format=bare is_public=true vmware_adaptertype=lsiLogic vmware_ostype=ubuntuGuest vmware_disktype=sparse < ~/ubuntu12.04-sparse.vmdk

  In both cases, I see this exception (note: there actually seems to be
  a second exception too, perhaps due to improper error handling of the
  first):

  2013-04-21 11:35:07 ERROR [nova.compute.manager] Error: ['Traceback (most recent call last):\n',
  '  File "/opt/stack/nova/nova/compute/manager.py", line 905, in _run_instance\n    set_access_ip=set_access_ip)\n',
  '  File "/opt/stack/nova/nova/compute/manager.py", line 1165, in _spawn\n    LOG.exception(_(\'Instance failed to spawn\'), instance=instance)\n',
  '  File "/usr/lib/python2.7/contextlib.py", line 24, in __exit__\n    self.gen.next()\n',
  '  File "/opt/stack/nova/nova/compute/manager.py", line 1161, in _spawn\n    block_device_info)\n',
  '  File "/opt/stack/nova/nova/virt/vmwareapi/driver.py", line 176, in spawn\n    block_device_info)\n',
  '  File "/opt/stack/nova/nova/virt/vmwareapi/vmops.py", line 398, in spawn\n    _copy_virtual_disk()\n',
  '  File "/opt/stack/nova/nova/virt/vmwareapi/vmops.py", line 340, in _copy_virtual_disk\n    self._session._wait_for_task(instance[\'uuid\'], vmdk_copy_task)\n',
  '  File "/opt/stack/nova/nova/virt/vmwareapi/driver.py", line 558, in _wait_for_task\n    ret_val = done.wait()\n',
  '  File "/usr/local/lib/python2.7/dist-packages/eventlet/event.py", line 116, in wait\n    return hubs.get_hub().switch()\n',
  '  File "/usr/local/lib/python2.7/dist-packages/eventlet/hubs/hub.py", line 187, in switch\n    return self.greenlet.switch()\n',
  'NovaException: The requested operation is not implemented by the server.\n']
  2013-04-21 11:35:07 DEBUG [nova.openstack.common.rpc.amqp] Making synchronous call on conductor ...
  2013-04-21 11:35:07 DEBUG [nova.openstack.common.rpc.amqp] MSG_ID is 2318255c5a4f4e5783cefb3cfde9e563
  2013-04-21 11:35:07 DEBUG [nova.openstack.common.rpc.amqp] UNIQUE_ID is f710f7acfd774af3ba1aa91515b1fd05.
  2013-04-21 11:35:10 WARNING [nova.virt.vmwareapi.driver] Task [CopyVirtualDisk_Task] (returnval){
     value = "task-925"
     _type = "Task"
   } status: error The requested operation is not implemented by the server.
  2013-04-21 11:35:10 WARNING [nova.virt.vmwareapi.driver] In vmwareapi:_poll_task, Got this error Trying to re-send() an already-triggered event.
  2013-04-21 11:35:10 ERROR [nova.utils] in fixed duration looping call
  Traceback (most recent call last):
    File "/opt/stack/nova/nova/utils.py", line 595, in _inner
      self.f(*self.args, **self.kw)
    File "/opt/stack/nova/nova/virt/vmwareapi/driver.py", line 584, in _poll_task
      done.send_exception(excep)
    File "/usr/local/lib/python2.7/dist-packages/eventlet/event.py", line 208, in send_exception
      return self.send(None, args)
    File "/usr/local/lib/python2.7/dist-packages/eventlet/event.py", line 150, in send
      assert self._result is NOT_USED, 'Trying to re-send() an already-triggered event.'
  AssertionError: Trying to re-send() an already-triggered event.

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1171226/+subscriptions



[Yahoo-eng-team] [Bug 1158807] Re: Qpid SSL protocol

2014-03-20 Thread Alan Pevec
** Also affects: neutron
   Importance: Undecided
   Status: New

** Changed in: neutron
   Status: New => Invalid

** Also affects: neutron/grizzly
   Importance: Undecided
   Status: New

** Changed in: neutron/grizzly
   Status: New => Fix Committed

** Changed in: neutron/grizzly
 Assignee: (unassigned) => Xavier Queralt (xqueralt)

** Changed in: neutron/grizzly
Milestone: None => 2013.1.5

** Changed in: neutron/grizzly
   Importance: Undecided => High

** Changed in: cinder/grizzly
Milestone: None => 2013.1.5

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1158807

Title:
  Qpid SSL protocol

Status in Cinder:
  Invalid
Status in Cinder grizzly series:
  Fix Committed
Status in OpenStack Neutron (virtual network service):
  Invalid
Status in neutron grizzly series:
  Fix Committed
Status in OpenStack Compute (Nova):
  Invalid
Status in OpenStack Compute (nova) grizzly series:
  Fix Committed
Status in Oslo - a Library of Common OpenStack Code:
  Fix Released

Bug description:
  By default, TCP is used as the transport for QPID connections. If you
  would like to enable SSL, there is a flag 'qpid_protocol = ssl'
  available in nova.conf. However, the python-qpid client expects a
  transport type instead of a protocol. It seems to be a bug:

  Solution:
  (https://github.com/openstack/nova/blob/master/nova/openstack/common/rpc/impl_qpid.py#L323)

  WRONG:   self.connection.protocol = self.conf.qpid_protocol
  CORRECT: self.connection.transport = self.conf.qpid_protocol

  Regards,
  JuanFra.

To manage notifications about this bug go to:
https://bugs.launchpad.net/cinder/+bug/1158807/+subscriptions



[Yahoo-eng-team] [Bug 1295381] [NEW] VMware: resize operates on orig VM and not clone

2014-03-20 Thread Eric Brown
Public bug reported:

The resize operation when using the VCenter driver ends up resizing the
original VM and not the newly cloned VM.

To recreate:
1) Create a new VM from horizon using the default debian image.  I use a flavor of nano.
2) Wait for it to complete and go active.
3) Click on resize and choose a flavor larger than what you used originally.  I then usually choose a flavor of small.
4) Wait for horizon to prompt you to confirm or revert the migration.
5) Switch over to vSphere Web Client.  Notice two VMs for your newly created instance: one with a UUID name and the other with a UUID-orig name, -orig indicating the original.
6) Notice the original has been resized (cpu and mem are increased, disk is not, but that's a separate bug) and not the new clone.  This is problem #1.
7) Now hit confirm in horizon.  It works, but the logs contain a warning: "The attempted operation cannot be performed in the current state (Powered on)."  I suspect it's attempting to destroy the -orig VM, but the -orig was the VM resized and powered on, so it fails.  This is problem #2.
Results in a leaked VM.

** Affects: nova
 Importance: High
 Assignee: Sidharth Surana (ssurana)
 Status: New


** Tags: icehouse-rc-potential vmware

** Changed in: nova
 Assignee: (unassigned) => Sidharth Surana (ssurana)

** Description changed:

  The resize operation when using the VCenter driver ends up resizing the
  original VM and not the newly cloned VM.
  
  To recreate:
  1) create a new VM from horizon using default debian image.  I use a flavor of nano.
  2) wait for it to complete and go active
  3) click on resize and choose a flavor larger than what you used originally.  i then usually choose a flavor of small.
  4) wait for horizon to prompt you to confirm or revert the migration.
- 5) Switch over to vSphere Web Client.  Notice two VMs for your newly created instance.  One with a UUID name and the other with a UUID-orig name.  -orig indicating the original.  
+ 5) Switch over to vSphere Web Client.  Notice two VMs for your newly created instance.  One with a UUID name and the other with a UUID-orig name.  -orig indicating the original.
  6) Notice the original has be resized (cpu and mem are increased, disk is not, but that's a separate bug) and not the new clone.  This is problem #1.
  7) Now hit confirm in horizon.  It works, but the logs contain a warning: "The attempted operation cannot be performed in the current state (Powered on)."  I suspect its attempting to destroy the orig VM, but the orig was the VM resized and powered on, so it fails.  This is problem #2.
+ Results in a leaked VM.

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1295381

Title:
  VMware: resize operates on orig VM and not clone

Status in OpenStack Compute (Nova):
  New

Bug description:
  The resize operation when using the VCenter driver ends up resizing
  the original VM and not the newly cloned VM.

  To recreate:
  1) Create a new VM from horizon using the default debian image.  I use a flavor of nano.
  2) Wait for it to complete and go active.
  3) Click on resize and choose a flavor larger than what you used originally.  I then usually choose a flavor of small.
  4) Wait for horizon to prompt you to confirm or revert the migration.
  5) Switch over to vSphere Web Client.  Notice two VMs for your newly created instance: one with a UUID name and the other with a UUID-orig name, -orig indicating the original.
  6) Notice the original has been resized (cpu and mem are increased, disk is not, but that's a separate bug) and not the new clone.  This is problem #1.
  7) Now hit confirm in horizon.  It works, but the logs contain a warning: "The attempted operation cannot be performed in the current state (Powered on)."  I suspect it's attempting to destroy the -orig VM, but the -orig was the VM resized and powered on, so it fails.  This is problem #2.
  Results in a leaked VM.

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1295381/+subscriptions



[Yahoo-eng-team] [Bug 1260423] Re: Email shouldn't be a mandatory attribute

2014-03-20 Thread Alan Pevec
** Tags removed: in-stable-grizzly

** Also affects: horizon/grizzly
   Importance: Undecided
   Status: New

** Changed in: horizon/grizzly
   Status: New => Fix Committed

** Changed in: horizon/grizzly
Milestone: None => 2013.1.5

** Changed in: horizon/grizzly
   Importance: Undecided => Medium

** Changed in: horizon/grizzly
 Assignee: (unassigned) => Bernhard M. Wiedemann (ubuntubmw)

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1260423

Title:
  Email shouldn't be a mandatory attribute

Status in OpenStack Dashboard (Horizon):
  Fix Released
Status in OpenStack Dashboard (Horizon) grizzly series:
  Fix Committed
Status in OpenStack Dashboard (Horizon) havana series:
  Fix Released

Bug description:
  When using an LDAP backend, it's possible that a user won't have the
  email attribute defined; however, it should still be possible to edit
  the other fields.

  Steps to reproduce (in an environment with keystone using an LDAP backend):
  1. Log in as admin
  2. Go to the Users dashboard
  3. Select a user that doesn't have an email defined

  Expected result:
  4. Edit user modal opens

  Actual result:
  4. Error 500

  Traceback:
  File "/usr/lib/python2.7/site-packages/django/views/generic/edit.py" in get
    154. form = self.get_form(form_class)
  File "/opt/stack/horizon/openstack_dashboard/wsgi/../../horizon/forms/views.py" in get_form
    82. return form_class(self.request, **self.get_form_kwargs())
  File "/usr/lib/python2.7/site-packages/django/views/generic/edit.py" in get_form_kwargs
    41. kwargs = {'initial': self.get_initial()}
  File "/opt/stack/horizon/openstack_dashboard/wsgi/../../openstack_dashboard/dashboards/admin/users/views.py" in get_initial
    103. 'email': user.email}
  File "/opt/stack/python-keystoneclient/keystoneclient/base.py" in __getattr__
    425. raise AttributeError(k)

  Exception Type: AttributeError at /admin/users/e005aa43475b403c8babdff86ea27c37/update/
  Exception Value: email
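
  The shape of the fix, sketched (tolerate the missing attribute instead
  of assuming every backend supplies it):

    # get_initial() in the admin users view, defensively:
    initial = {'name': user.name,
               'email': getattr(user, 'email', '')}  # LDAP users may lack it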

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1260423/+subscriptions



[Yahoo-eng-team] [Bug 1211338] Re: Direct vs. direct in impl_qpid

2014-03-20 Thread Alan Pevec
** Changed in: neutron/grizzly
   Importance: Undecided => High

** Changed in: neutron/grizzly
 Assignee: (unassigned) => Assaf Muller (amuller)

** Changed in: nova/grizzly
   Status: New => Won't Fix

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1211338

Title:
  Direct vs. direct in impl_qpid

Status in Cinder:
  Fix Released
Status in OpenStack Neutron (virtual network service):
  Fix Released
Status in neutron grizzly series:
  Fix Committed
Status in OpenStack Compute (Nova):
  Fix Released
Status in OpenStack Compute (nova) grizzly series:
  Won't Fix
Status in Oslo - a Library of Common OpenStack Code:
  Fix Released

Bug description:
  impl_qpid.py has {"type": "Direct"} (with a capital D) in one place,
  "direct" (lowercase) in others.  It appears that qpid is case-
  sensitive about exchange types, so the version with the capital D is
  invalid.  This ends up causing qpid to throw an error like:

    File "/usr/lib/python2.6/site-packages/qpid/messaging/endpoints.py",
    line 567, in _ewait\n    self.check_error()\n', '  File
    "/usr/lib/python2.6/site-packages/qpid/messaging/endpoints.py",
    line 556, in check_error\n    raise self.error\n', 'NotFound:
    not-found: Exchange type not implemented: Direct
    (qpid/broker/SessionAdapter.cpp:117)(404)\n']

  It should be a one-character fix.
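
  Spelled out, the one-character fix is lowercasing the exchange type
  declared in impl_qpid.py:

    {"type": "direct"}  # was {"type": "Direct"}; qpid matches exchange
                        # types case-sensitively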

To manage notifications about this bug go to:
https://bugs.launchpad.net/cinder/+bug/1211338/+subscriptions



[Yahoo-eng-team] [Bug 1294132] Re: Volume status set to error extending when new size exceeds quota

2014-03-20 Thread John Griffith
So I ran some tests on this: as long as the backend doesn't fail to do
the extend, the quota is checked up front and the API responds with an
error before ever changing state or attempting the resize.

This is what I would expect.  If the command passes the quota check and
is sent to the driver, but the driver fails (i.e. not enough space), then
it raises the error I think you're seeing and sets the status to error.
This is what I would expect and I think is appropriate behavior.  It's in
line with how all of the other async calls work.
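
The flow described above, schematically (illustrative pseudocode with
hypothetical helpers, not Cinder's actual source):

  def extend_volume(volume, new_size):
      reserve_quota(new_size - volume.size)  # API layer: fails fast,
                                             # state left untouched
      try:
          driver.extend_volume(volume, new_size)  # async; backend failures
      except Exception:                           # land here
          volume.status = 'error_extending'
          raise
      volume.status = 'available'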

** Changed in: cinder
   Status: In Progress => Invalid

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1294132

Title:
  Volume status set to error extending when new size exceeds quota

Status in Cinder:
  Invalid
Status in OpenStack Dashboard (Horizon):
  In Progress

Bug description:
  extend_volume in cinder.volume.manager should not set the status to
  error_extending when the quota was exceeded. The status should still
  be available

To manage notifications about this bug go to:
https://bugs.launchpad.net/cinder/+bug/1294132/+subscriptions



[Yahoo-eng-team] [Bug 1179348] Re: Testcase 'test_create_subnet_with_two_host_routes' failed

2014-03-20 Thread Alan Pevec
** Also affects: neutron/grizzly
   Importance: Undecided
   Status: New

** Changed in: neutron/grizzly
   Status: New => Fix Committed

** Changed in: neutron/grizzly
Milestone: None => 2013.1.5

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1179348

Title:
  Testcase 'test_create_subnet_with_two_host_routes' failed

Status in OpenStack Neutron (virtual network service):
  Fix Released
Status in neutron grizzly series:
  Fix Committed

Bug description:
  Traceback (most recent call last):
    File "/home/soulxu/work-code/openstack/quantum/quantum/tests/unit/test_db_plugin.py", line 3286, in test_create_subnet_with_two_host_routes
      host_routes=host_routes)
    File "/home/soulxu/work-code/openstack/quantum/quantum/tests/unit/test_db_plugin.py", line 2301, in _test_create_subnet
      self.assertEqual(subnet['subnet'][k], keys[k])
  MismatchError: !=:
  reference = [{'destination': '12.0.0.0/8', 'nexthop': '4.3.2.1'},
               {'destination': '135.207.0.0/16', 'nexthop': '1.2.3.4'}]
  actual    = [{'destination': '135.207.0.0/16', 'nexthop': '1.2.3.4'},
               {'destination': '12.0.0.0/8', 'nexthop': '4.3.2.1'}]
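
  The mismatch is pure ordering: the host routes come back in an
  unspecified order. An order-insensitive assertion (a sketch, not
  necessarily the merged fix) would avoid the flake:

    self.assertEqual(sorted(subnet['subnet'][k]), sorted(keys[k]))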

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1179348/+subscriptions



[Yahoo-eng-team] [Bug 1252806] Re: unable to add allow all ingress traffic security group rule

2014-03-20 Thread Alan Pevec
** Also affects: neutron/grizzly
   Importance: Undecided
   Status: New

** Changed in: neutron/grizzly
   Status: New => Fix Committed

** Changed in: neutron/grizzly
Milestone: None => 2013.1.5

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1252806

Title:
  unable to add allow all ingress traffic security group rule

Status in OpenStack Neutron (virtual network service):
  Fix Released
Status in neutron grizzly series:
  Fix Committed
Status in neutron havana series:
  Fix Released

Bug description:
  The following rule is unable to be installed:

  $ neutron security-group-rule-create --direction ingress default
  409-{u'NeutronError': {u'message': u'Security group rule already exists. Group id is 29dc1837-75d3-457a-8a90-14f4b6ea6db9.', u'type': u'SecurityGroupRuleExists', u'detail': u''}}

  The reason for this is that when the db query is done it passes this in as a filter:

  {'tenant_id': [u'577a2f0c78fb4e36b76902977a5c1708'], 'direction':
  [u'ingress'], 'ethertype': ['IPv4'], 'security_group_id':
  [u'0fb10163-81b2-4538-bd11-dbbd3878db51']}

  and the remote_group_id is wildcarded, so it matches this rule:

  [{'direction': u'ingress',
    'ethertype': u'IPv4',
    'id': u'8d5c3429-f4ef-4258-8140-5ff3247f9dd6',
    'port_range_max': None,
    'port_range_min': None,
    'protocol': None,
    'remote_group_id': None,
    'remote_ip_prefix': None,
    'security_group_id': u'0fb10163-81b2-4538-bd11-dbbd3878db51',
    'tenant_id': u'577a2f0c78fb4e36b76902977a5c1708'}]
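
  Schematically, the duplicate check needs to compare every rule field
  rather than letting absent keys act as wildcards (illustration only,
  not the merged fix):

    RULE_KEYS = ('direction', 'ethertype', 'protocol', 'port_range_min',
                 'port_range_max', 'remote_group_id', 'remote_ip_prefix',
                 'security_group_id', 'tenant_id')

    def is_duplicate(new_rule, existing_rule):
        return all(new_rule.get(k) == existing_rule.get(k) for k in RULE_KEYS)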

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1252806/+subscriptions



[Yahoo-eng-team] [Bug 1179370] Re: Testcase 'test_db_plugin.TestPortsV2.test_range_allocation' failed random

2014-03-20 Thread Alan Pevec
** Also affects: neutron/grizzly
   Importance: Undecided
   Status: New

** Changed in: neutron/grizzly
   Status: New => Fix Committed

** Changed in: neutron/grizzly
Milestone: None => 2013.1.5

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1179370

Title:
  Testcase 'test_db_plugin.TestPortsV2.test_range_allocation' failed
  random

Status in OpenStack Neutron (virtual network service):
  Fix Released
Status in neutron grizzly series:
  Fix Committed

Bug description:
  2013-05-13 12:04:53,577 ERROR [quantum.api.v2.resource] delete failed
  Traceback (most recent call last):
    File "/home/soulxu/work-code/openstack/quantum/quantum/api/v2/resource.py", line 82, in resource
      result = method(request=request, **args)
    File "/home/soulxu/work-code/openstack/quantum/quantum/api/v2/base.py", line 407, in delete
      obj_deleter(request.context, id, **kwargs)
    File "/home/soulxu/work-code/openstack/quantum/quantum/db/db_base_plugin_v2.py", line 1263, in delete_subnet
      raise q_exc.SubnetInUse(subnet_id=id)
  SubnetInUse: Unable to complete operation on subnet 3f0b8b59-1084-4e59-bf0c-56a0eefb24d6. One or more ports have an IP allocation from this subnet.
  2013-05-13 12:04:53,587 ERROR [quantum.api.v2.resource] delete failed
  Traceback (most recent call last):
    File "/home/soulxu/work-code/openstack/quantum/quantum/api/v2/resource.py", line 82, in resource
      result = method(request=request, **args)
    File "/home/soulxu/work-code/openstack/quantum/quantum/api/v2/base.py", line 407, in delete
      obj_deleter(request.context, id, **kwargs)
    File "/home/soulxu/work-code/openstack/quantum/quantum/db/db_base_plugin_v2.py", line 1021, in delete_network
      raise q_exc.NetworkInUse(net_id=id)
  NetworkInUse: Unable to complete operation on network 276b6fb1-158f-4b30-ac19-6e959e9147d1. There are one or more ports still in use on the network.
  }}}

  Traceback (most recent call last):
    File "/home/soulxu/work-code/openstack/quantum/quantum/tests/unit/test_db_plugin.py", line 1407, in test_range_allocation
      print res
    File "/usr/lib/python2.7/contextlib.py", line 35, in __exit__
      self.gen.throw(type, value, traceback)
    File "/home/soulxu/work-code/openstack/quantum/quantum/tests/unit/test_db_plugin.py", line 566, in subnet
      self._delete('subnets', subnet['subnet']['id'])
    File "/usr/lib/python2.7/contextlib.py", line 35, in __exit__
      self.gen.throw(type, value, traceback)
    File "/home/soulxu/work-code/openstack/quantum/quantum/tests/unit/test_db_plugin.py", line 537, in network
      self._delete('networks', network['network']['id'])
    File "/home/soulxu/work-code/openstack/quantum/quantum/tests/unit/test_db_plugin.py", line 455, in _delete
      self.assertEqual(res.status_int, expected_code)
  MismatchError: 409 != 204

  ----------------------------------------------------------------------
  Ran 1 test in 0.220s

  FAILED (failures=1)

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1179370/+subscriptions



[Yahoo-eng-team] [Bug 1211338] Re: Direct vs. direct in impl_qpid

2014-03-20 Thread Alan Pevec
** Also affects: neutron/grizzly
   Importance: Undecided
   Status: New

** Changed in: neutron/grizzly
   Status: New => Fix Committed

** Changed in: neutron/grizzly
Milestone: None => 2013.1.5

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1211338

Title:
  Direct vs. direct in impl_qpid

Status in Cinder:
  Fix Released
Status in OpenStack Neutron (virtual network service):
  Fix Released
Status in neutron grizzly series:
  Fix Committed
Status in OpenStack Compute (Nova):
  Fix Released
Status in OpenStack Compute (nova) grizzly series:
  New
Status in Oslo - a Library of Common OpenStack Code:
  Fix Released

Bug description:
  impl_qpid.py has {"type": "Direct"} (with a capital D) in one place,
  "direct" (lowercase) in others.  It appears that qpid is case-
  sensitive about exchange types, so the version with the capital D is
  invalid.  This ends up causing qpid to throw an error like:

    File "/usr/lib/python2.6/site-packages/qpid/messaging/endpoints.py",
    line 567, in _ewait\n    self.check_error()\n', '  File
    "/usr/lib/python2.6/site-packages/qpid/messaging/endpoints.py",
    line 556, in check_error\n    raise self.error\n', 'NotFound:
    not-found: Exchange type not implemented: Direct
    (qpid/broker/SessionAdapter.cpp:117)(404)\n']

  It should be a one-character fix.

To manage notifications about this bug go to:
https://bugs.launchpad.net/cinder/+bug/1211338/+subscriptions



[Yahoo-eng-team] [Bug 1209134] Re: instance consoleauth expired tokens need to be removed from the cache

2014-03-20 Thread Alan Pevec
** Tags removed: in-stable-grizzly

** Changed in: nova
   Status: In Progress => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1209134

Title:
  instance consoleauth expired tokens need to be removed from the cache

Status in OpenStack Compute (Nova):
  Fix Released
Status in OpenStack Compute (nova) grizzly series:
  Fix Released

Bug description:
  Instance consoleauth tokens are stored in the in-memory cache or in
  memcached; the key is the instance uuid and the value holds all of the
  instance's tokens. The tokens aren't deleted before the instance is
  deleted, even though the tokens have expired. Therefore, there is the
  possibility that the value reaches the max size limit when using
  memcached.
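
  A sketch of the pruning being asked for (hypothetical helper; the real
  code lives in nova/consoleauth/manager.py):

    def _prune_expired(tokens, is_expired):
        # Drop expired tokens before appending a new one, so the value
        # stored under the instance uuid cannot grow without bound.
        return [t for t in tokens if not is_expired(t)]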

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1209134/+subscriptions



[Yahoo-eng-team] [Bug 1277790] Re: boto 2.25 causing unit tests to fail

2014-03-20 Thread Alan Pevec
** Changed in: nova/grizzly
   Status: Fix Committed => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1277790

Title:
  boto 2.25 causing unit tests to fail

Status in OpenStack Compute (Nova):
  Fix Released
Status in OpenStack Compute (nova) grizzly series:
  Fix Released
Status in OpenStack Compute (nova) havana series:
  Fix Released

Bug description:
  A new version of boto was released (2.25) that causes nova unit tests
  to fail.

  http://logs.openstack.org/03/71503/1/gate/gate-nova-python27/4e66adf/console.html

  
  FAIL: nova.tests.test_objectstore.S3APITestCase.test_unknown_bucket
  tags: worker-1
  ----------------------------------------------------------------------
  Empty attachments:
    stderr
    stdout

  pythonlogging:'': {{{
  INFO [nova.wsgi] S3 Objectstore listening on 127.0.0.1:59755
  INFO [nova.S3 Objectstore.wsgi.server] (7108) wsgi starting up on http://127.0.0.1:59755/
  INFO [nova.S3 Objectstore.wsgi.server] 127.0.0.1 "HEAD /falalala/ HTTP/1.1" status: 200 len: 115 time: 0.0005140
  INFO [nova.wsgi] Stopping WSGI server.
  }}}

  Traceback (most recent call last):
    File "nova/tests/test_objectstore.py", line 133, in test_unknown_bucket
      bucket_name)
    File "/home/jenkins/workspace/gate-nova-python27/.tox/py27/local/lib/python2.7/site-packages/testtools/testcase.py", line 393, in assertRaises
      self.assertThat(our_callable, matcher)
    File "/home/jenkins/workspace/gate-nova-python27/.tox/py27/local/lib/python2.7/site-packages/testtools/testcase.py", line 406, in assertThat
      raise mismatch_error
  MismatchError: <bound method S3Connection.get_bucket of S3Connection:127.0.0.1> returned <Bucket: falalala>

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1277790/+subscriptions



[Yahoo-eng-team] [Bug 1273455] Re: stevedore 0.14 changes _load_plugins parameter list, mocking breaks

2014-03-20 Thread Alan Pevec
** Changed in: nova/grizzly
   Status: Fix Committed => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1273455

Title:
  stevedore 0.14 changes _load_plugins parameter list, mocking breaks

Status in OpenStack Telemetry (Ceilometer):
  Fix Released
Status in Ironic (Bare Metal Provisioning):
  Fix Released
Status in OpenStack Compute (Nova):
  Fix Released
Status in OpenStack Compute (nova) grizzly series:
  Fix Released
Status in OpenStack Compute (nova) havana series:
  Fix Released
Status in Messaging API for OpenStack:
  Fix Released
Status in Manage plugins for Python applications:
  Fix Released

Bug description:
  In stevedore 0.14 the signature of _load_plugins changed; it now takes
  an extra parameter. The nova and ceilometer unit tests mocked the old
  signature, which is causing breaks in the gate.
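
  Why the stubs broke, schematically: a hand-rolled stub pins an exact
  positional signature, while mock's autospec tracks whatever the
  installed library exposes (illustrative, not the actual test change):

    import mock
    from stevedore import extension

    # autospec=True derives the replacement's signature from the real
    # _load_plugins, so an extra upstream parameter no longer breaks it.
    with mock.patch.object(extension.ExtensionManager, '_load_plugins',
                           autospec=True) as load_plugins:
        load_plugins.return_value = []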

To manage notifications about this bug go to:
https://bugs.launchpad.net/ceilometer/+bug/1273455/+subscriptions



[Yahoo-eng-team] [Bug 1206081] Re: [OSSA 2013-029] Unchecked qcow2 root disk sizes DoS

2014-03-20 Thread Alan Pevec
** Changed in: nova/grizzly
   Status: Fix Committed => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1206081

Title:
  [OSSA 2013-029] Unchecked qcow2 root disk sizes DoS

Status in OpenStack Compute (Nova):
  Fix Released
Status in OpenStack Compute (nova) folsom series:
  Won't Fix
Status in OpenStack Compute (nova) grizzly series:
  Fix Released
Status in OpenStack Compute (nova) havana series:
  Fix Released
Status in OpenStack Security Advisories:
  Fix Released

Bug description:
  When doing QA for SUSE on bug 1177830
  I found that the fix is incomplete,
  because it assumed that the cached image would be mostly sparse.

  However, I can easily create non-sparse, small, compressed qcow2 images
  with

  perl -e 'for(1..11000){print "x" x 1024000}' > img
  qemu-img convert -c -O qcow2 img img.qcow2
  glance image-create --name=11gb --is-public=True --disk-format=qcow2 --container-format=bare < img.qcow2
  nova boot --image 11gb --flavor m1.small testvm

  which (in Grizzly and Essex) results in one (or two in Essex) 11GB large files being created in /var/lib/nova/instances/_base/,
  still allowing attackers to fill up the disk space of compute nodes,
  because the size check is only done after the uncompressing / caching.
To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1206081/+subscriptions



[Yahoo-eng-team] [Bug 1177830] Re: [OSSA 2013-012] Unchecked qcow2 root disk sizes

2014-03-20 Thread Alan Pevec
** Changed in: nova/grizzly
   Status: Fix Committed => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1177830

Title:
  [OSSA 2013-012] Unchecked qcow2 root disk sizes

Status in OpenStack Compute (Nova):
  Fix Released
Status in OpenStack Compute (nova) folsom series:
  Fix Committed
Status in OpenStack Compute (nova) grizzly series:
  Fix Released
Status in OpenStack Compute (nova) havana series:
  Fix Released
Status in OpenStack Security Advisories:
  Fix Released

Bug description:
  Currently there's no check on the root disk raw sizes. A user can
  create qcow2 images with any size and upload it to glance and spawn
  instances off this file. The raw backing file created in the compute
  node will be small at first due to it being a sparse file, but will
  grow as data is written to it. This can cause the following issues.

  1. Bypass storage quota restrictions
  2. Overrun compute host disk space

  This was reproduced in Devstack using recent trunk d7e4692.

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1177830/+subscriptions



[Yahoo-eng-team] [Bug 1284666] Re: test_reclaim_queued_deletes race fail on stable/grizzly

2014-03-20 Thread Alan Pevec
** Changed in: nova/grizzly
   Status: Fix Committed => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1284666

Title:
  test_reclaim_queued_deletes race fail on stable/grizzly

Status in OpenStack Compute (Nova):
  Invalid
Status in OpenStack Compute (nova) grizzly series:
  Fix Released

Bug description:
  http://logs.openstack.org/20/76020/1/check/gate-nova-python27/0109f3d/console.html

  The test was backported from havana here:
  https://review.openstack.org/#/c/33822/

  The failing test:  http://paste.openstack.org/show/69434/

  There is probably some other fix for the race in havana or icehouse
  that changed nova.compute.manager._reclaim_queued_deletes, so I'm
  looking for that now; maybe something needs to be backported to
  stable/grizzly as well.

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1284666/+subscriptions



[Yahoo-eng-team] [Bug 1268631] Re: Unit tests failing with raise UnknownMethodCallError('management_url')

2014-03-20 Thread Alan Pevec
** Changed in: horizon/grizzly
   Status: Fix Committed => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1268631

Title:
  Unit tests failing with raise UnknownMethodCallError('management_url')

Status in OpenStack Dashboard (Horizon):
  Fix Released
Status in OpenStack Dashboard (Horizon) grizzly series:
  Fix Released
Status in OpenStack Dashboard (Horizon) havana series:
  Fix Released

Bug description:
  A number of unit tests are failing for every review, likely related to
  the release of keystoneclient 0.4.2:

  <fungi> i think python-keystoneclient==0.4.2 may have just broken horizon
  <fungi> looks like all python unit test runs for horizon are now failing on keystone-specific tests as of the last few minutes, and the only change in the pip freeze output for the tests is python-keystoneclient==0.4.2 instead of 0.4.1
  <bknudson> fungi: UnknownMethodCallError: Method called is not a member of the object: management_url ?
  <fungi> horizon will presumably need patching to work around that
  <bknudson> Looks like the horizon test is trying to create a mock keystoneclient and creating the mock fails for some reason.

  
  2014-01-13 14:42:38.747 | ======================================================================
  2014-01-13 14:42:38.747 | FAIL: test_get_default_role (openstack_dashboard.test.api_tests.keystone_tests.RoleAPITests)
  2014-01-13 14:42:38.748 | ----------------------------------------------------------------------
  2014-01-13 14:42:38.748 | Traceback (most recent call last):
  2014-01-13 14:42:38.748 |   File "/home/jenkins/workspace/gate-horizon-python27/openstack_dashboard/test/api_tests/keystone_tests.py", line 77, in test_get_default_role
  2014-01-13 14:42:38.748 |     keystoneclient = self.stub_keystoneclient()
  2014-01-13 14:42:38.748 |   File "/home/jenkins/workspace/gate-horizon-python27/openstack_dashboard/test/helpers.py", line 306, in stub_keystoneclient
  2014-01-13 14:42:38.748 |     self.keystoneclient = self.mox.CreateMock(keystone_client.Client)
  2014-01-13 14:42:38.748 |   File "/home/jenkins/workspace/gate-horizon-python27/.tox/py27/local/lib/python2.7/site-packages/mox.py", line 258, in CreateMock
  2014-01-13 14:42:38.748 |     new_mock = MockObject(class_to_mock, attrs=attrs)
  2014-01-13 14:42:38.748 |   File "/home/jenkins/workspace/gate-horizon-python27/.tox/py27/local/lib/python2.7/site-packages/mox.py", line 556, in __init__
  2014-01-13 14:42:38.749 |     attr = getattr(class_to_mock, method)
  2014-01-13 14:42:38.749 |   File "/home/jenkins/workspace/gate-horizon-python27/.tox/py27/local/lib/python2.7/site-packages/mox.py", line 608, in __getattr__
  2014-01-13 14:42:38.749 |     raise UnknownMethodCallError(name)
  2014-01-13 14:42:38.749 | UnknownMethodCallError: Method called is not a member of the object: management_url
  2014-01-13 14:42:38.749 |   raise UnknownMethodCallError('management_url')
  2014-01-13 14:42:38.749 |
  2014-01-13 14:42:38.749 |
  2014-01-13 14:42:38.749 | ======================================================================
  2014-01-13 14:42:38.749 | FAIL: Tests api.keystone.remove_tenant_user
  2014-01-13 14:42:38.749 | ----------------------------------------------------------------------
  2014-01-13 14:42:38.750 | Traceback (most recent call last):
  2014-01-13 14:42:38.750 |   File "/home/jenkins/workspace/gate-horizon-python27/openstack_dashboard/test/api_tests/keystone_tests.py", line 61, in test_remove_tenant_user
  2014-01-13 14:42:38.750 |     keystoneclient = self.stub_keystoneclient()
  2014-01-13 14:42:38.750 |   File "/home/jenkins/workspace/gate-horizon-python27/openstack_dashboard/test/helpers.py", line 306, in stub_keystoneclient
  2014-01-13 14:42:38.750 |     self.keystoneclient = self.mox.CreateMock(keystone_client.Client)
  2014-01-13 14:42:38.750 |   File "/home/jenkins/workspace/gate-horizon-python27/.tox/py27/local/lib/python2.7/site-packages/mox.py", line 258, in CreateMock
  2014-01-13 14:42:38.750 |     new_mock = MockObject(class_to_mock, attrs=attrs)
  2014-01-13 14:42:38.750 |   File "/home/jenkins/workspace/gate-horizon-python27/.tox/py27/local/lib/python2.7/site-packages/mox.py", line 556, in __init__
  2014-01-13 14:42:38.750 |     attr = getattr(class_to_mock, method)
  2014-01-13 14:42:38.750 |   File "/home/jenkins/workspace/gate-horizon-python27/.tox/py27/local/lib/python2.7/site-packages/mox.py", line 608, in __getattr__
  2014-01-13 14:42:38.750 |     raise UnknownMethodCallError(name)
  2014-01-13 14:42:38.751 | UnknownMethodCallError: Method called is not a member of the object: management_url
  2014-01-13 14:42:38.751 |   raise UnknownMethodCallError('management_url')

  
  Examples:
  https://jenkins04.openstack.org/job/gate-horizon-python27/18/console
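
  A hedged sketch of the workaround direction from the IRC discussion
  above: stub the keystone client with mock instead of mox. The helper
  name is illustrative, not the merged Horizon patch. mock builds the
  stub from dir() of the class, so it does not trigger the
  management_url attribute lookup that made mox's CreateMock fail.

    import mock
    from keystoneclient.v2_0 import client as keystone_client

    def stub_keystoneclient():
        # spec_set limits the stub to the Client API without touching
        # attributes on the real class at stub-creation time.
        return mock.Mock(spec_set=keystone_client.Client)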
  

[Yahoo-eng-team] [Bug 1279907] Re: Latest keystoneclient breaks tests

2014-03-20 Thread Alan Pevec
** Changed in: horizon/grizzly
   Status: Fix Committed => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1279907

Title:
  Latest keystoneclient breaks tests

Status in Orchestration API (Heat):
  Fix Released
Status in heat havana series:
  Fix Committed
Status in OpenStack Dashboard (Horizon):
  Fix Released
Status in OpenStack Dashboard (Horizon) grizzly series:
  Fix Released
Status in OpenStack Dashboard (Horizon) havana series:
  Fix Committed

Bug description:
  The new release of keystoneclient (0.6.0) introduces some new
  metaclass magic which breaks our mocking in master :(

  We probably need to modify the test to use mock instead of mox, as the
  issue seems to be that mox misinterprets the class type due to the
  metaclass.

  Immediate workaround while we work out the solution is probably to
  temporarily cap keystoneclient to 0.5.1, which did not have this issue.
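
  As a hedged sketch, those two options could look like this (the exact
  lines are assumptions, not the merged fix):

    # (a) temporary cap in requirements.txt:
    #     python-keystoneclient>=0.4.2,<=0.5.1

    # (b) stub the v3 client with mock instead of mox; mock.patch does
    #     not care that the class is produced by a metaclass.
    import mock
    from keystoneclient.v3 import client as kc_v3

    def _stubs_v3(test):
        patcher = mock.patch.object(kc_v3, 'Client')
        test.addCleanup(patcher.stop)
        return patcher.start()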

  Traceback (most recent call last):
File /home/shardy/git/heat/heat/tests/test_heatclient.py, line 449, in 
test_trust_init
  self._stubs_v3(method='trust')
File /home/shardy/git/heat/heat/tests/test_heatclient.py, line 83, in 
_stubs_v3
  self.m.StubOutClassWithMocks(kc_v3, Client)
File /usr/lib/python2.7/site-packages/mox.py, line 366, in 
StubOutClassWithMocks
  raise TypeError('Given attr is not a Class.  Use StubOutWithMock.')
  TypeError: Given attr is not a Class.  Use StubOutWithMock.

To manage notifications about this bug go to:
https://bugs.launchpad.net/heat/+bug/1279907/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1260080] Re: [OSSA 2014-006] Trustee token revocations with memcache backend (CVE-2014-2237)

2014-03-20 Thread Alan Pevec
** Changed in: keystone/grizzly
   Status: Fix Committed => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to Keystone.
https://bugs.launchpad.net/bugs/1260080

Title:
  [OSSA 2014-006] Trustee token revocations with memcache backend
  (CVE-2014-2237)

Status in OpenStack Identity (Keystone):
  Fix Released
Status in Keystone grizzly series:
  Fix Released
Status in Keystone havana series:
  Fix Committed
Status in OpenStack Security Advisories:
  Fix Released

Bug description:
  When using the memcache token backend, bulk token revocations that
  rely on the trustee's token-index-list cannot occur. This is because
  trustee tokens are added only to the trustor's token list. This
  appears to only be an issue for trusts with impersonation enabled. It
  is most noticeable when the trustee user is disabled or the trustee
  changes a password. I am sure there are other related scenarios.

  Example reproduction -

  Create the trust:

  curl -H "X-Auth-Token: $TOKEN" -X POST http://127.0.0.1:35357/v3/OS-TRUST/trusts \
    -d '{"trust": {"impersonation": true,
                   "project_id": "9b2c18a72c8a477b891b48931c26cebe",
                   "roles": [{"name": "admin"}],
                   "trustee_user_id": "36240e5f6ed04857bf15d08a519b6e37",
                   "trustor_user_id": "46c9e36c50c14dfab36b6511d6ebedd4"}}' \
    -H 'Content-Type: application/json'

  RESPONSE:
  {
      "trust": {
          "expires_at": null,
          "id": "0d2dc361043d4f9d8a84a7f253e20924",
          "impersonation": true,
          "links": {
              "self": "http://172.16.30.195:5000/v3/OS-TRUST/trusts/0d2dc361043d4f9d8a84a7f253e20924"
          },
          "project_id": "9b2c18a72c8a477b891b48931c26cebe",
          "roles": [
              {
                  "id": "0bd1d61badd742bebc044dc246a43513",
                  "links": {
                      "self": "http://172.16.30.195:5000/v3/roles/0bd1d61badd742bebc044dc246a43513"
                  },
                  "name": "admin"
              }
          ],
          "roles_links": {
              "next": null,
              "previous": null,
              "self": "http://172.16.30.195:5000/v3/OS-TRUST/trusts/0d2dc361043d4f9d8a84a7f253e20924/roles"
          },
          "trustee_user_id": "36240e5f6ed04857bf15d08a519b6e37",
          "trustor_user_id": "46c9e36c50c14dfab36b6511d6ebedd4"
      }
  }

  Consume the trust:

  vagrant@precise64:~/devstack$ cat trust_token.json
  {
      "auth": {
          "identity": {
              "methods": [
                  "token"
              ],
              "token": {
                  "id": "PKI TOKEN ID"
              }
          },
          "scope": {
              "OS-TRUST:trust": {
                  "id": "0d2dc361043d4f9d8a84a7f253e20924"
              }
          }
      }
  }

  curl -si -d @trust_token.json -H 'Content-Type: application/json' -X
  POST http://localhost:35357/v3/auth/tokens

  RESPONSE:

  {
      "token": {
          "OS-TRUST:trust": {
              "id": "0d2dc361043d4f9d8a84a7f253e20924",
              "impersonation": true,
              "trustee_user": {
                  "id": "36240e5f6ed04857bf15d08a519b6e37"
              },
              "trustor_user": {
                  "id": "46c9e36c50c14dfab36b6511d6ebedd4"
              }
          },
          "catalog": [
              ... SNIP FOR BREVITY
          ],
          "expires_at": "2013-12-12T20:06:00.239812Z",
          "extras": {},
          "issued_at": "2013-12-11T20:10:22.224381Z",
          "methods": [
              "token",
              "password"
          ],
          "project": {
              "domain": {
                  "id": "default",
                  "name": "Default"
              },
              "id": "9b2c18a72c8a477b891b48931c26cebe",
              "name": "admin"
          },
          "roles": [
              {
                  "id": "0bd1d61badd742bebc044dc246a43513",
                  "name": "admin"
              }
          ],
          "user": {
              "domain": {
                  "id": "default",
                  "name": "Default"
              },
              "id": "46c9e36c50c14dfab36b6511d6ebedd4",
              "name": "admin"
          }
      }
  }

  Check the memcache token lists, the admin user should have 1 token
  (used to create the trust), the demo user should have 2 tokens
  (initial token, and trust token)

  ADMIN-ID: 46c9e36c50c14dfab36b6511d6ebedd4
  DEMO-ID: 36240e5f6ed04857bf15d08a519b6e37

  vagrant@precise64:~/devstack$ telnet localhost 11211
  Trying 127.0.0.1...
  Connected to localhost.
  Escape character is '^]'.
  get usertokens-46c9e36c50c14dfab36b6511d6ebedd4
  VALUE usertokens-46c9e36c50c14dfab36b6511d6ebedd4 0 104
  1bb6a8abc42e1d1a944e08a20de20a91,ee512379a66027733001648799083349
  END
  get usertokens-36240e5f6ed04857bf15d08a519b6e37
  VALUE usertokens-36240e5f6ed04857bf15d08a519b6e37 0 34
  36df008ec5b5d3dd6983c8fe84d407f6

  This does not affect the SQL or KVS backends.  This will affect Master
  (until KVS refactor is complete), Havana, and Grizzly. 
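
  A hedged sketch of the class of fix required (not the actual Keystone
  patch; the helper name and store API are assumptions): index a trust
  token under both parties, so bulk revocations driven by either user's
  token list can find it.

    def index_trust_token(store, token_id, trustor_id, trustee_id):
        # The memcache backend keeps 'usertokens-<user_id>' as a
        # comma-separated list of token ids (see the telnet output above).
        for user_id in (trustor_id, trustee_id):
            key = 'usertokens-%s' % user_id
            existing = store.get(key) or ''
            ids = [t for t in existing.split(',') if t]
            if token_id not in ids:
                ids.append(token_id)
            store.set(key, ','.join(ids))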

[Yahoo-eng-team] [Bug 1251590] Re: [OSSA 2014-003] Live migration can leak root disk into ephemeral storage (CVE-2013-7130)

2014-03-20 Thread Alan Pevec
** Changed in: nova/grizzly
   Status: Fix Committed => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1251590

Title:
  [OSSA 2014-003] Live migration can leak root disk into ephemeral
  storage (CVE-2013-7130)

Status in OpenStack Compute (Nova):
  Fix Released
Status in OpenStack Compute (nova) grizzly series:
  Fix Released
Status in OpenStack Compute (nova) havana series:
  Fix Released
Status in OpenStack Security Advisories:
  Fix Released

Bug description:
  During pre-live-migration required disks are created along with their
  backing files (if they don't already exist). However, the ephemeral
  backing file is created from a glance downloaded root disk.

  # If the required ephemeral backing file is present then there's no
  issue.

  # If the required ephemeral backing file is not already present, then
  the root disk is downloaded and saved as the ephemeral backing file.
  This will result in the following situations:

  ## The disk.local transferred during live-migration will be rebased on the 
ephemeral backing file so regardless of the content, the end result will be 
identical to the source disk.local.
  ## However, if a new instance of the same flavor is spawned on this compute 
node, then it will have an ephemeral storage that exposes a root disk.

  Security concerns:

  If the migrated VM was spawned off a snapshot, now it's possible for
  any instances of the correct flavor to see the snapshot contents of
  another user via the ephemeral storage.
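
  A hedged illustration of the safe behaviour (function name and the
  ext3 default are assumptions, not the actual Nova patch): a missing
  ephemeral backing file should be created blank and formatted locally,
  never seeded from a glance-downloaded root disk.

    import os
    from nova import utils

    def ensure_ephemeral_backing(path, size_gb, fs_type='ext3'):
        if os.path.exists(path):
            return  # backing file already cached: nothing to do
        # Create an empty file of the right size, then format it; no
        # tenant data is ever copied into the shared backing file.
        utils.execute('qemu-img', 'create', '-f', 'raw',
                      path, '%dG' % size_gb)
        utils.execute('mkfs', '-t', fs_type, '-F', path, run_as_root=True)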

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1251590/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1253980] Re: [OSSA 2013-037] DoS attack via setting os_type in snapshots (CVE-2013-6437)

2014-03-20 Thread Alan Pevec
** Changed in: nova/grizzly
   Status: Fix Committed => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1253980

Title:
  [OSSA 2013-037] DoS attack via setting os_type in snapshots
  (CVE-2013-6437)

Status in OpenStack Compute (Nova):
  Fix Released
Status in OpenStack Compute (nova) grizzly series:
  Fix Released
Status in OpenStack Compute (nova) havana series:
  Fix Released
Status in OpenStack Security Advisories:
  Fix Released

Bug description:
  If the os_type metadata is set on an image, the ephemeral disk backing
  file for that image will be named ephemeral_[size]_[os_type]. Because
  the user can change os_type, they can use this to create new ephemeral
  backing files. Nova image cache management does not include deleting
  ephemeral backing files (presumably because they are expected to be a
  small, stable set).

  Hence a user can fill the disk with ephemeral backing files via the
  following:

  1) Spawn an instance
  2) Create a snapshot from it, delete the original instance
  3) In a loop:
  generate a random os_type
  set os_type on the snapshot
  spawn an instance from it, and then delete the instance

  Every iteration will generate an ephemeral backing file on a compute
  host.  With a stacking scheduling policy there is a good chance of
  hitting the same host repeatedly until its disk is full.

  Suggested mitigation:

  Only use “os_type” in the ephemeral file name if there is a specific
  mkfs command defined; otherwise use “default”. (Currently, for
  undefined os_types it will use the default mkfs command, but still
  uses os_type in the name.)
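
  A minimal sketch of that mitigation (the mapping argument is an
  assumption; in Nova the per-os_type mkfs commands come from
  configuration):

    def ephemeral_backing_name(size_gb, os_type, mkfs_commands):
        # Only trust os_type when an operator explicitly configured a
        # mkfs command for it; everything else collapses to 'default',
        # so users cannot mint unbounded backing files.
        label = os_type if os_type in mkfs_commands else 'default'
        return 'ephemeral_%s_%s' % (size_gb, label)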

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1253980/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1252806] Re: unable to add allow all ingress traffic security group rule

2014-03-20 Thread Alan Pevec
** Changed in: neutron/grizzly
   Status: Fix Committed => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1252806

Title:
  unable to add allow all ingress traffic security group rule

Status in OpenStack Neutron (virtual network service):
  Fix Released
Status in neutron grizzly series:
  Fix Released
Status in neutron havana series:
  Fix Released

Bug description:
  The following rule cannot be installed:

  $ neutron security-group-rule-create --direction ingress default
  409-{u'NeutronError': {u'message': u'Security group rule already exists. 
Group id is 29dc1837-75d3-457a-8a90-14f4b6ea6db9.', u'type': 
u'SecurityGroupRuleExists', u'detail': u''}}

  
  The reason for this is that when the db query is done, it passes this
  in as a filter:

  {'tenant_id': [u'577a2f0c78fb4e36b76902977a5c1708'], 'direction':
  [u'ingress'], 'ethertype': ['IPv4'], 'security_group_id':
  [u'0fb10163-81b2-4538-bd11-dbbd3878db51']}

  
  and the remote_group_id is wildcarded, thus it matches this rule:

  [ {'direction': u'ingress',
'ethertype': u'IPv4',
'id': u'8d5c3429-f4ef-4258-8140-5ff3247f9dd6',
'port_range_max': None,
'port_range_min': None,
'protocol': None,
'remote_group_id': None,
'remote_ip_prefix': None,
'security_group_id': u'0fb10163-81b2-4538-bd11-dbbd3878db51',
'tenant_id': u'577a2f0c78fb4e36b76902977a5c1708'}]
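
  A hedged sketch of the fix direction (the field list is illustrative):
  the duplicate check must carry unset fields as explicit None values,
  since a key dropped from the filter behaves as a wildcard.

    def rule_duplicate_filter(rule):
        keys = ('tenant_id', 'security_group_id', 'direction', 'ethertype',
                'protocol', 'port_range_min', 'port_range_max',
                'remote_group_id', 'remote_ip_prefix')
        # None stays in the dict so it must match NULL in the database
        # rather than matching any value at all.
        return dict((k, [rule.get(k)]) for k in keys)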

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1252806/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1260423] Re: Email shouldn't be a mandatory attribute

2014-03-20 Thread Alan Pevec
** Changed in: horizon/grizzly
   Status: Fix Committed => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1260423

Title:
  Email shouldn't be a mandatory attribute

Status in OpenStack Dashboard (Horizon):
  Fix Released
Status in OpenStack Dashboard (Horizon) grizzly series:
  Fix Released
Status in OpenStack Dashboard (Horizon) havana series:
  Fix Released

Bug description:
  When using an LDAP backend, it's possible that a user won't have the
  email attribute defined; however, it should still be possible to edit
  the other fields.

  Steps to reproduce (in an environment with keystone using an LDAP backend):
  1. Log in as admin
  2. Go to the Users dashboard
  3. Select a user that doesn't have an email defined

  Expected result:
  4. Edit user modal opens

  Actual result:
  4. Error 500

  Traceback:
  File /usr/lib/python2.7/site-packages/django/views/generic/edit.py in get
154. form = self.get_form(form_class)
  File 
/opt/stack/horizon/openstack_dashboard/wsgi/../../horizon/forms/views.py in 
get_form
82. return form_class(self.request, **self.get_form_kwargs())
  File /usr/lib/python2.7/site-packages/django/views/generic/edit.py in 
get_form_kwargs
41. kwargs = {'initial': self.get_initial()}
  File 
/opt/stack/horizon/openstack_dashboard/wsgi/../../openstack_dashboard/dashboards/admin/users/views.py
 in get_initial
103. 'email': user.email}
  File /opt/stack/python-keystoneclient/keystoneclient/base.py in __getattr__
425. raise AttributeError(k)

  Exception Type: AttributeError at 
/admin/users/e005aa43475b403c8babdff86ea27c37/update/
  Exception Value: email
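
  A minimal sketch of the defensive fix in the view's get_initial()
  shown in the traceback (surrounding fields abbreviated):

    def get_initial(self):
        user = self.get_object()
        return {'id': user.id,
                'name': user.name,
                # LDAP users may simply lack the attribute, so fall
                # back to None instead of raising AttributeError.
                'email': getattr(user, 'email', None)}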

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1260423/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1242855] Re: [OSSA 2013-028] Removing role adds role with LDAP backend

2014-03-20 Thread Alan Pevec
** Changed in: keystone/grizzly
   Status: Fix Committed => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to Keystone.
https://bugs.launchpad.net/bugs/1242855

Title:
  [OSSA 2013-028] Removing role adds role with LDAP backend

Status in OpenStack Identity (Keystone):
  Fix Released
Status in Keystone grizzly series:
  Fix Released
Status in Keystone havana series:
  Fix Released
Status in OpenStack Security Advisories:
  Fix Released

Bug description:
  Using the LDAP assignment backend, if you attempt to remove a role
  from a user on a tenant and the user doesn't have that role on the
  tenant then the user is actually granted the role on the tenant. Also,
  the role must not have been granted to anyone on the tenant before.

  To recreate:

  0) Start with devstack, configured with LDAP (note especially to set
  KEYSTONE_ASSIGNMENT_BACKEND):

  In localrc,
   enable_service ldap
   KEYSTONE_IDENTITY_BACKEND=ldap
   KEYSTONE_ASSIGNMENT_BACKEND=ldap

  1) set up environment with OS_USERNAME=admin

  export OS_USERNAME=admin
  ...

  2) Create a new user, give admin role, list roles:

  $ keystone user-create --name blktest1 --pass blkpwd
  +--+--+
  | Property |  Value   |
  +--+--+
  |  email   |  |
  | enabled  |   True   |
  |id| 3b71182dc36e45c6be4733d508201694 |
  |   name   | blktest1 |
  +--+--+

  $ keystone user-role-add --user blktest1 --role admin --tenant service
  (no output)

  $ keystone --os-user=blktest1 --os-pass=blkpwd --os-tenant-name service 
user-role-list
  
+--+---+--+--+
  |id|  name | user_id  
|tenant_id |
  
+--+---+--+--+
  | 1c39fab0fa9a4a68b307e7ce1535c62b | admin | 3b71182dc36e45c6be4733d508201694 
| 5b0af1d5013746b286b0d650da73be57 |
  
+--+---+--+--+

  3) Remove a role from that user that they don't have (using anotherrole
  here since devstack sets it up):

  $ keystone --os-user=blktest1 --os-pass=blkpwd --os-tenant-name
  service user-role-remove --user blktest1 --role anotherrole --tenant
  service

  - Expected to fail with 404, but it doesn't!

  4) List roles as that user:

  $ keystone --os-user=blktest1 --os-pass=blkpwd --os-tenant-name service 
user-role-list
  
+--+-+--+--+
  |id| name| user_id
  |tenant_id |
  
+--+-+--+--+
  | 1c39fab0fa9a4a68b307e7ce1535c62b |admin| 
3b71182dc36e45c6be4733d508201694 | 5b0af1d5013746b286b0d650da73be57 |
  | afe23e7955704ccfad803b4a104b28a7 | anotherrole | 
3b71182dc36e45c6be4733d508201694 | 5b0af1d5013746b286b0d650da73be57 |
  
+--+-+--+--+

  - Expected to not include the role that was just removed!

  5) Remove the role again:

  $ keystone --os-user=blktest1 --os-pass=blkpwd --os-tenant-name
  service user-role-remove --user blktest1 --role anotherrole --tenant
  service

  - No errors, which I guess is expected since list just said they had
  the role...

  6) List roles, and now it's gone:

  $ keystone --os-user=blktest1 --os-pass=blkpwd --os-tenant-name service 
user-role-list
  
+--+---+--+--+
  |id|  name | user_id  
|tenant_id |
  
+--+---+--+--+
  | 1c39fab0fa9a4a68b307e7ce1535c62b | admin | 3b71182dc36e45c6be4733d508201694 
| 5b0af1d5013746b286b0d650da73be57 |
  
+--+---+--+--+

  7) Remove role again:

  $ keystone --os-user=blktest1 --os-pass=blkpwd --os-tenant-name service 
user-role-remove --user blktest1 --role anotherrole --tenant service
  Could not find user, 3b71182dc36e45c6be4733d508201694. (HTTP 404)

  - Strangely says user not found rather than role not assigned.
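
  A hedged sketch of the contract the assignment backend must honour
  (a plain-Python stand-in, not the LDAP fix itself): deleting a grant
  that does not exist must raise, never create it.

    class RoleNotFound(Exception):
        pass

    def delete_grant(assignments, role_id, user_id, tenant_id):
        roles = assignments.get((user_id, tenant_id), set())
        if role_id not in roles:
            # The LDAP backend instead issued a modify-add here, which
            # is what silently granted the role.
            raise RoleNotFound(role_id)
        roles.remove(role_id)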

To manage notifications about this bug go to:
https://bugs.launchpad.net/keystone/+bug/1242855/+subscriptions

[Yahoo-eng-team] [Bug 1235450] Re: [OSSA 2013-033] Metadata queries from Neutron to Nova are not restricted by tenant (CVE-2013-6419)

2014-03-20 Thread Alan Pevec
** Changed in: nova/grizzly
   Status: Fix Committed => Fix Released

** Changed in: neutron/grizzly
   Status: Fix Committed => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1235450

Title:
  [OSSA 2013-033] Metadata queries from Neutron to Nova are not
  restricted by tenant (CVE-2013-6419)

Status in OpenStack Neutron (virtual network service):
  Fix Released
Status in neutron grizzly series:
  Fix Released
Status in neutron havana series:
  Fix Released
Status in OpenStack Compute (Nova):
  Fix Released
Status in OpenStack Compute (nova) grizzly series:
  Fix Released
Status in OpenStack Compute (nova) havana series:
  Fix Released
Status in OpenStack Security Advisories:
  Fix Released

Bug description:
  The neutron metadata service works in the following way:

  Instance makes a GET request to http://169.254.169.254/

  This is directed to the metadata-agent, which knows which router
  (namespace) it is running on and determines the ip_address from the
  http request it receives.

  Now, the neutron-metadata-agent queries neutron-server using the
  router_id and ip_address from the request to determine the port the
  request came from. Next, the agent takes the device_id (nova-instance-
  id) on the port and passes that to nova as X-Instance-ID.

  The vulnerability is that if someone exposes their instance_id their
  metadata can be retrieved. In order to exploit this, one would need to
  update the device_id  on a port to match the instance_id they want to
  hijack the data from.
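
  A hedged sketch of the hardening that closes this hole (header names
  follow the protocol described above; the nova-side check is an
  assumption about the shape of the fix, not the exact patch). The
  exploit itself is demonstrated below.

    # neutron-metadata-agent: forward the port's tenant along with the
    # instance id so nova can cross-check ownership.
    headers = {
        'X-Instance-ID': port['device_id'],
        'X-Tenant-ID': port['tenant_id'],
    }

    # nova metadata handler: refuse to serve data when the device_id on
    # the port was forged to point at another tenant's instance.
    def check_tenant(instance, request_headers):
        if instance['project_id'] != request_headers.get('X-Tenant-ID'):
            raise LookupError('tenant mismatch; not serving metadata')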

  To demonstrate:

  arosen@arosen-desktop:~/devstack$ nova list
  
+--+--+++-+--+
  | ID   | Name | Status | Task State | Power 
State | Networks |
  
+--+--+++-+--+
  | 1eb33bf1-6400-483a-9747-e19168b68933 | vm1  | ACTIVE | None   | Running 
| private=10.0.0.4 |
  | eed973e2-58ea-42c4-858d-582ff6ac3a51 | vm2  | ACTIVE | None   | Running 
| private=10.0.0.3 |
  
+--+--+++-+--+

  
  arosen@arosen-desktop:~/devstack$ neutron port-list
  
+--+--+---+-+
  | id   | name | mac_address   | fixed_ips 
  |
  
+--+--+---+-+
  | 3128f195-c41b-4160-9a42-40e024771323 |  | fa:16:3e:7d:a5:df | 
{subnet_id: d5cbaa98-ecf0-495c-b009-b5ea6160259b, ip_address: 10.0.0.1} 
|
  | 62465157-8494-4fb7-bdce-2b8697f03c12 |  | fa:16:3e:94:62:47 | 
{subnet_id: d5cbaa98-ecf0-495c-b009-b5ea6160259b, ip_address: 10.0.0.4} 
|
  | 8473fb8d-b649-4281-b03a-06febf61b400 |  | fa:16:3e:4f:a3:b0 | 
{subnet_id: d5cbaa98-ecf0-495c-b009-b5ea6160259b, ip_address: 10.0.0.2} 
|
  | 92c42c1a-efb0-46a6-89eb-a38ae170d76d |  | fa:16:3e:de:9a:39 | 
{subnet_id: d5cbaa98-ecf0-495c-b009-b5ea6160259b, ip_address: 10.0.0.3} 
|
  
+--+--+---+-+

  
  arosen@arosen-desktop:~/devstack$ neutron port-show  
62465157-8494-4fb7-bdce-2b8697f03c12
  
+---+-+
  | Field | Value   
|
  
+---+-+
  | admin_state_up| True
|
  | allowed_address_pairs | 
|
  | device_id | 1eb33bf1-6400-483a-9747-e19168b68933
|
  | device_owner  | compute:None
|
  | extra_dhcp_opts   | 
|
  | fixed_ips | {subnet_id: 
d5cbaa98-ecf0-495c-b009-b5ea6160259b, ip_address: 10.0.0.4} |
  | id| 62465157-8494-4fb7-bdce-2b8697f03c12
|
  | mac_address   | fa:16:3e:94:62:47   
|
  | name  | 
 

[Yahoo-eng-team] [Bug 1242501] Re: Jenkins failed due to TestGlanceAPI.test_get_details_filter_changes_since

2014-03-20 Thread Alan Pevec
** Changed in: glance/grizzly
   Status: Fix Committed => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to Glance.
https://bugs.launchpad.net/bugs/1242501

Title:
  Jenkins failed due to
  TestGlanceAPI.test_get_details_filter_changes_since

Status in OpenStack Image Registry and Delivery Service (Glance):
  Fix Released
Status in Glance grizzly series:
  Fix Released
Status in Glance havana series:
  Fix Released

Bug description:
  Now we're running into a Jenkins failure due to the test case failure
  below:

  2013-10-20 06:12:31.930 | 
==
  2013-10-20 06:12:31.930 | FAIL: 
glance.tests.unit.v1.test_api.TestGlanceAPI.test_get_details_filter_changes_since
  2013-10-20 06:12:31.930 | 
--
  2013-10-20 06:12:31.930 | _StringException: Traceback (most recent call last):
  2013-10-20 06:12:31.931 |   File 
/home/jenkins/workspace/gate-glance-python27/glance/tests/unit/v1/test_api.py,
 line 1358, in test_get_details_filter_changes_since
  2013-10-20 06:12:31.931 | self.assertEquals(res.status_int, 400)
  2013-10-20 06:12:31.931 |   File 
/home/jenkins/workspace/gate-glance-python27/.tox/py27/local/lib/python2.7/site-packages/testtools/testcase.py,
 line 322, in assertEqual
  2013-10-20 06:12:31.931 | self.assertThat(observed, matcher, message)
  2013-10-20 06:12:31.931 |   File 
/home/jenkins/workspace/gate-glance-python27/.tox/py27/local/lib/python2.7/site-packages/testtools/testcase.py,
 line 417, in assertThat
  2013-10-20 06:12:31.931 | raise MismatchError(matchee, matcher, mismatch, 
verbose)
  2013-10-20 06:12:31.931 | MismatchError: 200 != 400

To manage notifications about this bug go to:
https://bugs.launchpad.net/glance/+bug/1242501/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1242597] Re: [OSSA 2013-032] Keystone trust circumvention through EC2-style tokens (CVE-2013-6391)

2014-03-20 Thread Alan Pevec
** Changed in: keystone/grizzly
   Status: Fix Committed => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to Keystone.
https://bugs.launchpad.net/bugs/1242597

Title:
  [OSSA 2013-032] Keystone trust circumvention through EC2-style tokens
  (CVE-2013-6391)

Status in OpenStack Identity (Keystone):
  Fix Released
Status in Keystone grizzly series:
  Fix Released
Status in Keystone havana series:
  Fix Released
Status in OpenStack Security Advisories:
  Fix Released

Bug description:
  So I finally got around to investigating the scenario I mentioned in
  https://review.openstack.org/#/c/40444/, and unfortunately it seems
  that the ec2tokens API does indeed provide a way to circumvent the
  role delegation provided by trusts, and obtain all the roles of the
  trustor user, not just those explicitly delegated.

  Steps to reproduce:
  - Trustor creates a trust delegating a subset of roles
  - Trustee gets a token scoped to that trust
  - Trustee creates an ec2-keypair
  - Trustee makes a request to the ec2tokens API, to validate a signature 
created with the keypair
  - ec2tokens API returns a new token, which is not scoped to the trust and 
enables access to all the trustor's roles.

  I can provide some test code which demonstrates the issue.
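
  A hedged sketch of the invariant the fix must enforce (attribute names
  are assumptions): a token minted through ec2tokens from a credential
  created under a trust must stay scoped to that trust.

    def issue_ec2_token(cred_ref, issue_token):
        trust_id = cred_ref.get('trust_id')
        if trust_id:
            # Re-apply the delegation instead of handing back the
            # trustor's full role set.
            return issue_token(user_id=cred_ref['user_id'],
                               trust_id=trust_id)
        return issue_token(user_id=cred_ref['user_id'])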

To manage notifications about this bug go to:
https://bugs.launchpad.net/keystone/+bug/1242597/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1247675] Re: [OSSA 2013-036] Insufficient sanitization of Instance Name in Horizon (CVE-2013-6858)

2014-03-20 Thread Alan Pevec
** Changed in: horizon/grizzly
   Status: Fix Committed => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1247675

Title:
  [OSSA 2013-036] Insufficient sanitization of Instance Name in Horizon
  (CVE-2013-6858)

Status in OpenStack Dashboard (Horizon):
  Fix Released
Status in OpenStack Dashboard (Horizon) grizzly series:
  Fix Released
Status in OpenStack Dashboard (Horizon) havana series:
  Fix Released
Status in OpenStack Security Advisories:
  Fix Released

Bug description:
  -BEGIN PGP SIGNED MESSAGE-
  Hash: SHA512

  Hello,

  My name is Chris Chapman, I am an Incident Manager with Cisco PSIRT.

  I would like to report the following XSS issue found in the OpenStack
  WebUI that was reported to Cisco.

  The details are as follows:

  The OpenStack web user interface is vulnerable to XSS:

  While launching (or editing) an instance, injecting script tags in
  the instance name results in the javascript being executed on the
  Volumes and the Network Topology page.  This is a classic Stored
  XSS vulnerability.

  Recommendations:
  - - Sanitize the Instance Name string to prevent XSS.
  - - Sanitize all user input to prevent XSS.
  - - Consider utilizing Content Security Policy (CSP). This can be used
  to prevent inline javascript from executing  only load javascript
  files from approved domains.  This would prevent XSS, even in
  scenarios where user input is not
  properly sanitized.

  
  Please include PSIRT-2070334443 in the subject line for all
  communications on this issue with Cisco going forward.

  If you can also include any case number that this issue is assigned
  that will help us track the issue.

  Thank you,
  Chris

  Chris Chapman | Incident Manager
  Cisco Product Security Incident Response Team - PSIRT
  Security Research and Operations
  Office: (949) 823-3167 | Direct: (562) 208-0043
  Email: chchcha...@cisco.com
  SIO: http://www.cisco.com/security
  PGP: 0x959B3169
  -BEGIN PGP SIGNATURE-
  Version: GnuPG/MacGPG2 v2.0.19 (Darwin)
  Comment: GPGTools - http://gpgtools.org
  Comment: Using GnuPG with Thunderbird - http://www.enigmail.net/

  iQEcBAEBCgAGBQJSc8QQAAoJEPMPZe6VmzFpLw8H/1h2ZhqKJs6nxZDGnDpn3N2t
  6S6vwx3UYZGG5O1TTx1wrZkkHxckAg8GzMBJa6HFXPs1Zr0o9nhuLfvdKfShQFUA
  HqWMPOFPKid2LML2FMOGAWAdQAG6YTMknZ9d8JTvHI2BhluOsjxlOa0TBNr/Gm+Z
  iwAOBmAgJqU2nWx1iomiGhUpwX2oaQuqDyaosycpVtv0gQAtYsEf7zYdRNod7kB5
  6CGEXJ8J161Bd04dta99onFAB1swroOpOgUopUoONK4nHDxot/MojnvusDmWe2Fs
  usVLh7d6hB3eDyWpVFhbKwSW+Bkmku1Tl0asCgm1Uy9DkrY23UGZuIqKhFs5A8U=
  =gycf
  -END PGP SIGNATURE-
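
  A minimal sketch of the standard Django-side remedy for this class of
  stored XSS (illustrative; the actual Horizon fix touched its table and
  topology rendering):

    from django.utils.html import escape

    def render_instance_name(instance):
        # Escape user-controlled text before it is embedded in HTML/JS,
        # so injected <script> tags render inert.
        return escape(instance.name)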

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1247675/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1227027] Re: [OSSA 2014-001] Insecure directory permissions with snapshot code (CVE-2013-7048)

2014-03-20 Thread Alan Pevec
** Changed in: nova/grizzly
   Status: Fix Committed => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1227027

Title:
  [OSSA 2014-001] Insecure directory permissions with snapshot code
  (CVE-2013-7048)

Status in OpenStack Compute (Nova):
  Fix Released
Status in OpenStack Compute (nova) grizzly series:
  Fix Released
Status in OpenStack Compute (nova) havana series:
  Fix Released
Status in OpenStack Security Advisories:
  Fix Released

Bug description:
  In the following commit:

  commit 46de2d1e2d0abd6fdcd4da13facaf3225c721f5e
  Author: Rafi Khardalian r...@metacloud.com
  Date:   Sat Jan 26 09:02:19 2013 +

  Libvirt: Add support for live snapshots
  
  blueprint libvirt-live-snapshots
  

  There was the following chunk of code

   snapshot_directory = CONF.libvirt_snapshots_directory
   fileutils.ensure_tree(snapshot_directory)
   with utils.tempdir(dir=snapshot_directory) as tmpdir:
   try:
   out_path = os.path.join(tmpdir, snapshot_name)
  -snapshot.extract(out_path, image_format)
  +if live_snapshot:
  +# NOTE (rmk): libvirt needs to be able to write to the
  +# temp directory, which is owned nova.
  +utils.execute('chmod', '777', tmpdir, run_as_root=True)
  +self._live_snapshot(virt_dom, disk_path, out_path,
  +image_format)
  +else:
  +snapshot.extract(out_path, image_format)

  Making the temporary directory 777 does indeed give QEMU and libvirt
  permission to write there, because it gives every user on the whole
  system permission to write there. Yes, the directory name is
  unpredictable since it uses 'tempdir', this does not eliminate the
  security risk of making it world writable though.

  This flaw is highlighted by the following public commit which makes
  the mode configurable, but still defaults to insecure 777.

  https://review.openstack.org/#/c/46645/
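
  A hedged illustration of a least-privilege alternative inside the
  tempdir context shown above (the 'qemu' group name is an assumption
  that varies by distro; the real requirement is only that libvirt/QEMU
  can write, not every local user):

    from nova import utils

    # grant write access via group ownership instead of mode 777
    utils.execute('chgrp', 'qemu', tmpdir, run_as_root=True)
    utils.execute('chmod', '0770', tmpdir, run_as_root=True)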

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1227027/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1239303] Re: trust-scoped tokens from v2 API have wrong user_id

2014-03-20 Thread Alan Pevec
** Changed in: keystone/grizzly
   Status: Fix Committed => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to Keystone.
https://bugs.launchpad.net/bugs/1239303

Title:
  trust-scoped tokens from v2 API have wrong user_id

Status in OpenStack Identity (Keystone):
  Fix Released
Status in Keystone grizzly series:
  Fix Released

Bug description:
  When requesting a trust scoped token via the v2 API with
  impersonation=True, the resulting user_id is wrong, it's the trustee
  not the trustor.

  The problem is comparing with 'True' string instead of boolean True
  here:

  
https://github.com/openstack/keystone/blob/master/keystone/token/controllers.py#L184
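
  As a hedged illustration of that comparison bug (variable names
  simplified from the controller code):

    impersonation = trust_ref['impersonation']  # a boolean, not a string

    # broken: a boolean never equals the string 'True'
    if impersonation == 'True':
        token_user_id = trustor_user_id

    # fixed: compare against the boolean itself
    if impersonation is True:
        token_user_id = trustor_user_id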

To manage notifications about this bug go to:
https://bugs.launchpad.net/keystone/+bug/1239303/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1251152] Re: create instance with ephemeral disk fails

2014-03-20 Thread Alan Pevec
** Changed in: nova/grizzly
   Status: Fix Committed => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1251152

Title:
  create instance with ephemeral disk fails

Status in OpenStack Compute (Nova):
  Fix Released
Status in OpenStack Compute (nova) grizzly series:
  Fix Released
Status in OpenStack Compute (nova) havana series:
  Fix Released

Bug description:
  2013-11-14 16:31:23.032 TRACE nova.compute.manager [instance: 
f94c5521-0ce5-4be0-b35c-def82afc640c]   File /opt/stack/nova/nova/comput
  e/manager.py, line 1037, in _build_instance
  2013-11-14 16:31:23.032 TRACE nova.compute.manager [instance: 
f94c5521-0ce5-4be0-b35c-def82afc640c] set_access_ip=set_access_ip)
  2013-11-14 16:31:23.032 TRACE nova.compute.manager [instance: 
f94c5521-0ce5-4be0-b35c-def82afc640c]   File /opt/stack/nova/nova/comput
  e/manager.py, line 1410, in _spawn
  2013-11-14 16:31:23.032 TRACE nova.compute.manager [instance: 
f94c5521-0ce5-4be0-b35c-def82afc640c] LOG.exception(_('Instance faile
  d to spawn'), instance=instance)
  2013-11-14 16:31:23.032 TRACE nova.compute.manager [instance: 
f94c5521-0ce5-4be0-b35c-def82afc640c]   File /opt/stack/nova/nova/comput
  e/manager.py, line 1407, in _spawn
  2013-11-14 16:31:23.032 TRACE nova.compute.manager [instance: 
f94c5521-0ce5-4be0-b35c-def82afc640c] block_device_info)
  2013-11-14 16:31:23.032 TRACE nova.compute.manager [instance: 
f94c5521-0ce5-4be0-b35c-def82afc640c]   File /opt/stack/nova/nova/virt/l
  ibvirt/driver.py, line 2063, in spawn
  2013-11-14 16:31:23.032 TRACE nova.compute.manager [instance: 
f94c5521-0ce5-4be0-b35c-def82afc640c] admin_pass=admin_password)
  2013-11-14 16:31:23.032 TRACE nova.compute.manager [instance: 
f94c5521-0ce5-4be0-b35c-def82afc640c]   File /opt/stack/nova/nova/virt/l
  ibvirt/driver.py, line 2370, in _create_image
  2013-11-14 16:31:23.032 TRACE nova.compute.manager [instance: 
f94c5521-0ce5-4be0-b35c-def82afc640c] ephemeral_size=ephemeral_gb)
  2013-11-14 16:31:23.032 TRACE nova.compute.manager [instance: 
f94c5521-0ce5-4be0-b35c-def82afc640c]   File /opt/stack/nova/nova/virt/l
  ibvirt/imagebackend.py, line 174, in cache
  2013-11-14 16:31:23.032 TRACE nova.compute.manager [instance: 
f94c5521-0ce5-4be0-b35c-def82afc640c] *args, **kwargs)
  2013-11-14 16:31:23.032 TRACE nova.compute.manager [instance: 
f94c5521-0ce5-4be0-b35c-def82afc640c]   File /opt/stack/nova/nova/virt/l
  ibvirt/imagebackend.py, line 307, in create_image
  2013-11-14 16:31:23.032 TRACE nova.compute.manager [instance: 
f94c5521-0ce5-4be0-b35c-def82afc640c] prepare_template(target=base, m
  ax_size=size, *args, **kwargs)
  2013-11-14 16:31:23.032 TRACE nova.compute.manager [instance: 
f94c5521-0ce5-4be0-b35c-def82afc640c]   File /opt/stack/nova/nova/openst
  ack/common/lockutils.py, line 246, in inner
  2013-11-14 16:31:23.032 TRACE nova.compute.manager [instance: 
f94c5521-0ce5-4be0-b35c-def82afc640c] return f(*args, **kwargs)
  2013-11-14 16:31:23.032 TRACE nova.compute.manager [instance: 
f94c5521-0ce5-4be0-b35c-def82afc640c]   File 
/opt/stack/nova/nova/virt/libvirt/imagebackend.py, line 162, in 
call_if_not_exists
  2013-11-14 16:31:23.032 TRACE nova.compute.manager [instance: 
f94c5521-0ce5-4be0-b35c-def82afc640c] fetch_func(target=target, *args, 
**kwargs)
  2013-11-14 16:31:23.032 TRACE nova.compute.manager [instance: 
f94c5521-0ce5-4be0-b35c-def82afc640c] TypeError: _create_ephemeral() got an 
unexpected keyword argument 'max_size'
  2013-11-14 16:31:23.032 TRACE nova.compute.manager [instance: 
f94c5521-0ce5-4be0-b35c-def82afc640c] 

  The max_size argument was added in 3cdfe894ab58f7b91bf7fb690fc5bc724e44066f;
  when creating ephemeral disks, the _create_ephemeral method gets an
  unexpected keyword argument max_size.
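
  A hedged sketch of the signature fix (parameter list simplified): let
  _create_ephemeral accept the max_size keyword that the caching layer
  now passes to every fetch_func, even though ephemeral disks ignore it.

    def _create_ephemeral(target, ephemeral_size,
                          fs_label, os_type, max_size=None):
        # max_size only matters for images fetched from glance; an
        # ephemeral disk is created locally at exactly ephemeral_size.
        ...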

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1251152/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1188543] Re: NBD mount errors when booting an instance from volume

2014-03-20 Thread Alan Pevec
** Changed in: nova/grizzly
   Status: Fix Committed => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1188543

Title:
  NBD mount errors when booting an instance from volume

Status in OpenStack Compute (Nova):
  Fix Released
Status in OpenStack Compute (nova) grizzly series:
  Fix Released
Status in OpenStack Compute (nova) havana series:
  Fix Released

Bug description:
  My environment:
  - Grizzly OpenStack (installed from Ubuntu repository)
  - Network using Quantum
  - Cinder backed by a Ceph cluster

  I'm able to boot an instance from a volume but it takes a long time
  for the instance to be active. I've got warnings in the logs of the
  nova-compute node (see attached file). The logs show that the problem
  is related to file injection in the disk image which isn't
  required/relevant when booting from a volume.
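
  A hedged sketch of the fix direction (helper names are assumptions):
  skip file injection when the root device comes from a volume, since
  there is no local image to NBD-mount.

    def maybe_inject_files(driver, instance, block_device_info):
        # Injection only makes sense when nova owns a local root disk
        # image; a volume-backed root device has nothing to mount.
        booted_from_volume = driver._volume_in_mapping(
            driver.default_root_device, block_device_info)
        if not booted_from_volume:
            driver._inject_data(instance, block_device_info)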

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1188543/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1211338] Re: Direct vs. direct in impl_qpid

2014-03-20 Thread Alan Pevec
** Changed in: neutron/grizzly
   Status: Fix Committed => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1211338

Title:
  Direct vs. direct in impl_qpid

Status in Cinder:
  Fix Released
Status in OpenStack Neutron (virtual network service):
  Fix Released
Status in neutron grizzly series:
  Fix Released
Status in OpenStack Compute (Nova):
  Fix Released
Status in OpenStack Compute (nova) grizzly series:
  Won't Fix
Status in Oslo - a Library of Common OpenStack Code:
  Fix Released

Bug description:
  impl_qpid.py has {"type": "Direct"} (with a capital D) in one place and
  "direct" (lowercase) in others.  It appears that qpid is case-
  sensitive about exchange types, so the version with the capital D is
  invalid.  This ends up causing qpid to throw an error like:

   /usr/lib/python2.6/site-packages/qpid/messaging/endpoints.py,
   line 567, in _ewait\nself.check_error()\n', '  File
   /usr/lib/python2.6/site-packages/qpid/messaging/endpoints.py,
   line 556, in check_error\nraise self.error\n', 'NotFound:
   not-found: Exchange type not implemented: Direct
   (qpid/broker/SessionAdapter.cpp:117)(404)\n']

  It should be a one-character fix.
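
  A hedged illustration of the fix inside the qpid address options (the
  dict shape follows impl_qpid's consumer declarations; surrounding
  context abbreviated):

    addr_opts = {
        "node": {
            "x-declare": {
                "durable": True,
                "type": "direct",   # was "Direct"; qpid exchange types
            },                      # are case-sensitive and lowercase
        },
    }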

To manage notifications about this bug go to:
https://bugs.launchpad.net/cinder/+bug/1211338/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1202266] Re: [OSSA 2013-030] xenapi: secgroups are not in place after live-migration (CVE-2013-4497)

2014-03-20 Thread Alan Pevec
** Changed in: nova/grizzly
   Status: Fix Committed => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1202266

Title:
  [OSSA 2013-030] xenapi: secgroups are not in place after live-
  migration (CVE-2013-4497)

Status in OpenStack Compute (Nova):
  Fix Released
Status in OpenStack Compute (nova) grizzly series:
  Fix Released
Status in OpenStack Security Advisories:
  Fix Released

Bug description:
  Distributed Setup with:
  2x Compute nodes with Debian Wheezy + XCP installed from repositories 
(Kronos). The VM controlling XCP is installed on Ubuntu Precise.
  1x Controller node with Keystone and Nova (except Network and Compute) on 
Ubuntu Precise with OpenStack Grizzly installed from cloud archive.
  1x Network node running Nova network with FlatDHCP (no quantum is used 
because it is not supported for XCP yet - I think it will starting with Havana 
release). The network node has 3 interfaces. 1x Public, 1x Management, 1x 
Tenant.
  1x Storage node running Cinder, Glance and NFSv3 for shared storage to 
support live migration

  
  I experiment with XCP and live migration these days so after I configured 
everything else, I tried to configure floating IP addresses as well. The 
configuration of the floating IP's was trivial but when I booted a VM, I 
instantly migrated it (that's what I am mostly testing) and then assigned a 
floating IP. Then I tried to ping it and connect to it using ssh and everything 
worked fine.

  I boot a second VM and this time I do not migrate it. I assign a
  floating IP address, and no ping or ssh connection can be made to this
  one even though the iptables have been set up correctly (the SNAT and
  DNAT). I migrate the VM and then I can connect to it using SSH without
  any problems.

  In the beginning I thought it is a bug and for some reason when you
  boot even though you should be able to connect, you cannot. After
  looking in the documentation I found this:
  http://docs.openstack.org/essex/openstack-compute/admin/content
  /configuring-openstack-compute-basics.html#enabling-access-to-vms-on-
  the-compute-node

  What I understood from this is that it is the other way around and I
  should NOT be able to ping or connect to the VMs using SSH by default
  if I don't explicitly add the secgroup rules to allow such actions.

  After adding these two rules everything works fine (I can access any vm, 
migrated or non-migrated):
  $ nova secgroup-add-rule default  icmp -1 -1 0.0.0.0/0
  $ nova secgroup-add-rule default  tcp 22 22 0.0.0.0/0

  After removing them again, I cannot access the non-migrated VM's
  (correct) but I can still access those that they were migrated once.

  Even when I migrate them back to the hypervisor originally booted on,
  the secgroups still do not apply and I can access those VM's.

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1202266/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1218878] Re: GroupAffinityFilter and GroupAntiAffinityFilter filters are broken

2014-03-20 Thread Alan Pevec
** Changed in: nova/grizzly
   Status: Fix Committed => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1218878

Title:
  GroupAffinityFilter and GroupAntiAffinityFilter filters are broken

Status in OpenStack Compute (Nova):
  Fix Released
Status in OpenStack Compute (nova) grizzly series:
  Fix Released

Bug description:
  My test environment has 2 compute nodes: compute1 and compute3. First, I 
launch 1 instance (not being tied to any group) on each node:
  $ nova boot --flavor m1.tiny --image cirros-0.3.1-x86_64-uec --key-name local 
--availability-zone nova:compute1 vm-compute1-nogroup
  $ nova boot --flavor m1.tiny --image cirros-0.3.1-x86_64-uec --key-name local 
--availability-zone nova:compute3 vm-compute3-nogroup

  So far so good, everything's active:
  $ nova list
  
+--+-+++-+--+
  | ID   | Name| Status | Task 
State | Power State | Networks |
  
+--+-+++-+--+
  | 3a465024-85e7-4e80-99a9-ccef3a4f41d5 | vm-compute1-nogroup | ACTIVE | None  
 | Running | private=10.0.0.3 |
  | c838e0c4-3b4f-4030-b2a2-b21305c0f3ea | vm-compute3-nogroup | ACTIVE | None  
 | Running | private=10.0.0.4 |
  
+--+-+++-+--+

  Then I try to launch one instance in group 'foo' but it fails:
  $ nova boot --flavor m1.tiny --image cirros-0.3.1-x86_64-uec --key-name local 
--hint group=foo --availability-zone nova:compute3 vm1-foo
  $ nova list
  
+--+-+++-+--+
  | ID   | Name| Status | Task 
State | Power State | Networks |
  
+--+-+++-+--+
  | 3a465024-85e7-4e80-99a9-ccef3a4f41d5 | vm-compute1-nogroup | ACTIVE | None  
 | Running | private=10.0.0.3 |
  | c838e0c4-3b4f-4030-b2a2-b21305c0f3ea | vm-compute3-nogroup | ACTIVE | None  
 | Running | private=10.0.0.4 |
  | 743fa564-f38f-4f44-9913-d8adcae955a0 | vm1-foo | ERROR  | None  
 | NOSTATE |  |
  
+--+-+++-+--+

  I've pasted the scheduler logs [1] and my nova.conf file [2]. As you
  will see, the log message is there but it looks like group_hosts() [3]
  is returning all my hosts instead of only the ones that run instances
  from the group.

  [1] http://paste.openstack.org/show/45672/
  [2] http://paste.openstack.org/show/45671/
  [3] 
https://github.com/openstack/nova/blob/60a91f475a352e5e86bbd07b510cb32874110fef/nova/scheduler/driver.py#L137
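
  A hedged sketch of what group_hosts() needs to return for these
  filters to work (the query shape and filter key are assumptions):
  only hosts currently running non-deleted instances of the given
  group, never the whole host list.

    from nova import db

    def group_hosts(context, group):
        instances = db.instance_get_all_by_filters(
            context, {'metadata': {'group': group}, 'deleted': False})
        return set(inst['host'] for inst in instances if inst['host'])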

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1218878/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1207064] Re: VMWare : Disabling linked clone does not cache images on the datastore

2014-03-20 Thread Alan Pevec
** Changed in: nova/grizzly
   Status: Fix Committed => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1207064

Title:
  VMWare : Disabling linked clone does not cache images on the datastore

Status in OpenStack Compute (Nova):
  Fix Released
Status in OpenStack Compute (nova) grizzly series:
  Fix Released
Status in The OpenStack VMwareAPI subTeam:
  Fix Released

Bug description:
  Comment from @dims on code review -
  https://review.openstack.org/#/c/37819/

  Shawn, i was going through the code again for this change and
  realized that when *not* using linked clone we don't use the
  vmware_base dir at all, which means we do a fresh transfer of the
  image from glance to the hyper every time. Would it be better to still
  have the vmdk in vmware_base and do a CopyDatastoreFile_Task or
  CopyVirtualDisk_Task to copy it over into the new directory from
  vmware_base for each new guest? In other words, we always cache
  vmdk(s) in vmware_base to skip the network transfer (and hence take
  the network transfer hit just once, the first time an image is needed
  on a hyper)

  Response from Shawn:
  I was dealing with a test related issue. That I finally resolved by 
re-introducing some bad code, comments are in line with the code. Can you see 
clear to view that change as out of scope of this patch? That feels like a 
separate issue from introducing new controls. I would prefer to open a separate 
bug for this since the feature to turn *off* linked-clone strategy was already 
present (as a global setting).

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1207064/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1179370] Re: Testcase 'test_db_plugin.TestPortsV2.test_range_allocation' failed random

2014-03-20 Thread Alan Pevec
** Changed in: neutron/grizzly
   Status: Fix Committed => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1179370

Title:
  Testcase 'test_db_plugin.TestPortsV2.test_range_allocation' failed
  random

Status in OpenStack Neutron (virtual network service):
  Fix Released
Status in neutron grizzly series:
  Fix Released

Bug description:
  2013-05-13 12:04:53,577ERROR [quantum.api.v2.resource] delete failed
  Traceback (most recent call last):
File /home/soulxu/work-code/openstack/quantum/quantum/api/v2/resource.py, 
line 82, in resource
  result = method(request=request, **args)
File /home/soulxu/work-code/openstack/quantum/quantum/api/v2/base.py, 
line 407, in delete
  obj_deleter(request.context, id, **kwargs)
File 
/home/soulxu/work-code/openstack/quantum/quantum/db/db_base_plugin_v2.py, 
line 1263, in delete_subnet
  raise q_exc.SubnetInUse(subnet_id=id)
  SubnetInUse: Unable to complete operation on subnet 
3f0b8b59-1084-4e59-bf0c-56a0eefb24d6. One or more ports have an IP allocation 
from this subnet.
  2013-05-13 12:04:53,587ERROR [quantum.api.v2.resource] delete failed
  Traceback (most recent call last):
File /home/soulxu/work-code/openstack/quantum/quantum/api/v2/resource.py, 
line 82, in resource
  result = method(request=request, **args)
File /home/soulxu/work-code/openstack/quantum/quantum/api/v2/base.py, 
line 407, in delete
  obj_deleter(request.context, id, **kwargs)
File 
/home/soulxu/work-code/openstack/quantum/quantum/db/db_base_plugin_v2.py, 
line 1021, in delete_network
  raise q_exc.NetworkInUse(net_id=id)
  NetworkInUse: Unable to complete operation on network 
276b6fb1-158f-4b30-ac19-6e959e9147d1. There are one or more ports still in use 
on the network.
  }}}

  Traceback (most recent call last):
File 
/home/soulxu/work-code/openstack/quantum/quantum/tests/unit/test_db_plugin.py,
 line 1407, in test_range_allocation
  print res
File /usr/lib/python2.7/contextlib.py, line 35, in __exit__
  self.gen.throw(type, value, traceback)
File 
/home/soulxu/work-code/openstack/quantum/quantum/tests/unit/test_db_plugin.py,
 line 566, in subnet
  self._delete('subnets', subnet['subnet']['id'])
File /usr/lib/python2.7/contextlib.py, line 35, in __exit__
  self.gen.throw(type, value, traceback)
File 
/home/soulxu/work-code/openstack/quantum/quantum/tests/unit/test_db_plugin.py,
 line 537, in network
  self._delete('networks', network['network']['id'])
File 
/home/soulxu/work-code/openstack/quantum/quantum/tests/unit/test_db_plugin.py,
 line 455, in _delete
  self.assertEqual(res.status_int, expected_code)
  MismatchError: 409 != 204

  
  --
  Ran 1 test in 0.220s

  FAILED (failures=1)

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1179370/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1171284] Re: A network can't be disassociated from a project

2014-03-20 Thread Alan Pevec
** Changed in: nova/grizzly
   Status: Fix Committed = Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1171284

Title:
  A network can't be disassociated from a project

Status in OpenStack Compute (Nova):
  Fix Released
Status in OpenStack Compute (nova) grizzly series:
  Fix Released

Bug description:
  In vlan mode when I tried to disassociate a network from a project
  with  nova network-disassociate, I got the error as follows:

  $ nova network-disassociate f40cf324-15ee-42be-8e1d-b590675aafcc
  ERROR: The server has either erred or is incapable of performing the 
requested operation. (HTTP 500) (Request-ID: 
req-cd1c726b-d06e-49d8-a948-24e2e453439a)

  2013-04-22 02:37:53ERROR [nova.api.openstack] Caught error: 'project'
  Traceback (most recent call last):
File /opt/stack/nova/nova/api/openstack/__init__.py, line 81, in __call__
  return req.get_response(self.application)
File /usr/local/lib/python2.7/dist-packages/webob/request.py, line 1296, 
in send
  application, catch_exc_info=False)
File /usr/local/lib/python2.7/dist-packages/webob/request.py, line 1260, 
in call_application
  app_iter = application(self.environ, start_response)
File /usr/local/lib/python2.7/dist-packages/webob/dec.py, line 144, in 
__call__
  return resp(environ, start_response)
File 
/opt/stack/python-keystoneclient/keystoneclient/middleware/auth_token.py, 
line 451, in __call__
  return self.app(env, start_response)
File /usr/local/lib/python2.7/dist-packages/webob/dec.py, line 144, in 
__call__
  return resp(environ, start_response)
File /usr/local/lib/python2.7/dist-packages/webob/dec.py, line 144, in 
__call__
  return resp(environ, start_response)
File /usr/local/lib/python2.7/dist-packages/webob/dec.py, line 144, in 
__call__
  return resp(environ, start_response)
File /usr/local/lib/python2.7/dist-packages/routes/middleware.py, line 
131, in __call__
  response = self.app(environ, start_response)
File /usr/local/lib/python2.7/dist-packages/webob/dec.py, line 144, in 
__call__
  return resp(environ, start_response)
File /usr/local/lib/python2.7/dist-packages/webob/dec.py, line 130, in 
__call__
  resp = self.call_func(req, *args, **self.kwargs)
File /usr/local/lib/python2.7/dist-packages/webob/dec.py, line 195, in 
call_func
  return self.func(req, *args, **kwargs)
File /opt/stack/nova/nova/api/openstack/wsgi.py, line 899, in __call__
  content_type, body, accept)
File /opt/stack/nova/nova/api/openstack/wsgi.py, line 951, in 
_process_stack
  action_result = self.dispatch(meth, request, action_args)
File /opt/stack/nova/nova/api/openstack/wsgi.py, line 1030, in dispatch
  return method(req=request, **action_args)
File /opt/stack/nova/nova/api/openstack/compute/contrib/os_networks.py, 
line 77, in _disassociate_host_and_project
  self.network_api.associate(context, id, host=None, project=None)
File /opt/stack/nova/nova/network/api.py, line 90, in wrapped
  return func(self, context, *args, **kwargs)
File /opt/stack/nova/nova/network/api.py, line 366, in associate
  project = associations['project']
  KeyError: 'project'
  2013-04-22 02:37:53 INFO [nova.api.openstack] 
http://192.168.1.100:8774/v2/c619271b17564eed8fbb17570492d2d3/os-networks/f40cf324-15ee-42be-8e1d-b590675aafcc/action
 returned with HTTP 500
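
  A minimal sketch of the guard the traceback suggests (illustrative, not
  the merged fix) - read optional association keys with dict.get() so a
  host-only update can't turn into an HTTP 500:

    # Hedged sketch: 'project' is an optional key, so don't index it
    # unconditionally; .get() returns None when the key is absent.
    def read_associations(associations):
        host = associations.get('host')        # None when absent
        project = associations.get('project')  # None when absent
        return host, project

    print(read_associations({'host': None}))  # no KeyError raised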

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1171284/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1179348] Re: Testcase 'test_create_subnet_with_two_host_routes' failed

2014-03-20 Thread Alan Pevec
** Changed in: neutron/grizzly
   Status: Fix Committed => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1179348

Title:
  Testcase 'test_create_subnet_with_two_host_routes' failed

Status in OpenStack Neutron (virtual network service):
  Fix Released
Status in neutron grizzly series:
  Fix Released

Bug description:
  Traceback (most recent call last):
File 
/home/soulxu/work-code/openstack/quantum/quantum/tests/unit/test_db_plugin.py,
 line 3286, in test_create_subnet_with_two_host_routes
  host_routes=host_routes)
File 
/home/soulxu/work-code/openstack/quantum/quantum/tests/unit/test_db_plugin.py,
 line 2301, in _test_create_subnet
  self.assertEqual(subnet['subnet'][k], keys[k])
  MismatchError: !=:
  reference = [{'destination': '12.0.0.0/8', 'nexthop': '4.3.2.1'},
   {'destination': '135.207.0.0/16', 'nexthop': '1.2.3.4'}]
  actual= [{'destination': '135.207.0.0/16', 'nexthop': '1.2.3.4'},
   {'destination': '12.0.0.0/8', 'nexthop': '4.3.2.1'}]
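
  A hedged sketch of the order-insensitive comparison this failure calls
  for (helper name illustrative) - host routes are a set-like collection,
  so compare them sorted rather than positionally:

    # Sort both route lists on a stable key before comparing, so the
    # backend's ordering can't make the assertion flap.
    def assert_same_routes(expected, actual):
        key = lambda r: (r['destination'], r['nexthop'])
        assert sorted(expected, key=key) == sorted(actual, key=key)

    assert_same_routes(
        [{'destination': '12.0.0.0/8', 'nexthop': '4.3.2.1'},
         {'destination': '135.207.0.0/16', 'nexthop': '1.2.3.4'}],
        [{'destination': '135.207.0.0/16', 'nexthop': '1.2.3.4'},
         {'destination': '12.0.0.0/8', 'nexthop': '4.3.2.1'}])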

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1179348/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1171226] Re: VMwareVCDriver: Sparse disk copy error on spawn

2014-03-20 Thread Alan Pevec
** Changed in: nova/grizzly
   Status: Fix Committed => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1171226

Title:
  VMwareVCDriver: Sparse disk copy error on spawn

Status in OpenStack Compute (Nova):
  Fix Released
Status in OpenStack Compute (nova) grizzly series:
  Fix Released
Status in The OpenStack VMwareAPI subTeam:
  Fix Committed

Bug description:
  Not sure if this is a real bug, or just a case of inadequate
  documentation combined with bad error reporting.  I get an exception
  (below) when booting a VM.  The exception happens after glance is done
  streaming the disk image to VC (i.e., I see the image in the
  vmware_source folder in the DataSource) and it prevents the VM from
  actually booting.

  I tried two different ways of adding the image to glance (both as
  'ovf' and as 'bare') neither of which seemed to make a difference:

  glance add name=Ubuntu-ovf disk_format=vmdk container_format=ovf
  is_public=true vmware_adaptertype=lsiLogic
  vmware_ostype=ubuntuGuest vmware_disktype=sparse 
  ~/ubuntu12.04-sparse.vmdk

  glance add name=Ubuntu-bare disk_format=vmdk container_format=bare
  is_public=true vmware_adaptertype=lsiLogic
  vmware_ostype=ubuntuGuest vmware_disktype=sparse 
  ~/ubuntu12.04-sparse.vmdk

  In both cases, I see this exception (note: there actually seems to be
  a second exception too, perhaps due to improper error handling of the
  first):

  2013-04-21 11:35:07ERROR [nova.compute.manager] Error: ['Traceback (most 
recent call last):\n', '  File /opt/stack/nova/nova/
  compute/manager.py, line 905, in _run_instance\n
set_access_ip=set_access_ip)\n', '  File /opt/stack/nova/nova/compute/manage
  r.py, line 1165, in _spawn\nLOG.exception(_(\'Instance failed to 
spawn\'), instance=instance)\n', '  File /usr/lib/python2.7
  /contextlib.py, line 24, in __exit__\nself.gen.next()\n', '  File 
/opt/stack/nova/nova/compute/manager.py, line 1161, in _s
  pawn\nblock_device_info)\n', '  File 
/opt/stack/nova/nova/virt/vmwareapi/driver.py, line 176, in spawn\n
block_device_inf
  o)\n', '  File /opt/stack/nova/nova/virt/vmwareapi/vmops.py, line 398, in 
spawn\n_copy_virtual_disk()\n', '  File /opt/stac
  k/nova/nova/virt/vmwareapi/vmops.py, line 340, in _copy_virtual_disk\n
self._session._wait_for_task(instance[\'uuid\'], vmdk_c
  opy_task)\n', '  File /opt/stack/nova/nova/virt/vmwareapi/driver.py, line 
558, in _wait_for_task\nret_val = done.wait()\n', 
  '  File /usr/local/lib/python2.7/dist-packages/eventlet/event.py, line 116, 
in wait\nreturn hubs.get_hub().switch()\n', '  F
  ile /usr/local/lib/python2.7/dist-packages/eventlet/hubs/hub.py, line 187, 
in switch\nreturn self.greenlet.switch()\n', 'Nov
  aException: The requested operation is not implemented by the server.\n']
  2013-04-21 11:35:07DEBUG [nova.openstack.common.rpc.amqp] Making 
synchronous call on conductor ...
  2013-04-21 11:35:07DEBUG [nova.openstack.common.rpc.amqp] MSG_ID is 
2318255c5a4f4e5783cefb3cfde9e563
  2013-04-21 11:35:07DEBUG [nova.openstack.common.rpc.amqp] UNIQUE_ID is 
f710f7acfd774af3ba1aa91515b1fd05.
  2013-04-21 11:35:10  WARNING [nova.virt.vmwareapi.driver] Task 
[CopyVirtualDisk_Task] (returnval){
 value = task-925
 _type = Task
   } status: error The requested operation is not implemented by the server.
  2013-04-21 11:35:10  WARNING [nova.virt.vmwareapi.driver] In 
vmwareapi:_poll_task, Got this error Trying to re-send() an already-triggered 
event.
  2013-04-21 11:35:10ERROR [nova.utils] in fixed duration looping call
  Traceback (most recent call last):
File /opt/stack/nova/nova/utils.py, line 595, in _inner
  self.f(*self.args, **self.kw)
File /opt/stack/nova/nova/virt/vmwareapi/driver.py, line 584, in 
_poll_task
  done.send_exception(excep)
File /usr/local/lib/python2.7/dist-packages/eventlet/event.py, line 208, 
in send_exception
  return self.send(None, args)
File /usr/local/lib/python2.7/dist-packages/eventlet/event.py, line 150, 
in send
  assert self._result is NOT_USED, 'Trying to re-send() an 
already-triggered event.'
  AssertionError: Trying to re-send() an already-triggered event.

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1171226/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1158807] Re: Qpid SSL protocol

2014-03-20 Thread Alan Pevec
** Changed in: nova/grizzly
   Status: Fix Committed => Fix Released

** Changed in: neutron/grizzly
   Status: Fix Committed => Fix Released

** Changed in: cinder/grizzly
   Status: Fix Committed => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1158807

Title:
  Qpid SSL protocol

Status in Cinder:
  Invalid
Status in Cinder grizzly series:
  Fix Released
Status in OpenStack Neutron (virtual network service):
  Invalid
Status in neutron grizzly series:
  Fix Released
Status in OpenStack Compute (Nova):
  Invalid
Status in OpenStack Compute (nova) grizzly series:
  Fix Released
Status in Oslo - a Library of Common OpenStack Code:
  Fix Released

Bug description:
  By default, TCP is used as the transport for QPID connections. If you
  would like to enable SSL, there is a flag 'qpid_protocol = ssl' available
  in nova.conf. However, the python-qpid client expects a transport type
  instead of a protocol. It seems to be a bug:

  Solution:
  
(https://github.com/openstack/nova/blob/master/nova/openstack/common/rpc/impl_qpid.py#L323)

  WRONG:    self.connection.protocol = self.conf.qpid_protocol
  CORRECT:  self.connection.transport = self.conf.qpid_protocol
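
  A hedged sketch of the change in context (attribute names as exposed by
  python-qpid's qpid.messaging.Connection; the broker address is
  illustrative):

    from qpid.messaging import Connection

    # python-qpid reads .transport ('tcp' or 'ssl'), not .protocol,
    # so the configured value must be assigned to .transport.
    conn = Connection('broker.example.com:5671')
    conn.transport = 'ssl'   # correct: the connection is made over SSL
    # conn.protocol = 'ssl'  # wrong: not what the client reads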

  Regards,
  JuanFra.

To manage notifications about this bug go to:
https://bugs.launchpad.net/cinder/+bug/1158807/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1160309] Re: Nova API floating IP error code inconsistent between Nova-Net and Quantum

2014-03-20 Thread Alan Pevec
** Changed in: nova/grizzly
   Status: Fix Committed => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1160309

Title:
  Nova API floating IP error code inconsistent between Nova-Net and
  Quantum

Status in OpenStack Compute (Nova):
  Fix Released
Status in OpenStack Compute (nova) grizzly series:
  Fix Released
Status in Tempest:
  Fix Released

Bug description:
  If you ask for details of a floating IP address (GET .../os-
  floating-ips/<id>) that is not allocated to you, then on a system
  with Nova-networking the error code is 404 itemNotFound, whereas on a
  system with Quantum the error code is 500 computeFault.

  
  The Nova Floating IP API code (api/openstack/compute/contrib/floating_ips.py)
 traps the NotFound exception raised by Nova-Net, but the quantum networking
code raises a QuantumClientException.

  Not clear to me if the network/quantumv2/api code can just trap that
  exception in this case and translate it to NotFound, or if we need a
  separate exception from the quantum client.
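
  A minimal sketch of that translation (exception class path and helper
  names assumed for illustration, not the merged patch):

    from quantumclient.common import exceptions as qexc  # assumed path
    from nova import exception

    def get_floating_ip(self, context, id):
        try:
            return self._get_floating_ip_from_quantum(context, id)
        except qexc.QuantumClientException as e:
            # Map the client's 404 onto nova's NotFound so the API
            # layer answers itemNotFound instead of computeFault/500.
            if getattr(e, 'status_code', None) == 404:
                raise exception.FloatingIpNotFound(id=id)
            raise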


  Devstack with Nova-Net:
  

  $ curl -k -i 
http://10.2.1.79:8774/v2/7ac11f64dbf84c548f4161cf408b9799/os-floating-ips/1 -X 
GET -H X-Auth-Project-Id: demo -H User-Agent: python-novaclient -H Accept: 
application/json -H X-Auth-Token: 
  HTTP/1.1 200 OK
  X-Compute-Request-Id: req-c16fdbbe-dcda-4c3b-be46-d70a4fdade5d
  Content-Type: application/json
  Content-Length: 103
  Date: Tue, 26 Mar 2013 10:45:04 GMT

  {floating_ip: {instance_id: null, ip: 172.24.4.225,
  fixed_ip: null, id: 1, pool: nova}}

  $ nova floating-ip-delete 172.24.4.225

  $ curl -k -i 
http://10.2.1.79:8774/v2/7ac11f64dbf84c548f4161cf408b9799/os-floating-ips/1 -X 
GET -H X-Auth-Project-Id: demo -H User-Agent: python-novaclient -H Accept: 
application/json -H X-Auth-Token: ...

  HTTP/1.1 404 Not Found
  Content-Length: 76
  Content-Type: application/json; charset=UTF-8
  X-Compute-Request-Id: req-61125f73-8989-4f00-9799-2d22e0ec4d51
  Date: Tue, 26 Mar 2013 10:45:28 GMT

  {itemNotFound: {message: Floating ip not found for id 1, code:
  404}}


  DevStack with Quantum:
  
  $ curl -k -i 
http://10.2.2.114:8774/v2/18b18e535c6149b0bf71a42b46f2ab39/os-floating-ips/c7a3a81e-28c8-4b15-94f4-6ca55e9c437b
 -X GET -H X-Auth-Project-Id: demo -H User-Agent: python-novaclient -H 
Accept: application/json -H X-Auth-Token: ...

  HTTP/1.1 200 OK
  X-Compute-Request-Id: req-77b52904-6cd9-402d-93a2-124cfdcc86b2
  Content-Type: application/json
  Content-Length: 180
  Date: Tue, 26 Mar 2013 10:36:16 GMT

  {floating_ip: {instance_id: 09ffe9c9-0138-4f2f-b11b-
  c92e8d099b63, ip: 172.24.4.227, fixed_ip: 10.0.0.5, id:
  c7a3a81e-28c8-4b15-94f4-6ca55e9c437b, pool: nova}}

  
  $ nova floating-ip-delete 172.24.4.227

  $ curl -k -i
  http://10.2.2.114:8774/v2/18b18e535c6149b0bf71a42b46f2ab39/os-
  floating-ips/c7a3a81e-28c8-4b15-94f4-6ca55e9c437b -X GET -H X-Auth-
  Project-Id: demo -H User-Agent: python-novaclient -H Accept:
  application/json -H X-Auth-Token: ...

  HTTP/1.1 500 Internal Server Error
  Content-Length: 128
  Content-Type: application/json; charset=UTF-8
  X-Compute-Request-Id: req-720eb948-ae3a-4837-ab95-958d70132aa5
  Date: Tue, 26 Mar 2013 10:39:09 GMT

  {computeFault: {message: The server has either erred or is
  incapable of performing the requested operation., code: 500}}


  From the API log:
  2013-03-25 19:11:00.377 DEBUG nova.api.openstack.wsgi 
[req-eda934a2-549d-4954-99b9-9dac74df01db 64090786631639 40099433467163] 
Calling method bound method FloatingIPController.show of 
nova.api.openstack.compute.contrib.floating_ips.FloatingIPController object at 
0x45c4ed0 _process_stack 
/usr/lib/python2.7/dist-packages/nova/api/openstack/wsgi.py:924
  2013-03-25 19:11:00.510 ERROR nova.api.openstack 
[req-eda934a2-549d-4954-99b9-9dac74df01db 64090786631639 40099433467163] Caught 
error: Floating IP 8e9a5dfb-90f5-4fce-a82b-d814fe461d7b could not be found
  2013-03-25 19:11:00.510 65276 TRACE nova.api.openstack Traceback (most recent 
call last):
  2013-03-25 19:11:00.510 65276 TRACE nova.api.openstack   File 
/usr/lib/python2.7/dist-packages/nova/api/openstack/__init__.py, line 81, in 
__call__
  2013-03-25 19:11:00.510 65276 TRACE nova.api.openstack return 
req.get_response(self.application)
  2013-03-25 19:11:00.510 65276 TRACE nova.api.openstack   File 
/usr/lib/python2.7/dist-packages/webob/request.py, line 1296, in send
  2013-03-25 19:11:00.510 65276 TRACE nova.api.openstack application, 
catch_exc_info=False)
  2013-03-25 19:11:00.510 65276 TRACE nova.api.openstack   File 
/usr/lib/python2.7/dist-packages/webob/request.py, line 1260, in 
call_application
  2013-03-25 19:11:00.510 65276 TRACE nova.api.openstack app_iter = 
application(self.environ, start_response)
  2013-03-25 

[Yahoo-eng-team] [Bug 1073306] Re: [OSSA 2013-030] xenapi migrations don't apply security group filters (CVE-2013-4497)

2014-03-20 Thread Alan Pevec
** Changed in: nova/grizzly
   Status: Fix Committed => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1073306

Title:
  [OSSA 2013-030] xenapi migrations don't apply security group filters
  (CVE-2013-4497)

Status in OpenStack Compute (Nova):
  Fix Released
Status in OpenStack Compute (nova) grizzly series:
  Fix Released
Status in OpenStack Security Advisories:
  Fix Released

Bug description:
  xenapi's finish_migration() is missing code to apply security group
  rules, etc.  There's code in spawn() that it appears we also need to
  use in finish_migration().
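
  A hedged sketch of what that could look like (method names follow
  nova's FirewallDriver interface; the placement inside
  finish_migration() is illustrative, not the merged patch):

    def finish_migration(self, context, migration, instance, disk_info,
                         network_info, image_meta, resize_instance):
        # ... existing migration plumbing ...
        # Mirror the filtering calls spawn() already makes, so a
        # migrated instance is not left without its security groups.
        self.firewall_driver.setup_basic_filtering(instance, network_info)
        self.firewall_driver.prepare_instance_filter(instance, network_info)
        # ... power the instance back on ...
        self.firewall_driver.apply_instance_filter(instance, network_info)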

  (Somewhat related, see: https://bugs.launchpad.net/bugs/1073303)

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1073306/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1138408] Re: delete_tap_interface method is needed

2014-03-20 Thread Alan Pevec
** Changed in: nova/grizzly
   Status: Fix Committed => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1138408

Title:
  delete_tap_interface method is needed

Status in OpenStack Compute (Nova):
  Fix Released
Status in OpenStack Compute (nova) grizzly series:
  Fix Released

Bug description:
  In nova using libvirt, there is a method to create tap interfaces under 
linux_net.py, but there is not one for removing them.
  Usually, nova plugins use either delete_ovs_vif_port, which invokes 
OVS-specific commands and will not work in an OVS-free environment, or 
QuantumLinuxBridgeInterfaceDriver::unplug, but that one is tied to the linux 
bridge. So there is no native call for removing a tap interface based on the 
dev name.
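
  A minimal sketch of the missing helper (the name and style follow
  create_tap_dev in linux_net.py; illustrative, not the merged
  implementation):

    from nova import utils

    def delete_tap_dev(dev):
        # 'ip link delete' removes the tap device by its dev name, with
        # no OVS or linux bridge involvement.
        utils.execute('ip', 'link', 'delete', dev, run_as_root=True)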

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1138408/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1200231] Re: Nova test suite breakage.

2014-03-20 Thread Alan Pevec
** Changed in: nova/grizzly
   Status: Fix Committed => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1200231

Title:
  Nova test suite breakage.

Status in OpenStack Compute (Nova):
  Fix Released
Status in OpenStack Compute (nova) grizzly series:
  Fix Released

Bug description:
  FAIL: nova.tests.test_quota.QuotaIntegrationTestCase.test_too_many_addresses
  tags: worker-5
  --
  Empty attachments:
stderr
stdout

  pythonlogging:'': {{{
  Loading network driver 'nova.network.linux_net'
  Starting network node (version 2013.2)
  Quota exceeded for admin, tried to allocate floating IP
  }}}

  Traceback (most recent call last):
File /tmp/buildd/nova-2013.2.a1884.gb14f9cd/nova/tests/test_quota.py, 
line 130, in test_too_many_addresses
  db.floating_ip_destroy(context.get_admin_context(), address)
File /tmp/buildd/nova-2013.2.a1884.gb14f9cd/nova/db/api.py, line 288, in 
floating_ip_destroy
  return IMPL.floating_ip_destroy(context, address)
File /tmp/buildd/nova-2013.2.a1884.gb14f9cd/nova/db/sqlalchemy/api.py, 
line 120, in wrapper
  return f(*args, **kwargs)
File /tmp/buildd/nova-2013.2.a1884.gb14f9cd/nova/db/sqlalchemy/api.py, 
line 790, in floating_ip_destroy
  filter_by(address=address).\
File /usr/lib/python2.7/dist-packages/sqlalchemy/orm/query.py, line 1245, 
in filter_by
  for key, value in kwargs.iteritems()]
File /usr/lib/python2.7/dist-packages/sqlalchemy/sql/operators.py, line 
278, in __eq__
  return self.operate(eq, other)
File /usr/lib/python2.7/dist-packages/sqlalchemy/orm/attributes.py, line 
252, in operate
  return op(self.comparator, *other, **kwargs)
File /usr/lib/python2.7/dist-packages/sqlalchemy/sql/operators.py, line 
278, in __eq__
  return self.operate(eq, other)
File /usr/lib/python2.7/dist-packages/sqlalchemy/orm/properties.py, line 
212, in operate
  return op(self.__clause_element__(), *other, **kwargs)
File /usr/lib/python2.7/dist-packages/sqlalchemy/sql/util.py, line 490, 
in __eq__
  return self.__element.__class__.__eq__(self, other)
File /usr/lib/python2.7/dist-packages/sqlalchemy/sql/operators.py, line 
278, in __eq__
  return self.operate(eq, other)
File /usr/lib/python2.7/dist-packages/sqlalchemy/sql/expression.py, line 
2300, in operate
  return op(self.comparator, *other, **kwargs)
File /usr/lib/python2.7/dist-packages/sqlalchemy/util/langhelpers.py, 
line 612, in __get__
  obj.__dict__[self.__name__] = result = self.fget(obj)
File /usr/lib/python2.7/dist-packages/sqlalchemy/sql/expression.py, line 
2286, in comparator
  return self.type.comparator_factory(self)
File /usr/lib/python2.7/dist-packages/sqlalchemy/types.py, line 629, in 
comparator_factory
  {})
  TypeError: Cannot create a consistent method resolution
  order (MRO) for bases TDComparator, Comparator

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1200231/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1270654] Re: test_different_fname_concurrency flakey fail

2014-03-20 Thread Alan Pevec
** Changed in: nova/grizzly
   Status: Fix Committed => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1270654

Title:
  test_different_fname_concurrency flakey fail

Status in OpenStack Compute (Nova):
  Fix Released
Status in OpenStack Compute (nova) grizzly series:
  Fix Released
Status in OpenStack Compute (nova) havana series:
  Fix Released

Bug description:
  Looks like test_different_fname_concurrency has an intermittent failure

  ft1.9289: 
nova.tests.virt.libvirt.test_libvirt.CacheConcurrencyTestCase.test_different_fname_concurrency_StringException:
 Empty attachments:
pythonlogging:''
stderr
stdout

  Traceback (most recent call last):
File nova/tests/virt/libvirt/test_libvirt.py, line 319, in 
test_different_fname_concurrency
  self.assertTrue(done2.ready())
File /usr/lib/python2.7/unittest/case.py, line 420, in assertTrue
  raise self.failureException(msg)
  AssertionError: False is not true

  Full logs here: http://logs.openstack.org/91/58191/4/check/gate-nova-
  python27/413d398/testr_results.html.gz

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1270654/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1294132] Re: Volume status set to error extending when new size exceeds quota

2014-03-20 Thread Santiago Baldassin
John, your analysis is correct; however, I don't think this is the
appropriate behavior. After what you described, the volume remains
unusable and there's no option, either from the command line or from
horizon, to fix it. The user can only delete the volume and create it
again. If I'm trying to extend the volume and for some reason I can't,
I expect the system to tell me why, and I also expect to be able to
continue using the volume.

** Summary changed:

- Volume status set to error extending when new size exceeds quota 
+ Volume status set to error extending when driver fails to extend the volume

** Description changed:

- extend_volume in cinder.volume.manager should not set the status to
- error_extending when the quota was exceeded. The status should still be
- available
+ If the driver can't extend the volume because, for example, there's not
+ enough space, the volume status is set to error_extending and the
+ volume becomes unusable. The only option left to the user is to delete
+ the volume and create it again.

** Changed in: cinder
   Status: Invalid => New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1294132

Title:
  Volume status set to error extending when driver fails to extend the
  volume

Status in Cinder:
  New
Status in OpenStack Dashboard (Horizon):
  In Progress

Bug description:
  If the driver can't extend the volume because, for example, there's not
  enough space, the volume status is set to error_extending and the
  volume becomes unusable. The only option left to the user is to delete
  the volume and create it again.

To manage notifications about this bug go to:
https://bugs.launchpad.net/cinder/+bug/1294132/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1295424] [NEW] lbaas security group

2014-03-20 Thread Kevin Fox
Public bug reported:

There seems to be no way of specifying which security group an lbaas vip
gets. It looks to default to 'default' in Havana. When you place a load
balancer on a backend private neutron network, it gets the security
group member rules from 'default', which are for the wrong subnet.

Manually drilling down to find the vip's neutron port id, and then
fixing the security_group on that port, does seem to work.

There needs to be a way to specify the security groups when you create
the vip.
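
A hedged sketch of that manual workaround via python-neutronclient (all
IDs and credentials illustrative):

  from neutronclient.v2_0 import client

  neutron = client.Client(username='admin', password='secret',
                          tenant_name='admin',
                          auth_url='http://keystone:5000/v2.0')
  vip_port_id = '...'  # the neutron port behind the lbaas vip
  backend_sg = '...'   # security group matching the backend subnet
  # Point the vip port at the right security group instead of 'default'.
  neutron.update_port(vip_port_id,
                      {'port': {'security_groups': [backend_sg]}})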

** Affects: neutron
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1295424

Title:
  lbaas security group

Status in OpenStack Neutron (virtual network service):
  New

Bug description:
  There seems to be no way of specifying which security group an lbaas
  vip gets. It looks to default to 'default' in Havana. When you place a
  load balancer on a backend private neutron network, it gets the
  security group member rules from 'default', which are for the wrong
  subnet.

  Manually drilling down to find the vip's neutron port id, and then
  fixing the security_group on that port, does seem to work.

  There needs to be a way to specify the security groups when you create
  the vip.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1295424/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1295426] [NEW] get console output v3 API should allow null as the length

2014-03-20 Thread Ken'ichi Ohmichi
Public bug reported:

If running nova console-log command against v3 API, the command fails
like the following:

$ nova --os-compute-api-version 3 console-log vm01
ERROR: Invalid input for field/attribute length. Value: None. None is not of 
type 'integer', 'string' (HTTP 400) (Request-ID: 
req-b8588c9b-58a7-4e22-a2e9-30c5354ae4f7)
$

This is because the API schema does not allow null as the length of the log.
However, get_console_output() of nova-compute allows null by the following code:

3942 def get_console_output(self, context, instance, tail_length):
3943 """Send the console output for the given instance."""
3944 instance = instance_obj.Instance._from_db_object(
3945 context, instance_obj.Instance(), instance)
3946 context = context.elevated()
3947 LOG.audit(_(Get console output), context=context,
3948   instance=instance)
3949 output = self.driver.get_console_output(context, instance)
3950
3951 if tail_length is not None:
3952 output = self._tail_log(output, tail_length)
3953
3954 return output.decode('utf-8', 'replace').encode('ascii', 'replace')

So the API also should allow it.
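
A hedged sketch of the schema change that implies (the layout of the v3
input schema is assumed for illustration): add 'null' to the accepted
types for 'length'.

  get_console_output = {
      'type': 'object',
      'properties': {
          'get_console_output': {
              'type': 'object',
              'properties': {
                  # Allow null so "full output" can be requested
                  # explicitly, matching nova-compute's behavior.
                  'length': {'type': ['integer', 'string', 'null']},
              },
              'additionalProperties': False,
          },
      },
      'required': ['get_console_output'],
      'additionalProperties': False,
  }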

** Affects: nova
 Importance: Undecided
 Assignee: Ken'ichi Ohmichi (oomichi)
 Status: New

** Changed in: nova
 Assignee: (unassigned) => Ken'ichi Ohmichi (oomichi)

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1295426

Title:
  get console output v3 API should allow null as the length

Status in OpenStack Compute (Nova):
  New

Bug description:
  If running nova console-log command against v3 API, the command
  fails like the following:

  $ nova --os-compute-api-version 3 console-log vm01
  ERROR: Invalid input for field/attribute length. Value: None. None is not of 
type 'integer', 'string' (HTTP 400) (Request-ID: 
req-b8588c9b-58a7-4e22-a2e9-30c5354ae4f7)
  $

  This is because the API schema does not allow null as the length of
the log.
  However, get_console_output() of nova-compute allows null by the following 
code:

  3942 def get_console_output(self, context, instance, tail_length):
  3943 """Send the console output for the given instance."""
  3944 instance = instance_obj.Instance._from_db_object(
  3945 context, instance_obj.Instance(), instance)
  3946 context = context.elevated()
  3947 LOG.audit(_(Get console output), context=context,
  3948   instance=instance)
  3949 output = self.driver.get_console_output(context, instance)
  3950
  3951 if tail_length is not None:
  3952 output = self._tail_log(output, tail_length)
  3953
  3954 return output.decode('utf-8', 'replace').encode('ascii', 
'replace')

  So the API also should allow it.

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1295426/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1295435] [NEW] lbaas security group

2014-03-20 Thread Kevin Fox
Public bug reported:

Neutron lbaas does not support setting a security group on a load
balancer. This causes the wrong security group's rules to be associated
with a loadbalancer created in a private network. The neutron lbaas vip
create/edit form in Horizon needs to be extended to allow editing the
security groups associated with a load balancer vip once neutron
supports the feature.

** Affects: horizon
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1295435

Title:
  lbaas security group

Status in OpenStack Dashboard (Horizon):
  New

Bug description:
  Neutron lbaas does not support setting a security group on a load
  balancer. This causes the wrong security group's rules to be
  associated with a loadbalancer created in a private network. The
  neutron lbaas vip create/edit form in Horizon needs to be extended to
  allow editing the security groups associated with a load balancer vip
  once neutron supports the feature.

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1295435/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1295438] [NEW] BigSwitch plugin: Unnecessarily deletes ports from backend on network delete.

2014-03-20 Thread Kevin Benton
Public bug reported:

The Big Switch plugin unnecessarily deletes individual ports from the
controller when a network is being deleted. This isn't needed because
the controller will automatically clean up the ports when their parent
network is deleted.

** Affects: neutron
 Importance: Undecided
 Assignee: Kevin Benton (kevinbenton)
 Status: New

** Changed in: neutron
 Assignee: (unassigned) => Kevin Benton (kevinbenton)

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1295438

Title:
  BigSwitch plugin: Unnecessarily deletes ports from backend on network
  delete.

Status in OpenStack Neutron (virtual network service):
  New

Bug description:
  The Big Switch plugin unnecessarily deletes individual ports from the
  controller when a network is being deleted. This isn't needed because
  the controller will automatically clean up the ports when their parent
  network is deleted.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1295438/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1295448] [NEW] Big Switch Restproxy unit test unnecessarily duplicates tests

2014-03-20 Thread Kevin Benton
Public bug reported:

The VIF type tests currently have separate classes that all extend the
ports test class. This means in addition to testing the VIF changing
logic, it's unnecessarily exercising a lot of code that is not impacted
by the VIF type.

** Affects: neutron
 Importance: Undecided
 Assignee: Kevin Benton (kevinbenton)
 Status: New

** Changed in: neutron
 Assignee: (unassigned) => Kevin Benton (kevinbenton)

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1295448

Title:
  Big Switch Restproxy unit test unnecessarily duplicates tests

Status in OpenStack Neutron (virtual network service):
  New

Bug description:
  The VIF type tests currently have separate classes that all extend the
  ports test class. This means in addition to testing the VIF changing
  logic, it's unnecessarily exercising a lot of code that is not
  impacted by the VIF type.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1295448/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1286264] Re: 'ProcessExecutionError' object has no attribute 'stdout'

2014-03-20 Thread Joshua Harlow
** Changed in: anvil
   Status: Fix Committed => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to anvil.
https://bugs.launchpad.net/bugs/1286264

Title:
  'ProcessExecutionError' object has no attribute 'stdout'

Status in ANVIL for forging OpenStack.:
  Fix Released

Bug description:
  Seems to be someone internal running anvil @ y! who hit the
  following error; creating a bug to track it and investigate.

  

  YYOOM ERROR: Building Transaction failed
  YYOOM ERROR: Transaction failed: Transaction failed: 1
  ERROR: @anvil.packaging.helpers.yum_helper : Failed to parse YYOOM output: 
'ProcessExecutionError' object has no attribute 'stdout'
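
  A minimal sketch of a defensive parse (helper name illustrative): the
  exception can apparently be raised before any output is captured, so
  don't assume .stdout exists on it.

    def exc_stdout(exc):
        # getattr() with a default tolerates ProcessExecutionError
        # instances created without a captured stdout.
        return getattr(exc, 'stdout', '') or ''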

To manage notifications about this bug go to:
https://bugs.launchpad.net/anvil/+bug/1286264/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1294132] Re: Volume status set to error extending when driver fails to extend the volume

2014-03-20 Thread Huang Zhiteng
As the volume manager doesn't know what exactly happened during the
extending, and there are actually two kinds of possible failure (a. the
volume was untouched; b. very, very unlikely, the volume was touched or
even corrupted), it's safer to set the volume state to 'error_extending'
instead of resetting it back to 'available'.

Unless Cinder is actually able to confirm volume is in sane state, we
have to keep current behavior.

** Changed in: cinder
   Status: New => Invalid

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1294132

Title:
  Volume status set to error extending when driver fails to extend the
  volume

Status in Cinder:
  Invalid
Status in OpenStack Dashboard (Horizon):
  In Progress

Bug description:
  If the driver can't extend the volume because, for example, there's not
  enough space, the volume status is set to error_extending and the
  volume becomes unusable. The only option left to the user is to delete
  the volume and create it again.

To manage notifications about this bug go to:
https://bugs.launchpad.net/cinder/+bug/1294132/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1295459] [NEW] Long pseudo-folder push actions off screen

2014-03-20 Thread Matthew D. Wood
Public bug reported:

In the Containers dashboard, create a pseudo-folder with a
LLOONN name without - or any other breaking
character.  Once you select the container, all action buttons are pushed
off the screen and are unavailable; they can't even be scrolled to.

** Affects: horizon
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1295459

Title:
  Long pseudo-folder push actions off screen

Status in OpenStack Dashboard (Horizon):
  New

Bug description:
  In the Containers dashboard, create a pseudo-folder with a
  LLOONN name without - or any other breaking
  character.  Once you select the container, all action buttons are
  pushed off the screen and are unavailable; they can't even be scrolled
  to.

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1295459/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1284065] Re: Swift client rpm build failed

2014-03-20 Thread Joshua Harlow
** Changed in: anvil
   Status: Fix Committed => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to anvil.
https://bugs.launchpad.net/bugs/1284065

Title:
  Swift client rpm build failed

Status in ANVIL for forging OpenStack.:
  Fix Released

Bug description:
  I'm using grizzly stable branch.
   - sudo ./smithy -a build

  ProcessExecutionError: Unexpected error while running command.
  Command: make '-f' '/home/yr/openstack/deps/binary-anvil.mk' '-j' 2
  Exit code: 2
  Stdout: redirected to <open file 
'/home/yr/openstack/deps/output/binary-anvil.mk.log', mode 'wb' at 0x25c6540>
  Stderr: redirected to <open file 
'/home/yr/openstack/deps/output/binary-anvil.mk.log', mode 'wb' at 0x25c6540>

  
   - tail /home/yr/openstack/deps/output/binary-anvil.mk.log
  Building for 
/home/yr/openstack/repo/anvil-source/python-swiftclient-2.0.2.8.gf4e0579-1.el6.src.rpm
 in 
/home/yr/openstack/deps/rpmbuild/python-swiftclient-2.0.2.8.gf4e0579-1.el6.src.rpm
  Output for build being placed in 
/home/yr/openstack/deps/output/rpmbuild-python-swiftclient-2.0.2.8.gf4e0579-1.el6.src.rpm.log
  make: *** [python-swiftclient-2.0.2.8.gf4e0579-1.el6.src.rpm.mark] Error 1

  
  - tail 
/home/yr/openstack/deps/output/rpmbuild-python-swiftclient-2.0.2.8.gf4e0579-1.el6.src.rpm.log

  Checking for unpackaged file(s): /usr/lib/rpm/check-files 
/home/yr/openstack/deps/rpmbuild/python-swiftclient-2.0.2.8.gf4e0579-1.el6.src.rpm/BUILDROOT/python-swiftclient-2.0.2.8.gf4e0579-1.el6.x86_64
  error: Installed (but unpackaged) file(s) found:
 /usr/share/man/man1/swift.1.gz

  error: Installed (but unpackaged) file(s) found:
 /usr/share/man/man1/swift.1.gz

  RPM build errors:
  Installed (but unpackaged) file(s) found:
 /usr/share/man/man1/swift.1.gz

To manage notifications about this bug go to:
https://bugs.launchpad.net/anvil/+bug/1284065/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1278828] Re: Branch info is shown incorrectly while downloading

2014-03-20 Thread Joshua Harlow
** Changed in: anvil
   Status: Fix Committed => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to anvil.
https://bugs.launchpad.net/bugs/1278828

Title:
  Branch info is shown incorrectly while downloading

Status in ANVIL for forging OpenStack.:
  Fix Released

Bug description:
  If the branch name is missing in the config file, then it is shown as
  'None' instead of 'master'.

  Output:

  INFO: @anvil.components.base_install : |-- 
git://github.com/openstack/oslo.config.git
  INFO: @anvil.downloader : Downloading 
git://github.com/openstack/oslo.config.git (None) to 
/root/openstack/oslo-config/app.
  INFO: @anvil.downloader : Adjusting to tag 1.2.1.
  INFO: @anvil.downloader : Removing tags: 1.3.0a0
  INFO: @anvil.actions.prepare : Performed 1 downloads.
  INFO: @anvil.actions.prepare : Downloading keystone.
  INFO: @anvil.components.base_install : Downloading from 1 uris:
  INFO: @anvil.components.base_install : |-- 
git://github.com/openstack/keystone.git
  INFO: @anvil.downloader : Downloading git://github.com/openstack/keystone.git 
(None) to /root/openstack/keystone/app.

To manage notifications about this bug go to:
https://bugs.launchpad.net/anvil/+bug/1278828/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1260892] Re: DistributionNotFound: No distributions at all found for oslo.messaging>=1.2.0a11

2014-03-20 Thread Joshua Harlow
** Changed in: anvil/icehouse
   Status: Fix Committed => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to anvil.
https://bugs.launchpad.net/bugs/1260892

Title:
  DistributionNotFound: No distributions at all found for
  oslo.messaging>=1.2.0a11

Status in ANVIL for forging OpenStack.:
  Fix Released
Status in anvil icehouse series:
  Fix Released

Bug description:
  When building using:

  $ ./smithy -a prepare -p conf/personas/in-a-box/basic-all.yaml

  It appears that pip-download cannot find (oslo.messaging>=1.2.0a11),
  which is not on pypi (yet) but is needed by various projects.

To manage notifications about this bug go to:
https://bugs.launchpad.net/anvil/+bug/1260892/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1262467] Re: Networkx not getting packaged (due to unpackaged files)

2014-03-20 Thread Joshua Harlow
** Changed in: anvil
   Status: Fix Committed => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to anvil.
https://bugs.launchpad.net/bugs/1262467

Title:
  Networkx not getting packaged (due to unpackaged files)

Status in ANVIL for forging OpenStack.:
  Fix Released

Bug description:
  Seems like when py2rpm builds networkx - rpm (for that taskflow
  project) the rpm has doc examples that are getting left out of
  packaging, likely we need to remove those or include those to avoid
  these errors.

To manage notifications about this bug go to:
https://bugs.launchpad.net/anvil/+bug/1262467/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1295426] Re: get console output v3 API should allow null as the length

2014-03-20 Thread Ken'ichi Ohmichi
This bug seems to be in python-novaclient.
Current API behavior is:

v2 API
  without length: Return full console output
  length=10 : Return 10 lines of console output
  length=null : Return full console output

v3 API
  without length: Return full console output
  length=10 : Return 10 lines of console output
  length=null : Return BadRequest

so novaclient should not pass length to the v3 API if it needs to get
unlimited output.


** Project changed: nova => python-novaclient

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1295426

Title:
  get console output v3 API should allow null as the length

Status in Python client library for Nova:
  In Progress

Bug description:
  If running nova console-log command against v3 API, the command
  fails like the following:

  $ nova --os-compute-api-version 3 console-log vm01
  ERROR: Invalid input for field/attribute length. Value: None. None is not of 
type 'integer', 'string' (HTTP 400) (Request-ID: 
req-b8588c9b-58a7-4e22-a2e9-30c5354ae4f7)
  $

  This is because the API schema does not allow null as the length of
the log.
  However, get_console_output() of nova-compute allows null by the following 
code:

  3942 def get_console_output(self, context, instance, tail_length):
  3943 """Send the console output for the given instance."""
  3944 instance = instance_obj.Instance._from_db_object(
  3945 context, instance_obj.Instance(), instance)
  3946 context = context.elevated()
  3947 LOG.audit(_(Get console output), context=context,
  3948   instance=instance)
  3949 output = self.driver.get_console_output(context, instance)
  3950
  3951 if tail_length is not None:
  3952 output = self._tail_log(output, tail_length)
  3953
  3954 return output.decode('utf-8', 'replace').encode('ascii', 
'replace')

  So the API also should allow it.

To manage notifications about this bug go to:
https://bugs.launchpad.net/python-novaclient/+bug/1295426/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1295491] [NEW] expected active pool error raised in lbaas agent

2014-03-20 Thread Aaron Rosen
Public bug reported:

2014-03-20 14:51:23.246 29247 ERROR neutron.openstack.common.rpc.amqp 
[req-7805af6e-ec03-4f79-92c0-396144d6fc58 None] Exception during message 
handling
2014-03-20 14:51:23.246 29247 TRACE neutron.openstack.common.rpc.amqp Traceback 
(most recent call last):
2014-03-20 14:51:23.246 29247 TRACE neutron.openstack.common.rpc.amqp   File 
/opt/stack/new/neutron/neutron/openstack/common/rpc/amqp.py, line 462, in 
_process_data
2014-03-20 14:51:23.246 29247 TRACE neutron.openstack.common.rpc.amqp 
**args)
2014-03-20 14:51:23.246 29247 TRACE neutron.openstack.common.rpc.amqp   File 
/opt/stack/new/neutron/neutron/common/rpc.py, line 45, in dispatch
2014-03-20 14:51:23.246 29247 TRACE neutron.openstack.common.rpc.amqp 
neutron_ctxt, version, method, namespace, **kwargs)
2014-03-20 14:51:23.246 29247 TRACE neutron.openstack.common.rpc.amqp   File 
/opt/stack/new/neutron/neutron/openstack/common/rpc/dispatcher.py, line 172, 
in dispatch
2014-03-20 14:51:23.246 29247 TRACE neutron.openstack.common.rpc.amqp 
result = getattr(proxyobj, method)(ctxt, **kwargs)
2014-03-20 14:51:23.246 29247 TRACE neutron.openstack.common.rpc.amqp   File 
/opt/stack/new/neutron/neutron/services/loadbalancer/drivers/common/agent_driver_base.py,
 line 99, in get_logical_device
2014-03-20 14:51:23.246 29247 TRACE neutron.openstack.common.rpc.amqp raise 
n_exc.Invalid(_('Expected active pool'))
2014-03-20 14:51:23.246 29247 TRACE neutron.openstack.common.rpc.amqp Invalid: 
Expected active pool
2014-03-20 14:51:23.246 29247 TRACE neutron.openstack.common.rpc.amqp 
2014-03-20 14:51:23.247 29247 ERROR neutron.openstack.common.rpc.common 
[req-7805af6e-ec03-4f79-92c0-396144d6fc58 None] Returning exception Expected 
active pool to caller

** Affects: neutron
 Importance: Undecided
 Assignee: Aaron Rosen (arosen)
 Status: In Progress

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1295491

Title:
  expected active pool error raised in lbaas agent

Status in OpenStack Neutron (virtual network service):
  In Progress

Bug description:
  2014-03-20 14:51:23.246 29247 ERROR neutron.openstack.common.rpc.amqp 
[req-7805af6e-ec03-4f79-92c0-396144d6fc58 None] Exception during message 
handling
  2014-03-20 14:51:23.246 29247 TRACE neutron.openstack.common.rpc.amqp 
Traceback (most recent call last):
  2014-03-20 14:51:23.246 29247 TRACE neutron.openstack.common.rpc.amqp   File 
/opt/stack/new/neutron/neutron/openstack/common/rpc/amqp.py, line 462, in 
_process_data
  2014-03-20 14:51:23.246 29247 TRACE neutron.openstack.common.rpc.amqp 
**args)
  2014-03-20 14:51:23.246 29247 TRACE neutron.openstack.common.rpc.amqp   File 
/opt/stack/new/neutron/neutron/common/rpc.py, line 45, in dispatch
  2014-03-20 14:51:23.246 29247 TRACE neutron.openstack.common.rpc.amqp 
neutron_ctxt, version, method, namespace, **kwargs)
  2014-03-20 14:51:23.246 29247 TRACE neutron.openstack.common.rpc.amqp   File 
/opt/stack/new/neutron/neutron/openstack/common/rpc/dispatcher.py, line 172, 
in dispatch
  2014-03-20 14:51:23.246 29247 TRACE neutron.openstack.common.rpc.amqp 
result = getattr(proxyobj, method)(ctxt, **kwargs)
  2014-03-20 14:51:23.246 29247 TRACE neutron.openstack.common.rpc.amqp   File 
/opt/stack/new/neutron/neutron/services/loadbalancer/drivers/common/agent_driver_base.py,
 line 99, in get_logical_device
  2014-03-20 14:51:23.246 29247 TRACE neutron.openstack.common.rpc.amqp 
raise n_exc.Invalid(_('Expected active pool'))
  2014-03-20 14:51:23.246 29247 TRACE neutron.openstack.common.rpc.amqp 
Invalid: Expected active pool
  2014-03-20 14:51:23.246 29247 TRACE neutron.openstack.common.rpc.amqp 
  2014-03-20 14:51:23.247 29247 ERROR neutron.openstack.common.rpc.common 
[req-7805af6e-ec03-4f79-92c0-396144d6fc58 None] Returning exception Expected 
active pool to caller

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1295491/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1295494] [NEW] NVP service plugin should check advanced service in use before deleting a router

2014-03-20 Thread berlin
Public bug reported:

When using the NVP advanced service plugin, it should check whether there
is a service inserted into the router before deleting it.
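
A hedged sketch of that check (the helper lookup and plugin class name
are illustrative, not the merged patch):

  from neutron.extensions import l3

  def delete_router(self, context, router_id):
      # Refuse deletion while an advanced service is still inserted.
      if self._get_services_on_router(context, router_id):
          raise l3.RouterInUse(router_id=router_id)
      super(NvpAdvancedPlugin, self).delete_router(context, router_id)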

** Affects: neutron
 Importance: Undecided
 Assignee: berlin (linb)
 Status: New


** Tags: nicira

** Changed in: neutron
 Assignee: (unassigned) => berlin (linb)

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1295494

Title:
  NVP service plugin should check advanced service in use before
  deleting a router

Status in OpenStack Neutron (virtual network service):
  New

Bug description:
  When using the NVP advanced service plugin, it should check whether
  there is a service inserted into the router before deleting it.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1295494/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1282089] Re: keystone client is leaving hanging connections to the server

2014-03-20 Thread Florent Flament
I read again Jamie Lennox's article about Sessions:
http://www.jamielennox.net/blog/2014/02/24/client-session-objects/

As well as Python requests module doc:
http://docs.python-requests.org/en/latest/user/advanced/

FWIU, keystoneclient.session.Session and requests.Session objects are
not meant to be shared between different users. Instances of these
classes store information related to a single user (for instance a
user's token). Therefore, we shouldn't have one unique Session
instance shared amongst all users. However, it would make sense to
have a global connection pool inside Horizon. Looks like the
requests.adapters.HTTPAdapter would be a good candidate for such an
http connection pool.

Well, I think that's also what Dean Troyer was saying there (Feb 19):
https://review.openstack.org/#/c/74720/

IMHO, keystoneclient.session.Session cannot be mapped onto a user
session properly by Horizon, since Horizon doesn't know when a user's
session is terminated (unless he explicitly clicks on the logout
button), and so can't close the session. Therefore, with Horizon, we
can at best use one keystoneclient.session.Session per request. And
this session should be closed (to release the connections used during
it) without relying on the GC, which apparently isn't efficient enough
to avoid the current bug.

From the tests I've been doing, I think most connection leakage comes
from the django_openstack_auth module. python-keystoneclient is used
in Horizon too, so the api/keystone.py module will need to be fixed as
well.

I've started to implement a connection pool in django_openstack_auth,
but it looks like python-keystoneclient doesn't like it when Clients
are instantiated with an unauthenticated Session object as an
argument - further investigating.
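
A hedged sketch of that direction (keystoneclient.session.Session does
accept an external requests.Session via its 'session' argument; pool
sizes are illustrative):

  import requests
  from keystoneclient import session as ks_session

  # One shared requests.Session: its HTTPAdapter owns the connection
  # pool, so sockets get reused across requests instead of leaking.
  _http = requests.Session()
  _adapter = requests.adapters.HTTPAdapter(pool_connections=10,
                                           pool_maxsize=100)
  _http.mount('http://', _adapter)
  _http.mount('https://', _adapter)

  def make_user_session(auth):
      # Each user keeps a private keystoneclient Session (token state)
      # while riding on the shared HTTP connection pool.
      return ks_session.Session(auth=auth, session=_http)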


** Also affects: django-openstack-auth
   Importance: Undecided
   Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1282089

Title:
  keystone client is leaving hanging connections to the server

Status in Django OpenStack Auth:
  New
Status in OpenStack Dashboard (Horizon):
  New
Status in Python client library for Keystone:
  In Progress

Bug description:
  This is remarkably noticeable from Horizon, which uses keystoneclient
  to connect to the keystone server; at each request the connection is
  left hanging there, which consumes the keystone server, and at one
  point the keystone server process will exceed the number of
  connections it is allowed to handle (ulimit on open files).

  ## How to check:

  If you have horizon installed, just keep using it normally (creating
  instances ...) while keeping an eye on the server's number of opened
  files (lsof -p keystone-pid); you can see that the number increments
  pretty quickly.

  To reproduce this bug very fast, try launching 40 instances at the
  same time, for example using the Instance Count field.

  ## Why:

  This is because keystone client doesn't reuse the http connection
  pool, so in a long-running service (e.g. horizon) the effect is new
  connections created for each request, with no connection reuse.

  Patch coming soon with more details.

To manage notifications about this bug go to:
https://bugs.launchpad.net/django-openstack-auth/+bug/1282089/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp