[Yahoo-eng-team] [Bug 1434378] [NEW] openvswitch-agent leak fanout queues

2015-03-19 Thread Tiantian Gao
Public bug reported:

If an exception is raised during neutron-openvswitch-agent initiation, lots of
fanout queues will leak in RabbitMQ. Since these fanout queues continue
to receive messages, their size grows indefinitely until RabbitMQ
refuses to accept new messages.

Reproduction
============

In file neutron/plugins/openvswitch/agent/ovs_neutron_agent.py :

class OVSNeutronAgent(...):
    def __init__(self, integ_br, tun_br, local_ip, ...):
        ...
        self.setup_rpc()
        ...
        raise Exception("boo")   # insert a fake exception here
        ...
        self.connection.consume_in_thread()

Start the neutron-openvswitch-agent service; it will quit because of the
exception.

Check the RabbitMQ queues:

$ sudo rabbitmqctl list_queues|grep fanout

q-agent-notifier-l2population-update_fanout_db311643213548ff95d2c044418c6d90  0
q-agent-notifier-network-delete_fanout_37e3d19330404548870d60128196e73b  0
q-agent-notifier-port-update_fanout_a8eb4b571182445181097a9bd02a4fc0  0
q-agent-notifier-security_group-update_fanout_b376f030b88844089b57e87849113399  0
q-agent-notifier-tunnel-update_fanout_6c46338f0d1a4d08bfe777427c2c5c08  0

Although we used a fake exception in this case, in the real world there
are many situations that can raise an exception between self.setup_rpc()
and consume_in_thread(). Even if neutron-openvswitch-agent recovers,
these fanout queues will continue to receive messages from
neutron-server, resulting in queues that grow indefinitely.
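One possible shape of a fix, sketched as a toy model (the class and method names here are invented for illustration, not the actual Neutron code): if anything raises between creating the RPC consumers and starting the consume loop, close the connection so the broker can drop the agent's queues instead of leaving them dangling:

```python
class Agent:
    """Toy model of the agent init path; the connection object stands in
    for the real RPC connection created by setup_rpc()."""

    def __init__(self, connection_factory):
        self.connection = None
        try:
            # Stand-in for setup_rpc(): declares fanout queues on the broker.
            self.connection = connection_factory()
            self.risky_init()          # anything here may raise
            self.connection.consume_in_thread()
        except Exception:
            # Tear down the connection so queues are not left leaking.
            if self.connection is not None:
                self.connection.close()
            raise

    def risky_init(self):
        raise Exception("boo")  # stand-in for a real init failure
```

With a cleanup step like this, a crashed agent would not leave unbounded fanout queues behind on RabbitMQ.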

** Affects: neutron
 Importance: Undecided
 Status: New


-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1434378

Title:
  openvswitch-agent leak fanout queues

Status in OpenStack Neutron (virtual network service):
  New


[Yahoo-eng-team] [Bug 1408992] [NEW] nova-manage shell ipython failed

2015-01-09 Thread Tiantian Gao
Public bug reported:

1. pip install ipython

2. nova-manage shell ipython

Command failed, please check log for more info
2015-01-09 11:58:54.025 29402 CRITICAL nova [-] AttributeError: 'module' object has no attribute 'Shell'
2015-01-09 11:58:54.025 29402 TRACE nova Traceback (most recent call last):
2015-01-09 11:58:54.025 29402 TRACE nova   File "/usr/local/bin/nova-manage", line 10, in <module>
2015-01-09 11:58:54.025 29402 TRACE nova     sys.exit(main())
2015-01-09 11:58:54.025 29402 TRACE nova   File "/opt/stack/nova/nova/cmd/manage.py", line 1336, in main
2015-01-09 11:58:54.025 29402 TRACE nova     ret = fn(*fn_args, **fn_kwargs)
2015-01-09 11:58:54.025 29402 TRACE nova   File "/opt/stack/nova/nova/cmd/manage.py", line 157, in ipython
2015-01-09 11:58:54.025 29402 TRACE nova     self.run('ipython')
2015-01-09 11:58:54.025 29402 TRACE nova   File "/opt/stack/nova/nova/cmd/manage.py", line 184, in run
2015-01-09 11:58:54.025 29402 TRACE nova     shell = IPython.Shell.IPShell(argv=[])
2015-01-09 11:58:54.025 29402 TRACE nova AttributeError: 'module' object has no attribute 'Shell'
2015-01-09 11:58:54.025 29402 TRACE nova

because this code only works for IPython < 0.11
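A hedged sketch of a version-tolerant fix (the helper name is made up; the two entry points are the pre-0.11 `IPython.Shell.IPShell` and the 0.11+ `IPython.embed`): pick the entry point based on what the installed module actually exposes instead of hard-coding the old API:

```python
def pick_shell_entry(ipython_module):
    """Return the callable that starts an interactive shell for the
    given IPython module object, for both pre- and post-0.11 APIs."""
    if hasattr(ipython_module, "Shell"):   # IPython < 0.11
        return ipython_module.Shell.IPShell
    return ipython_module.embed            # IPython >= 0.11
```

nova-manage's run() could call pick_shell_entry(IPython) and invoke the result, rather than failing with AttributeError on newer IPython.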

** Affects: nova
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1408992

Title:
  nova-manage shell ipython failed

Status in OpenStack Compute (Nova):
  New


To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1408992/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1389941] [NEW] reload not change listen and listen_port

2014-11-05 Thread Tiantian Gao
Public bug reported:

Action "reload" means that when a daemon nova-api service receives a
SIGHUP signal, it will reload its config files and restart itself.

Ideally, reload should reflect any changed config. But configs like
'osapi_compute_listen' and 'osapi_compute_listen_port' do not take effect
currently.

[reproduce]
1. run nova-api as a daemon
2. change 'osapi_compute_listen_port' in /etc/nova/nova.conf
3. kill -HUP $pid_of_nova_api_parent

Then you will find that the bind address and bind port still do not change.
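A minimal sketch of what a fix would need to do (the class is illustrative, not the real nova code; only the config option names come from the report): on reload, compare the old and new bind address and rebind the listen socket if they differ:

```python
class ApiServer:
    """Toy server that tracks its bind address across config reloads."""

    def __init__(self, conf):
        self.conf = dict(conf)
        self.bound = (conf['osapi_compute_listen'],
                      conf['osapi_compute_listen_port'])

    def reload(self, new_conf):
        """Called from the SIGHUP handler after re-reading nova.conf."""
        new_bound = (new_conf['osapi_compute_listen'],
                     new_conf['osapi_compute_listen_port'])
        if new_bound != self.bound:
            # A real server would close the old listen socket here
            # and bind a fresh one to the new address/port.
            self.bound = new_bound
        self.conf = dict(new_conf)
```

The current behavior corresponds to reload() updating self.conf but never touching self.bound.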

** Affects: nova
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1389941

Title:
  reload not change listen and listen_port

Status in OpenStack Compute (Nova):
  New


To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1389941/+subscriptions



[Yahoo-eng-team] [Bug 1382390] [NEW] nova-api should shutdown gracefully

2014-10-17 Thread Tiantian Gao
Public bug reported:

In Icehouse, an awesome feature got implemented: graceful shutdown of
nova services, which makes sure in-process RPC requests get done before
the process is killed.

But nova-api does not support graceful shutdown yet, which can cause
problems when upgrading. For example, when a request to create an
instance is in progress, killing nova-api may leave quotas out of sync or
odd database records behind. Especially in large-scale deployments, where
there are hundreds of requests per second, killing nova-api will
interrupt lots of in-process greenlets.

In nova/wsgi.py, when stopping the WSGI service, we first shrink the
greenlet pool size to 0, then kill the eventlet wsgi server. The
workaround is quick and easy: wait for all greenlets in the pool to
finish, then kill the wsgi server. The code looks like below:


diff --git a/nova/wsgi.py b/nova/wsgi.py
index ba52872..3c89297 100644
--- a/nova/wsgi.py
+++ b/nova/wsgi.py
@@ -212,6 +212,9 @@ class Server(object):
         if self._server is not None:
             # Resize pool to stop new requests from being processed
             self._pool.resize(0)
+            num = self._pool.running()
+            LOG.info(_("Waiting WSGI server to finish %d requests." % num))
+            self._pool.waitall()
             self._server.kill()

     def wait(self):
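The same drain-then-stop pattern, sketched with the standard library instead of eventlet (an analogy, not the actual nova code): stop accepting new work, wait for in-flight requests, then shut down:

```python
from concurrent.futures import ThreadPoolExecutor


class Server:
    """Toy WSGI-server analogue demonstrating graceful drain."""

    def __init__(self, workers=4):
        self._pool = ThreadPoolExecutor(max_workers=workers)
        self._accepting = True

    def handle(self, fn):
        """Dispatch one request into the worker pool."""
        if not self._accepting:
            raise RuntimeError("server is stopping")
        return self._pool.submit(fn)

    def stop(self):
        self._accepting = False         # analogous to self._pool.resize(0)
        self._pool.shutdown(wait=True)  # analogous to self._pool.waitall()
```

stop() returns only after every in-flight request has finished, which is exactly the guarantee the patch above adds before self._server.kill().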

** Affects: nova
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1382390

Title:
  nova-api should shutdown gracefully

Status in OpenStack Compute (Nova):
  New


To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1382390/+subscriptions



[Yahoo-eng-team] [Bug 1323475] [NEW] Losting network info_cache sometimes

2014-05-26 Thread Tiantian Gao
Public bug reported:

We are using stable/havana.

For some inexplicable reason, some instances lost network information.
The result looks like:


$ nova list
| a8f8a437-d203-4265-aca2-7bd35539c5d1 | test | ACTIVE | - | Running | ... |

$ neutron port-list --device-id a8f8a437-d203-4265-aca2-7bd35539c5d1
+--------------------------------------+------+-------------------+------------------------------------------------------------------------------------+
| id                                   | name | mac_address       | fixed_ips                                                                          |
+--------------------------------------+------+-------------------+------------------------------------------------------------------------------------+
| 6b042778-76bb-45ca-86a8-abfdb1ba1a62 |      | fa:16:3e:67:9a:88 | {"subnet_id": "90b338d3-7711-48fd-a0f6-11a27388cb42", "ip_address": "10.162.82.2"} |
| 9800fd03-5e07-4a54-8568-28d501073c5f |      | fa:16:3e:d0:86:4a | {"subnet_id": "9a1fc59d-aec1-4e3a-bd88-99ea558e8b29", "ip_address": "192.168.0.5"} |
+--------------------------------------+------+-------------------+------------------------------------------------------------------------------------+

neutron says there are two ports bound to the instance, but nova says
the instance has no port.

We dug through the logs and found something went wrong after
heal_instance_info_cache ran. One log line says the instance info_cache
is [], but the previous log says the info_cache is filled. From that
point on, the info_cache was lost and could not self-heal.

Simplified logs are pasted below, and the full log is here:
http://paste.openstack.org/show/81605/



2014-05-26 03:47:13.258 14884 DEBUG nova.network.api [-] Updating cache with info: [VIF({'ovs_interfaceid': u'5953e098-e131-48eb-b53c-5eb095f3bfee', 'network': Network({'bridge': 'br-int', 'subnets': [Subnet({'ips': [FixedIP({'meta': {}, 'version': 4, 'type': 'fixed', 'floating_ips': [], 'address': u'10.162.81.4'})], 'version': 4, 'meta': {'dhcp_server': u'10.162.81.3'}, 'dns': [], 'routes': [], 'cidr': u'10.162.81.0/28', 'gateway': IP({'meta': {}, 'version': None, 'type': 'gateway', 'address': None})})], 'meta': {'injected': False, 'tenant_id': u'c10373fb5d234e31af4d5d56527994fc'}, 'id': u'b0bb08c1-dc05-4e17-a021-f3b850a823ba', 'label': u'idc_c10373fb5d234e31af4d5d56527994fc'}), 'devname': u'tap5953e098-e1', 'qbh_params': None, 'meta': {}, 'address': u'fa:16:3e:40:34:4c', 'type': u'ovs', 'id': u'5953e098-e131-48eb-b53c-5eb095f3bfee', 'qbg_params': None})] update_instance_cache_with_nw_info /usr/lib/python2.7/dist-packages/nova/network/api.py:71
2014-05-26 03:47:13.263 14884 DEBUG nova.compute.manager [-] [instance: 49a806a9-986e-4ce3-ae9f-d3c4317255a3] Updated the info_cache for instance _heal_instance_info_cache /usr/lib/python2.7/dist-packages/nova/compute/manager.py:5146
...
2014-05-26 03:52:14.255 14884 DEBUG nova.network.api [-] Updating cache with info: [] update_instance_cache_with_nw_info /usr/lib/python2.7/dist-packages/nova/network/api.py:71
...


I tried hard but could not reproduce the bug manually. The key problem here is why
the info_cache stops showing up. But on the other hand, we had better give nova the
ability to self-heal in this case.
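A hedged sketch of what self-healing could look like (the function and parameter names are invented for illustration): if the cached network info is empty while Neutron still reports ports for the instance, rebuild the cache instead of trusting the empty value:

```python
def heal_info_cache(cached_vifs, neutron_ports, rebuild_from_ports):
    """Return a trustworthy network info list for an instance.

    cached_vifs: what nova's info_cache currently holds.
    neutron_ports: ports Neutron reports for the instance.
    rebuild_from_ports: callback that rebuilds VIF info from ports.
    """
    if not cached_vifs and neutron_ports:
        # Empty cache but live ports: the cache is suspect, rebuild it.
        return rebuild_from_ports(neutron_ports)
    return cached_vifs
```

In the report above, the cache held [] while two ports existed; a check like this inside heal_instance_info_cache would repair it on the next periodic run.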

** Affects: nova
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1323475

Title:
  Losting network info_cache sometimes

Status in OpenStack Compute (Nova):
  New


[Yahoo-eng-team] [Bug 1290293] [NEW] memcache backend token can not delete a token

2014-03-10 Thread Tiantian Gao
Public bug reported:

I found the bug in stable/havana, and it is easy to reproduce.

Configure keystone to use the memcache token backend, so the config file
keystone.conf will look like below:



[token]
 driver = keystone.token.backends.memcache.Token
..
[memcache]
servers=127.0.0.1:11211
..


When deleting a token through the API, DELETE
http://10.120.120.250:35357/v2.0/tokens/89f15a7a3481456780c1254c8225dcb9
will return 500:

{
    "error": {
        "message": "Unable to add token to revocation list.",
        "code": 500,
        "title": "Internal Server Error"
    }
}
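To illustrate the failing path, here is a toy sketch of appending a token to a cached revocation list (the key name and the dict-backed cache are made up; the real backend talks to memcached, and the 500 above surfaces when that update cannot be performed):

```python
REVOCATION_KEY = 'revocation-list'  # illustrative key name


def add_to_revocation_list(cache, token_id):
    """Append token_id to the revocation list held in a cache.

    cache needs dict-like get/set; returns the updated list."""
    revoked = cache.get(REVOCATION_KEY) or []
    revoked.append(token_id)
    cache.set(REVOCATION_KEY, revoked)
    return revoked
```

If the set step fails (for example, memcached is unreachable or rejects the value), the backend has no way to record the revocation and reports "Unable to add token to revocation list."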

** Affects: keystone
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to Keystone.
https://bugs.launchpad.net/bugs/1290293

Title:
  memcache backend token can not delete a token

Status in OpenStack Identity (Keystone):
  New


To manage notifications about this bug go to:
https://bugs.launchpad.net/keystone/+bug/1290293/+subscriptions



[Yahoo-eng-team] [Bug 1278291] Re: log_handler miss some log information

2014-02-20 Thread Tiantian Gao
** Also affects: nova
   Importance: Undecided
   Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1278291

Title:
  log_handler miss  some log information

Status in OpenStack Compute (Nova):
  New
Status in Oslo - a Library of Common OpenStack Code:
  Fix Committed

Bug description:
  log_handler.PublishErrorsHandler just emits the `msg` attribute of the
  record. But many times we log with extra arguments, like
  LOG.debug('start %s', blabla), which results in only "start %s" showing
  up in the notification payload.
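The distinction can be seen with the standard library's logging module: `record.msg` is the raw format string, while `record.getMessage()` interpolates the arguments, which is what a handler should publish:

```python
import logging

# Build a record the way LOG.debug('start %s', 'worker-1') would.
record = logging.LogRecord(
    name="demo", level=logging.DEBUG, pathname=__file__, lineno=1,
    msg="start %s", args=("worker-1",), exc_info=None)

print(record.msg)           # raw format string: 'start %s'
print(record.getMessage())  # interpolated: 'start worker-1'
```

Emitting record.getMessage() instead of record.msg is the gist of the fix described above.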

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1278291/+subscriptions



[Yahoo-eng-team] [Bug 1267040] [NEW] v3 API list deleted servers raise 404

2014-01-08 Thread Tiantian Gao
Public bug reported:

The nova API supports listing deleted servers as an admin user. The API
looks like below:

v3: GET http://openstack.org:8774/v3/servers/detail?deleted=True
v2: GET http://openstack.org:8774/v2/{tenant}/servers/detail?deleted=True

The v2 API works very well, but v3 returns a 404.
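A hedged sketch of the suspected problem (this mirrors the _extend_server frames in the pci.py traceback, but the names and logic here are illustrative only): lazy-loading pci_devices on a deleted instance re-fetches it from the database, which raises InstanceNotFound; the extension could skip the lazy load for deleted instances:

```python
class InstanceNotFound(Exception):
    pass


def extend_server_with_pci(server, instance):
    """Attach pci_devices to the API response, tolerating deleted instances."""
    if instance.get('deleted'):
        # A lazy load would re-query the DB and raise InstanceNotFound,
        # so report no PCI devices instead of failing the whole request.
        server['pci_devices'] = []
        return server
    devices = instance.get('pci_devices')
    if devices is None:
        # Stand-in for the real lazy load of a missing attribute.
        raise InstanceNotFound()
    server['pci_devices'] = devices
    return server
```

With a guard like this, listing deleted servers would not 404 just because an extension touched an unloaded attribute.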

The traceback from nova-api is shown below; I suspect there is something wrong
with the instance object:
2014-01-08 09:42:15.145 ERROR nova.api.openstack [req-dd73dfe5-96ab-488a-8c31-98b5b063ed95 admin admin] Caught error: Instance 60ec98b5-7496-4e04-aebb-3f951e660295 could not be found.
2014-01-08 09:42:15.145 TRACE nova.api.openstack Traceback (most recent call last):
2014-01-08 09:42:15.145 TRACE nova.api.openstack   File "/opt/stack/nova/nova/api/openstack/__init__.py", line 121, in __call__
2014-01-08 09:42:15.145 TRACE nova.api.openstack     return req.get_response(self.application)
2014-01-08 09:42:15.145 TRACE nova.api.openstack   File "/usr/local/lib/python2.7/dist-packages/webob/request.py", line 1296, in send
2014-01-08 09:42:15.145 TRACE nova.api.openstack     application, catch_exc_info=False)
2014-01-08 09:42:15.145 TRACE nova.api.openstack   File "/usr/local/lib/python2.7/dist-packages/webob/request.py", line 1260, in call_application
2014-01-08 09:42:15.145 TRACE nova.api.openstack     app_iter = application(self.environ, start_response)
2014-01-08 09:42:15.145 TRACE nova.api.openstack   File "/usr/local/lib/python2.7/dist-packages/webob/dec.py", line 144, in __call__
2014-01-08 09:42:15.145 TRACE nova.api.openstack     return resp(environ, start_response)
2014-01-08 09:42:15.145 TRACE nova.api.openstack   File "/opt/stack/python-keystoneclient/keystoneclient/middleware/auth_token.py", line 581, in __call__
2014-01-08 09:42:15.145 TRACE nova.api.openstack     return self.app(env, start_response)
2014-01-08 09:42:15.145 TRACE nova.api.openstack   File "/usr/local/lib/python2.7/dist-packages/webob/dec.py", line 144, in __call__
2014-01-08 09:42:15.145 TRACE nova.api.openstack     return resp(environ, start_response)
2014-01-08 09:42:15.145 TRACE nova.api.openstack   File "/usr/local/lib/python2.7/dist-packages/webob/dec.py", line 144, in __call__
2014-01-08 09:42:15.145 TRACE nova.api.openstack     return resp(environ, start_response)
2014-01-08 09:42:15.145 TRACE nova.api.openstack   File "/usr/lib/python2.7/dist-packages/routes/middleware.py", line 131, in __call__
2014-01-08 09:42:15.145 TRACE nova.api.openstack     response = self.app(environ, start_response)
2014-01-08 09:42:15.145 TRACE nova.api.openstack   File "/usr/local/lib/python2.7/dist-packages/webob/dec.py", line 144, in __call__
2014-01-08 09:42:15.145 TRACE nova.api.openstack     return resp(environ, start_response)
2014-01-08 09:42:15.145 TRACE nova.api.openstack   File "/usr/local/lib/python2.7/dist-packages/webob/dec.py", line 130, in __call__
2014-01-08 09:42:15.145 TRACE nova.api.openstack     resp = self.call_func(req, *args, **self.kwargs)
2014-01-08 09:42:15.145 TRACE nova.api.openstack   File "/usr/local/lib/python2.7/dist-packages/webob/dec.py", line 195, in call_func
2014-01-08 09:42:15.145 TRACE nova.api.openstack     return self.func(req, *args, **kwargs)
2014-01-08 09:42:15.145 TRACE nova.api.openstack   File "/opt/stack/nova/nova/api/openstack/wsgi.py", line 930, in __call__
2014-01-08 09:42:15.145 TRACE nova.api.openstack     content_type, body, accept)
2014-01-08 09:42:15.145 TRACE nova.api.openstack   File "/opt/stack/nova/nova/api/openstack/wsgi.py", line 1018, in _process_stack
2014-01-08 09:42:15.145 TRACE nova.api.openstack     request, action_args)
2014-01-08 09:42:15.145 TRACE nova.api.openstack   File "/opt/stack/nova/nova/api/openstack/wsgi.py", line 900, in post_process_extensions
2014-01-08 09:42:15.145 TRACE nova.api.openstack     **action_args)
2014-01-08 09:42:15.145 TRACE nova.api.openstack   File "/opt/stack/nova/nova/api/openstack/compute/plugins/v3/pci.py", line 79, in detail
2014-01-08 09:42:15.145 TRACE nova.api.openstack     self._extend_server(server, instance)
2014-01-08 09:42:15.145 TRACE nova.api.openstack   File "/opt/stack/nova/nova/api/openstack/compute/plugins/v3/pci.py", line 58, in _extend_server
2014-01-08 09:42:15.145 TRACE nova.api.openstack     for dev in instance.pci_devices:
2014-01-08 09:42:15.145 TRACE nova.api.openstack   File "/opt/stack/nova/nova/objects/base.py", line 63, in getter
2014-01-08 09:42:15.145 TRACE nova.api.openstack     self.obj_load_attr(name)
2014-01-08 09:42:15.145 TRACE nova.api.openstack   File "/opt/stack/nova/nova/objects/instance.py", line 498, in obj_load_attr
2014-01-08 09:42:15.145 TRACE nova.api.openstack     expected_attrs=[attrname])
2014-01-08 09:42:15.145 TRACE nova.api.openstack   File "/opt/stack/nova/nova/objects/base.py", line 112, in wrapper
2014-01-08 09:42:15.145 TRACE nova.api.openstack     result = fn(cls, context, *args, **kwargs)
2014-01-08 09:42:15.145 TRACE nova.api.openstack   File "/opt/stack/nova/nova/objects/instance.py", line 300,

[Yahoo-eng-team] [Bug 1203603] Re: SecurityGroups does not take action-level policies into consideration

2013-07-24 Thread TianTian Gao
Converted to blueprint
https://blueprints.launchpad.net/nova/+spec/enforce-security-policy

** Changed in: nova
   Status: In Progress => Invalid

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1203603

Title:
  SecurityGroups does not take action-level policies into consideration

Status in OpenStack Compute (Nova):
  Invalid

Bug description:
  Now, actions about security groups are only controlled by the rule
  'compute:security_groups'. So action-level policy rules like
  index/create/update/delete do not work.

  I hope we can get fine-grained controls on security-group policy.

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1203603/+subscriptions



[Yahoo-eng-team] [Bug 1188105] Re: Unable to delete default security group

2013-06-22 Thread TianTian Gao
I gave it a try on the master code:

$ nova secgroup-delete default
ERROR: Unable to delete system group 'default' (HTTP 400) (Request-ID: req-1062a62c-c79d-4495-9c7d-28086c9838ac)

There is really a limitation on the `default` security group:
file: nova/compute/api.py
...
RO_SECURITY_GROUPS = ['default']

def destroy(self, context, security_group):
    if security_group['name'] in RO_SECURITY_GROUPS:
        msg = _("Unable to delete system group '%s'") % \
            security_group['name']
        self.raise_invalid_group(msg)

** Changed in: nova
   Status: Confirmed => Invalid

** Changed in: python-novaclient
   Status: New => Invalid

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1188105

Title:
  Unable to delete default security group

Status in OpenStack Compute (Nova):
  Invalid
Status in Python client library for Nova:
  Invalid

Bug description:
  Dear everyone,

  I've failed to delete default security group. But I can delete the
  other. :(

  To reproduce:

  1. nova secgroup-list
  +-+-+
  | Name| Description |
  +-+-+
  | default | default |
  +-+-+

  2. nova secgroup-delete default
  3. nova secgroup-list
  +-+-+
  | Name| Description |
  +-+-+
  | default | default |
  +-+-+

  Is it a bug or limitation? Thank you all.

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1188105/+subscriptions
