[Yahoo-eng-team] [Bug 1341040] Re: neutron CLI should not allow user to create /32 subnet

2014-07-21 Thread Akihiro Motoki
** Project changed: neutron => python-neutronclient

** Changed in: python-neutronclient
 Assignee: (unassigned) => Kevin Benton (kevinbenton)

** Changed in: python-neutronclient
   Importance: Undecided => Low

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1341040

Title:
  neutron CLI should not allow user to create /32 subnet

Status in Python client library for Neutron:
  New

Bug description:
  I'm using devstack stable/icehouse, and my neutron version is
  1409da70959496375f1ac45457663a918ec8

  I created an internal network not connected to the router. If I
misconfigure the subnet, Horizon will catch the problem, but the neutron CLI
does not.
  Subsequently a VM cannot be created on this misconfigured subnet, as it has
run out of IPs to offer to the VM.

  > neutron net-create test-net
  Created a new network:
  +----------------+--------------------------------------+
  | Field          | Value                                |
  +----------------+--------------------------------------+
  | admin_state_up | True                                 |
  | id             | b7bb10bb-48e0-4c1a-a5fc-9590b6619f5a |
  | name           | test-net                             |
  | shared         | False                                |
  | status         | ACTIVE                               |
  | subnets        |                                      |
  | tenant_id      | 8092813be8fd4122a20ee3a6bfe91162     |
  +----------------+--------------------------------------+

  If I use Horizon, go to "Networks", "test-net", "Create Subnet", then use
the parameters:
Subnet Name: subnet-1
Network Address: 10.10.150.0/32
IP Version: IPv4
  Horizon returns the error message "The subnet in the Network Address is too
small (/32)."

  If I use neutron CLI,

  > neutron subnet-create --name subnet-1 test-net 10.10.150.0/32
  Created a new subnet:
  +------------------+--------------------------------------+
  | Field            | Value                                |
  +------------------+--------------------------------------+
  | allocation_pools |                                      |
  | cidr             | 10.10.150.0/32                       |
  | dns_nameservers  |                                      |
  | enable_dhcp      | True                                 |
  | gateway_ip       | 10.10.150.1                          |
  | host_routes      |                                      |
  | id               | 4142ff1d-28de-4e77-b82b-89ae604190ae |
  | ip_version       | 4                                    |
  | name             | subnet-1                             |
  | network_id       | b7bb10bb-48e0-4c1a-a5fc-9590b6619f5a |
  | tenant_id        | 8092813be8fd4122a20ee3a6bfe91162     |
  +------------------+--------------------------------------+

  > neutron net-list
  
  +--------------------------------------+----------+-----------------------------------------------------+
  | id                                   | name     | subnets                                             |
  +--------------------------------------+----------+-----------------------------------------------------+
  | 0dd5722d-f535-42ec-9257-437c05e4de25 | private  | 81859ee5-4ea5-4e60-ab2a-ba74146d39ba 10.0.0.0/24    |
  | 27c1649d-f6fc-4893-837d-dbc293fc4b80 | public   | 6c1836a1-eb7d-4acb-ad6f-6c394cedced5                |
  | b7bb10bb-48e0-4c1a-a5fc-9590b6619f5a | test-net | 4142ff1d-28de-4e77-b82b-89ae604190ae 10.10.150.0/32 |
  +--------------------------------------+----------+-----------------------------------------------------+

  > nova boot --image cirros-0.3.1-x86_64-uec --flavor m1.tiny --nic net-id=b7bb10bb-48e0-4c1a-a5fc-9590b6619f5a vm2
  :
  :

  > nova list
  
  +--------------------------------------+------+--------+------------+-------------+------------------+
  | ID                                   | Name | Status | Task State | Power State | Networks         |
  +--------------------------------------+------+--------+------------+-------------+------------------+
  | d98511f7-452c-4ab6-8af9-d73576714c87 | vm1  | ACTIVE | -          | Running     | private=10.0.0.2 |
  | b12b6a6d-4ab9-43b2-825c-ae656a7aafc4 | vm2  | ERROR  | -          | NOSTATE     |                  |
  +--------------------------------------+------+--------+------------+-------------+------------------+

  I get this output from screen:

  2014-07-11 18:37:32.327 DEBUG neutronclient.client [-] RESP:409
  CaseInsensitiveDict({'date': 'Sat, 12 Jul 2014 01:37:32 GMT',
  'content-length': '164', 'content-type': 'application/json;
  charset=UTF-8', 'x-openstack-request-id': 'req-35a49577-5a3d-
  4a98-a790-52694f09d59a'}) {"NeutronError": {"message": "No more IP
  addresses available on network b7bb10bb-48e0-4c1a-a5fc-9590b6619f5a.",
  "type": "IpAddressGenerationFailure", "detail": ""}}

  2014-07-11 18:37:32.327 DEBUG neutr
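Horizon's /32 guard is easy to reproduce client-side. A minimal sketch of the kind of check the CLI could apply before sending the request (a hypothetical helper, not part of python-neutronclient):

```python
import ipaddress

def usable_addresses(cidr):
    """Count allocatable addresses in an IPv4 subnet: Neutron reserves
    the network, gateway and broadcast addresses, so subtract three."""
    net = ipaddress.ip_network(cidr, strict=False)
    return max(net.num_addresses - 3, 0)

# A /32 contains a single address, so nothing is left for instances:
assert usable_addresses("10.10.150.0/32") == 0
# The default devstack private subnet has room for 253 ports:
assert usable_addresses("10.0.0.0/24") == 253
```

With a check like this the client could refuse CIDRs that leave no usable addresses, instead of letting the server create a subnet with an empty allocation pool.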

[Yahoo-eng-team] [Bug 1346741] [NEW] Enable "Stop Instance" button

2014-07-21 Thread Thiago Martins
Public bug reported:

Add a "Stop Instance" button to Horizon, so it will be possible to
shut down an instance using an ACPI call (like running "virsh shutdown
instance-00XX" directly on the Compute Node).

Currently, the Horizon button "Shut Off Instance" just destroys it.

I'm not seeing a way to gracefully shut down an instance from Horizon.

** Affects: horizon
 Importance: Undecided
 Status: New

** Summary changed:

- Enable "Stop Instance" buttons
+ Enable "Stop Instance" button

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1346741

Title:
  Enable "Stop Instance" button

Status in OpenStack Dashboard (Horizon):
  New

Bug description:
  Add a "Stop Instance" button to Horizon, so it will be possible to
  shut down an instance using an ACPI call (like running "virsh shutdown
  instance-00XX" directly on the Compute Node).

  Currently, the Horizon button "Shut Off Instance" just destroys it.

  I'm not seeing a way to gracefully shut down an instance from Horizon.

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1346741/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1292712] Re: migrate to 36 fails

2014-07-21 Thread Launchpad Bug Tracker
[Expired for Keystone because there has been no activity for 60 days.]

** Changed in: keystone
   Status: Incomplete => Expired

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to Keystone.
https://bugs.launchpad.net/bugs/1292712

Title:
  migrate to 36 fails

Status in OpenStack Identity (Keystone):
  Expired

Bug description:
  On rdo havana I tried a db_sync and it fails:

  2014-03-14 13:49:14.890 10938 INFO migrate.versioning.api [-] 35 -> 36...
  2014-03-14 13:49:14.891 10938 DEBUG migrate.versioning.util [-] Disposing SQLAlchemy engine Engine(mysql://keystone_admin:096cde2717d44d87@130.20.232.220/keystone) with_engine /usr/lib/python2.6/site-packages/migrate/versioning/util/__init__.py:162
  2014-03-14 13:49:14.892 10938 CRITICAL keystone [-] (OperationalError) (1091, "Can't DROP 'ix_token_valid'; check that column/key exists") '\nDROP INDEX ix_token_valid ON token' ()
  2014-03-14 13:49:14.892 10938 TRACE keystone Traceback (most recent call last):
  2014-03-14 13:49:14.892 10938 TRACE keystone   File "/usr/bin/keystone-manage", line 51, in <module>
  2014-03-14 13:49:14.892 10938 TRACE keystone     cli.main(argv=sys.argv, config_files=config_files)
  2014-03-14 13:49:14.892 10938 TRACE keystone   File "/usr/lib/python2.6/site-packages/keystone/cli.py", line 218, in main
  2014-03-14 13:49:14.892 10938 TRACE keystone     CONF.command.cmd_class.main()
  2014-03-14 13:49:14.892 10938 TRACE keystone   File "/usr/lib/python2.6/site-packages/keystone/cli.py", line 72, in main
  2014-03-14 13:49:14.892 10938 TRACE keystone     migration.db_sync(version=version)
  2014-03-14 13:49:14.892 10938 TRACE keystone   File "/usr/lib/python2.6/site-packages/keystone/common/sql/migration.py", line 61, in db_sync
  2014-03-14 13:49:14.892 10938 TRACE keystone     return migrate_repository(version, current_version, repo_path)
  2014-03-14 13:49:14.892 10938 TRACE keystone   File "/usr/lib/python2.6/site-packages/keystone/common/sql/migration.py", line 45, in migrate_repository
  2014-03-14 13:49:14.892 10938 TRACE keystone     repo_path, version)
  2014-03-14 13:49:14.892 10938 TRACE keystone   File "/usr/lib/python2.6/site-packages/migrate/versioning/api.py", line 186, in upgrade
  2014-03-14 13:49:14.892 10938 TRACE keystone     return _migrate(url, repository, version, upgrade=True, err=err, **opts)
  2014-03-14 13:49:14.892 10938 TRACE keystone   File "<string>", line 2, in _migrate
  2014-03-14 13:49:14.892 10938 TRACE keystone   File "/usr/lib/python2.6/site-packages/migrate/versioning/util/__init__.py", line 159, in with_engine
  2014-03-14 13:49:14.892 10938 TRACE keystone     return f(*a, **kw)
  2014-03-14 13:49:14.892 10938 TRACE keystone   File "/usr/lib/python2.6/site-packages/migrate/versioning/api.py", line 366, in _migrate
  2014-03-14 13:49:14.892 10938 TRACE keystone     schema.runchange(ver, change, changeset.step)
  2014-03-14 13:49:14.892 10938 TRACE keystone   File "/usr/lib/python2.6/site-packages/migrate/versioning/schema.py", line 91, in runchange
  2014-03-14 13:49:14.892 10938 TRACE keystone     change.run(self.engine, step)
  2014-03-14 13:49:14.892 10938 TRACE keystone   File "/usr/lib/python2.6/site-packages/migrate/versioning/script/py.py", line 145, in run
  2014-03-14 13:49:14.892 10938 TRACE keystone     script_func(engine)
  2014-03-14 13:49:14.892 10938 TRACE keystone   File "/usr/lib/python2.6/site-packages/keystone/common/sql/migrate_repo/versions/036_token_drop_valid_index.py", line 25, in upgrade
  2014-03-14 13:49:14.892 10938 TRACE keystone     idx.drop(migrate_engine)
  2014-03-14 13:49:14.892 10938 TRACE keystone   File "/usr/lib64/python2.6/site-packages/SQLAlchemy-0.7.8-py2.6-linux-x86_64.egg/sqlalchemy/schema.py", line 2277, in drop
  2014-03-14 13:49:14.892 10938 TRACE keystone     bind._run_visitor(ddl.SchemaDropper, self)
  2014-03-14 13:49:14.892 10938 TRACE keystone   File "/usr/lib64/python2.6/site-packages/SQLAlchemy-0.7.8-py2.6-linux-x86_64.egg/sqlalchemy/engine/base.py", line 2303, in _run_visitor
  2014-03-14 13:49:14.892 10938 TRACE keystone     conn._run_visitor(visitorcallable, element, **kwargs)
  2014-03-14 13:49:14.892 10938 TRACE keystone   File "/usr/lib64/python2.6/site-packages/SQLAlchemy-0.7.8-py2.6-linux-x86_64.egg/sqlalchemy/engine/base.py", line 1973, in _run_visitor
  2014-03-14 13:49:14.892 10938 TRACE keystone     **kwargs).traverse_single(element)
  2014-03-14 13:49:14.892 10938 TRACE keystone   File "/usr/lib64/python2.6/site-packages/SQLAlchemy-0.7.8-py2.6-linux-x86_64.egg/sqlalchemy/sql/visitors.py", line 106, in traverse_single
  2014-03-14 13:49:14.892 10938 TRACE keystone     return meth(obj, **kw)
  2014-03-14 13:49:14.892 10938 TRACE keystone   File "/usr/lib64/python2.6/site-packages/SQLAlchemy-0.7.8-py2.6-linux-x86_64.egg/sqlalchemy/engine/ddl.py", line 159, in visit_index
  2014-03-14 13:49:14.892 10938 TRACE keystone     self.connect
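The failure comes from migration 036 dropping ix_token_valid unconditionally. A defensive pattern is to check for the index first so the migration is idempotent; sketched here against the stdlib sqlite3 module (the deployment above is MySQL, where an information_schema or SHOW INDEX query would be used instead):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE token (id TEXT, expires TEXT, valid INTEGER)")
conn.execute("CREATE INDEX ix_token_valid ON token (valid)")

def index_exists(conn, name):
    # sqlite_master lists every index in a SQLite database
    row = conn.execute(
        "SELECT 1 FROM sqlite_master WHERE type = 'index' AND name = ?",
        (name,),
    ).fetchone()
    return row is not None

# Drop only when present; re-running the upgrade is then a no-op
# instead of raising "Can't DROP 'ix_token_valid'".
if index_exists(conn, "ix_token_valid"):
    conn.execute("DROP INDEX ix_token_valid")
assert not index_exists(conn, "ix_token_valid")
```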

[Yahoo-eng-team] [Bug 1318487] Re: Got error Message objects do not support str() because they may..

2014-07-21 Thread Launchpad Bug Tracker
[Expired for Keystone because there has been no activity for 60 days.]

** Changed in: keystone
   Status: Incomplete => Expired

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to Keystone.
https://bugs.launchpad.net/bugs/1318487

Title:
  Got error Message objects do not support str() because they may..

Status in OpenStack Identity (Keystone):
  Expired

Bug description:
  I was trying Keystone integration with Tivoli Directory Server; in the
course of playing around several times I got an error message.
  The file was
/usr/lib/python2.6/site-packages/keystone/openstack/common/gettextutils.py
  Instead of raising the exception, I modified the __str__ method to call
translate() instead, and I started getting proper error logs.
  I am not a Python expert, and maybe __str__ is not supposed to return
double-byte chars. I wish I could provide the exact setting that gives this
error, but I am not getting it anymore, even after I have reverted my change.

   def __str__(self):
       # NOTE(luisg): Logging in python 2.6 tries to str() log records,
       # and it expects specifically a UnicodeError in order to proceed.
       msg = _('Message objects do not support str() because they may '
               'contain non-ascii characters. '
               'Please use unicode() or translate() instead.')
       # mahesh commented out the code below
       return translate(self)
       # raise UnicodeError(msg)

To manage notifications about this bug go to:
https://bugs.launchpad.net/keystone/+bug/1318487/+subscriptions



[Yahoo-eng-team] [Bug 1346712] [NEW] Create a new directory for Arista drivers

2014-07-21 Thread Sukhdev Kapur
Public bug reported:

When we created neutron/plugins/ml2/drivers/mech_arista, the intent was
only to develop the ML2 mechanism driver. Now that we are getting ready to
develop additional drivers, it does not make sense to move them into the
mech_arista directory. Therefore, the plan is to rename mech_arista to
arista (a more generic name) so that additional drivers can be moved
into this directory and the code can be shared between the drivers.

** Affects: neutron
 Importance: Undecided
 Assignee: Sukhdev Kapur (sukhdev-8)
 Status: New


** Tags: ml2 neutron

** Tags added: ml2 neutron

** Changed in: neutron
 Assignee: (unassigned) => Sukhdev Kapur (sukhdev-8)

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1346712

Title:
  Create a new directory for Arista drivers

Status in OpenStack Neutron (virtual network service):
  New

Bug description:
  When we created neutron/plugins/ml2/drivers/mech_arista, the intent
  was only to develop the ML2 mechanism driver. Now that we are getting
  ready to develop additional drivers, it does not make sense to move
  them into the mech_arista directory. Therefore, the plan is to rename
  mech_arista to arista (a more generic name) so that additional
  drivers can be moved into this directory and the code can be shared
  between the drivers.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1346712/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1342274] Re: auth_token middleware in keystoneclient is deprecated

2014-07-21 Thread Sam Leong
** Also affects: marconi
   Importance: Undecided
   Status: New

** Changed in: marconi
   Status: New => In Progress

** Changed in: marconi
 Assignee: (unassigned) => Sam Leong (chio-fai-sam-leong)

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1342274

Title:
  auth_token middleware in keystoneclient is deprecated

Status in OpenStack Telemetry (Ceilometer):
  In Progress
Status in Cinder:
  In Progress
Status in OpenStack Image Registry and Delivery Service (Glance):
  Fix Committed
Status in Orchestration API (Heat):
  In Progress
Status in OpenStack Bare Metal Provisioning Service (Ironic):
  In Progress
Status in OpenStack Message Queuing Service (Marconi):
  In Progress
Status in OpenStack Neutron (virtual network service):
  Fix Committed
Status in OpenStack Compute (Nova):
  Fix Committed
Status in Python client library for Keystone:
  Fix Committed
Status in OpenStack Data Processing (Sahara, ex. Savanna):
  In Progress
Status in Openstack Database (Trove):
  New

Bug description:
  
  The auth_token middleware in keystoneclient is deprecated and will only get 
security updates. Projects should use the auth_token middleware in 
keystonemiddleware.
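The migration is typically a one-line change in each project's paste pipeline; this is the common shape, not any specific project's file (section and option names vary per project):

```ini
[filter:authtoken]
# before (deprecated, security fixes only):
# paste.filter_factory = keystoneclient.middleware.auth_token:filter_factory
# after:
paste.filter_factory = keystonemiddleware.auth_token:filter_factory
```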

To manage notifications about this bug go to:
https://bugs.launchpad.net/ceilometer/+bug/1342274/+subscriptions



[Yahoo-eng-team] [Bug 1346673] [NEW] fixtures in neutron.tests.base blow away default database config

2014-07-21 Thread Mike Bayer
Public bug reported:

Really trying to narrow this one down fully, and just putting this up
because this is as far as I've gotten.

Basically, the lines in neutron/tests/base.py:

  line 159: self.addCleanup(CONF.reset)
  line 182: self.useFixture(self.messaging_conf)

cause cfg.CONF to get totally wiped out in the "database" config.  I
don't yet understand why this is the case.

If you then run any test that extends BaseTestCase, and then run
neutron/tests/unit/test_db_plugin.py -> NeutronDbPluginV2AsMixinTestCase
in the same process, these two tests fail:

Traceback (most recent call last):
  File "/Users/classic/dev/redhat/openstack/neutron/neutron/tests/unit/test_db_plugin.py", line 3943, in setUp
    self.plugin = importutils.import_object(DB_PLUGIN_KLASS)
  File "/Users/classic/dev/redhat/openstack/neutron/neutron/openstack/common/importutils.py", line 38, in import_object
    return import_class(import_str)(*args, **kwargs)
  File "/Users/classic/dev/redhat/openstack/neutron/neutron/db/db_base_plugin_v2.py", line 72, in __init__
    db.configure_db()
  File "/Users/classic/dev/redhat/openstack/neutron/neutron/db/api.py", line 45, in configure_db
    register_models()
  File "/Users/classic/dev/redhat/openstack/neutron/neutron/db/api.py", line 68, in register_models
    facade = _create_facade_lazily()
  File "/Users/classic/dev/redhat/openstack/neutron/neutron/db/api.py", line 34, in _create_facade_lazily
    _FACADE = session.EngineFacade.from_config(cfg.CONF, sqlite_fk=True)
  File "/Users/classic/dev/redhat/openstack/neutron/.tox/py27/lib/python2.7/site-packages/oslo/db/sqlalchemy/session.py", line 977, in from_config
    retry_interval=conf.database.retry_interval)
  File "/Users/classic/dev/redhat/openstack/neutron/.tox/py27/lib/python2.7/site-packages/oslo/db/sqlalchemy/session.py", line 893, in __init__
    **engine_kwargs)
  File "/Users/classic/dev/redhat/openstack/neutron/.tox/py27/lib/python2.7/site-packages/oslo/db/sqlalchemy/session.py", line 650, in create_engine
    if "sqlite" in connection_dict.drivername:
AttributeError: 'NoneType' object has no attribute 'drivername'

I'm getting this error running tox on a subset of tests; however, it's
difficult to reproduce, as the subprocesses have to work out just right.

To reproduce, just install nose and do:

.tox/py27/bin/nosetests -v
neutron.tests.unit.test_db_plugin:DbModelTestCase
neutron.tests.unit.test_db_plugin:NeutronDbPluginV2AsMixinTestCase

That is, DbModelTestCase is a harmless test, but because it runs
base.BaseTestCase first, cfg.CONF gets blown away.

I don't know what the solution should be here; cfg.CONF shouldn't be
reset, but I don't know what "messaging_conffixture.ConfFixture" is or
how "CONF.reset" was supposed to work, as it blows away the DB config.
The cfg.CONF in the first place seems to get set up via this path:

  (7)exec2()
  /Users/classic/dev/redhat/openstack/neutron/neutron/tests/unit/test_db_plugin.py(26)<module>()
-> from neutron.api import extensions
  /Users/classic/dev/redhat/openstack/neutron/neutron/api/extensions.py(31)<module>()
-> from neutron import manager
  /Users/classic/dev/redhat/openstack/neutron/neutron/manager.py(20)<module>()
-> from neutron.common import rpc as n_rpc
  /Users/classic/dev/redhat/openstack/neutron/neutron/common/rpc.py(22)<module>()
-> from neutron import context
  /Users/classic/dev/redhat/openstack/neutron/neutron/context.py(26)<module>()
-> from neutron import policy
  /Users/classic/dev/redhat/openstack/neutron/neutron/policy.py(55)<module>()
-> cfg.CONF.import_opt('policy_file', 'neutron.common.config')
  /Users/classic/dev/redhat/openstack/neutron/.tox/py27/lib/python2.7/site-packages/oslo/config/cfg.py(1764)import_opt()
-> __import__(module_str)
  /Users/classic/dev/redhat/openstack/neutron/neutron/common/config.py(135)<module>()
-> max_overflow=20, pool_timeout=10)
> /Users/classic/dev/redhat/openstack/neutron/.tox/py27/lib/python2.7/site-packages/oslo/db/options.py(145)set_defaults()
-> conf.register_opts(database_opts, group='database')

e.g. oslo.db set_defaults() sets it up.
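A toy model of the suspected interaction (this is not the real oslo.config API, just an illustration of the hypothesis): if the defaults registered by oslo.db's set_defaults() are discarded when the fixture calls reset(), every later read of the database group returns None, which is exactly what produces the drivername AttributeError above.

```python
# Toy stand-in for cfg.CONF: set_defaults() registers overridable
# defaults, reset() throws them away, so the group reads back empty.
class ToyConf:
    def __init__(self):
        self._defaults = {}

    def set_defaults(self, **kwargs):
        self._defaults.update(kwargs)

    def reset(self):
        self._defaults.clear()

    def get(self, name):
        return self._defaults.get(name)

conf = ToyConf()
conf.set_defaults(connection="sqlite://")  # what oslo.db set_defaults() does
assert conf.get("connection") == "sqlite://"

conf.reset()  # what the test cleanup does
assert conf.get("connection") is None  # -> later AttributeError on .drivername
```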

** Affects: neutron
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1346673

Title:
  fixtures in neutron.tests.base blow away default database config

Status in OpenStack Neutron (virtual network service):
  New

Bug description:
  Really trying to narrow this one down fully, and just putting this up
  because this is as far as I've gotten.

  Basically, the lines in neutron/tests/base.py:

    line 159: self.addCleanup(CONF.reset)
    line 182: self.useFixture(self.messaging_conf)

  cause cfg.CONF to get totally wiped out in the "database" config.  I
  don't yet understand why this is the case.

  if you then run any test that extends BaseTestCase, and then run
  neutron/tests/unit/test_db_plugin.py ->
  NeutronDbPluginV2AsMixinTestCase in the same process

[Yahoo-eng-team] [Bug 1344755] Re: libvirt OVS hybrid VIF driver does not honor network_device_mtu config

2014-07-21 Thread Akihiro Motoki
** Changed in: nova
   Status: In Progress => Invalid

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1344755

Title:
  libvirt OVS hybrid VIF driver does not honor network_device_mtu config

Status in OpenStack Compute (Nova):
  Invalid

Bug description:
  The plug_ovs_hybrid VIF driver in libvirt/vif.py does not honor the
network_device_mtu configuration variable.
  It prevents operators from using jumbo frames.

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1344755/+subscriptions



[Yahoo-eng-team] [Bug 1277217] Re: Cisco plugin should use common network type consts

2014-07-21 Thread Henry Gessau
** Changed in: neutron
   Status: Confirmed => Opinion

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1277217

Title:
  Cisco plugin should use common network type consts

Status in OpenStack Neutron (virtual network service):
  Opinion

Bug description:
  The Cisco plugin was not covered by
  4cdccd69a45aec19d547c10f29f61359b69ad6c1

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1277217/+subscriptions



[Yahoo-eng-team] [Bug 1346658] [NEW] All DB model classes should be consolidated into one directory

2014-07-21 Thread Henry Gessau
Public bug reported:

We have discussed moving all models out of their current diverse
locations to one directory, like maybe

  neutron/db/models/*.py

The idea is to move just the model classes (not the entire modules that
they currently reside in) here. Then head.py would be able to

  from neutron.db.models import *  # noqa

and this would have much less baggage than importing all the current
modules.

The convention of putting all models in one directory will be quite easy
to follow and maintain.
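The convention can be sketched with a throwaway in-memory package (module and class names here are purely illustrative, not neutron's actual models): each model lives in its own module under models/, and the package namespace re-exports the classes so a single wildcard import pulls them all in.

```python
import sys
import types

# Build a fake "models" package in memory: one module per model file,
# with the package namespace re-exporting every model class.
network_mod = types.ModuleType("models.network")
exec("class Network(object): pass\nclass Subnet(object): pass",
     network_mod.__dict__)

models_pkg = types.ModuleType("models")
models_pkg.Network = network_mod.Network
models_pkg.Subnet = network_mod.Subnet
models_pkg.__all__ = ["Network", "Subnet"]
sys.modules["models"] = models_pkg

# What head.py would do:
from models import *  # noqa

assert Network.__name__ == "Network" and Subnet.__name__ == "Subnet"
```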

** Affects: neutron
 Importance: Undecided
 Status: New


** Tags: db

** Tags added: db

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1346658

Title:
  All DB model classes should be consolidated into one directory

Status in OpenStack Neutron (virtual network service):
  New

Bug description:
  We have discussed moving all models out of their current diverse
  locations to one directory, like maybe

neutron/db/models/*.py

  The idea is to move just the model classes (not the entire modules
  that they currently reside in) here. Then head.py would be able to

from neutron.db.models import *  # noqa

  and this would have much less baggage than importing all the current
  modules.

  The convention of putting all models in one directory will be quite
  easy to follow and maintain.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1346658/+subscriptions



[Yahoo-eng-team] [Bug 1346648] [NEW] glance v1 API missing target for policy checks

2014-07-21 Thread Scott Devoid
Public bug reported:

API calls in glance.api.v1.images call the _enforce() helper method for
various actions: "create_image", "update_image", "delete_image", etc.
but do not pass the image as the target for the policy check. [1]

This means that you cannot provide access to these APIs on a per-object
basis. Furthermore it is inconsistent with the way other projects handle
policy checks.

[1]
https://github.com/openstack/glance/blob/master/glance/api/v1/images.py#L154
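A minimal illustration of what a target-aware check buys (a generic sketch, not glance's _enforce() or the actual policy library API): when the rule receives the image as a target, policy decisions can depend on object attributes such as the owner.

```python
# Generic sketch of target-aware policy enforcement: each rule gets
# the object being acted on plus the caller's credentials.
def enforce(action, target, creds, rules):
    rule = rules.get(action)
    return bool(rule and rule(target, creds))

rules = {
    # e.g. only the owning project may delete an image
    "delete_image": lambda image, creds: image["owner"] == creds["project_id"],
}

image = {"owner": "project-a"}
assert enforce("delete_image", image, {"project_id": "project-a"}, rules)
assert not enforce("delete_image", image, {"project_id": "project-b"}, rules)
```

Without the target, a rule like this cannot be expressed at all, which is the per-object access the report asks for.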

** Affects: glance
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to Glance.
https://bugs.launchpad.net/bugs/1346648

Title:
  glance v1 API missing target for policy checks

Status in OpenStack Image Registry and Delivery Service (Glance):
  New

Bug description:
  API calls in glance.api.v1.images call the _enforce() helper method
  for various actions: "create_image", "update_image", "delete_image",
  etc. but do not pass the image as the target for the policy check. [1]

  This means that you cannot provide access to these APIs on a per-
  object basis. Furthermore it is inconsistent with the way other
  projects handle policy checks.

  [1]
  https://github.com/openstack/glance/blob/master/glance/api/v1/images.py#L154

To manage notifications about this bug go to:
https://bugs.launchpad.net/glance/+bug/1346648/+subscriptions



[Yahoo-eng-team] [Bug 1346647] [NEW] Manage Project Member page interface is broken

2014-07-21 Thread Lin Hua Cheng
Public bug reported:


The table listing the project members is displayed below the table that
lists the non-members. This happens when clicking the role dropdown in
the members table.

** Affects: horizon
 Importance: Undecided
 Status: New

** Summary changed:

- Mange Project Member page interface is broken
+ Manage Project Member page interface is broken

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1346647

Title:
  Manage Project Member page interface is broken

Status in OpenStack Dashboard (Horizon):
  New

Bug description:
  
  The table listing the project members is displayed below the table
  that lists the non-members. This happens when clicking the role
  dropdown in the members table.

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1346647/+subscriptions



[Yahoo-eng-team] [Bug 1346637] [NEW] VMware: remove ESX driver for juno

2014-07-21 Thread Aaron Rosen
Public bug reported:

The ESX driver was deprecated in Icehouse and should be removed in Juno.
This bug is for the removal of the ESX virt driver in nova.

** Affects: nova
 Importance: High
 Assignee: akash (akashg1611)
 Status: New


** Tags: vmware

** Changed in: nova
 Assignee: (unassigned) => akash (akashg1611)

** Changed in: nova
   Importance: Undecided => High

** Tags added: vmware

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1346637

Title:
  VMware: remove ESX driver for juno

Status in OpenStack Compute (Nova):
  New

Bug description:
  The ESX driver was deprecated in Icehouse and should be removed in
  Juno. This bug is for the removal of the ESX virt driver in nova.

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1346637/+subscriptions



[Yahoo-eng-team] [Bug 1346638] [NEW] neutron-db-manage --autogenerate needs update after DB healing

2014-07-21 Thread Henry Gessau
Public bug reported:

Now that the DB is healed, neutron-db-manage revision --autogenerate needs to 
be updated.
The template should do unconditional upgrade/downgrade.
The env.py should include all models from head to compare against the schema.

** Affects: neutron
 Importance: Undecided
 Status: New


** Tags: db

** Tags added: db

** Description changed:

- Now that the DB is healed, neutrion-db-manage revision --autogenerate needs to be updated.
+ Now that the DB is healed, neutron-db-manage revision --autogenerate needs to be updated.
  The template should do unconditional upgrade/downgrade.
  The env.py should include all models from head to compare against the schema.

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1346638

Title:
  neutron-db-manage --autogenerate needs update after DB healing

Status in OpenStack Neutron (virtual network service):
  New

Bug description:
  Now that the DB is healed, neutron-db-manage revision --autogenerate needs to 
be updated.
  The template should do unconditional upgrade/downgrade.
  The env.py should include all models from head to compare against the schema.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1346638/+subscriptions



[Yahoo-eng-team] [Bug 1342274] Re: auth_token middleware in keystoneclient is deprecated

2014-07-21 Thread Guang Yee
** Also affects: trove
   Importance: Undecided
   Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1342274

Title:
  auth_token middleware in keystoneclient is deprecated

Status in OpenStack Telemetry (Ceilometer):
  In Progress
Status in Cinder:
  In Progress
Status in OpenStack Image Registry and Delivery Service (Glance):
  Fix Committed
Status in Orchestration API (Heat):
  In Progress
Status in OpenStack Bare Metal Provisioning Service (Ironic):
  In Progress
Status in OpenStack Neutron (virtual network service):
  Fix Committed
Status in OpenStack Compute (Nova):
  Fix Committed
Status in Python client library for Keystone:
  Fix Committed
Status in OpenStack Data Processing (Sahara, ex. Savanna):
  In Progress
Status in Openstack Database (Trove):
  New

Bug description:
  
  The auth_token middleware in keystoneclient is deprecated and will only get 
security updates. Projects should use the auth_token middleware in 
keystonemiddleware.

To manage notifications about this bug go to:
https://bugs.launchpad.net/ceilometer/+bug/1342274/+subscriptions



[Yahoo-eng-team] [Bug 1346602] [NEW] special character swift container names cause errors

2014-07-21 Thread Cindy Lu
Public bug reported:

In Containers panel, create a new container with the name "façade"

After pressing Save, it will show a success and error message. Please
see attached image (A).

The error message says "Unable to retrieve object list" but the list
seems to show up properly.

Then when you click on 'View Details' the screen just flickers.

Console shows:
ClientException: Container HEAD failed: 
http://:8080/v1/AUTH_2d63c14b7e4d44cb89ca10fccc0abf5b/fa%25C3%25A7ade 404 
Not Found
Recoverable error: Container HEAD failed: 
http://:8080/v1/AUTH_2d63c14b7e4d44cb89ca10fccc0abf5b/fa%25C3%25A7ade 404 
Not Found

And when you click on the container name, it will show many errors.
Please see image (B).

Related to: https://bugs.launchpad.net/horizon/+bug/1336603
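The `%25C3%25A7` in the failing URL is "ç" percent-encoded twice (`ç` ->
`%C3%A7` -> `%25C3%25A7`), which points at the name being quoted once by the
dashboard and again by the swift client. A standalone illustration with plain
urllib (not Horizon code):

```python
from urllib.parse import quote

name = "façade"
once = quote(name)    # single-encoded, what Swift expects: fa%C3%A7ade
twice = quote(once)   # double-encoded, what the 404 URL shows: fa%25C3%25A7ade
```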

** Affects: horizon
 Importance: Undecided
 Status: New

** Attachment added: "Untitled.png"
   
https://bugs.launchpad.net/bugs/1346602/+attachment/4159511/+files/Untitled.png

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1346602

Title:
  special character swift container names cause errors

Status in OpenStack Dashboard (Horizon):
  New

Bug description:
  In Containers panel, create a new container with the name "façade"

  After pressing Save, it will show a success and error message. Please
  see attached image (A).

  The error message says "Unable to retrieve object list" but the list
  seems to show up properly.

  Then when you click on 'View Details' the screen just flickers.

  Console shows:
  ClientException: Container HEAD failed: 
http://:8080/v1/AUTH_2d63c14b7e4d44cb89ca10fccc0abf5b/fa%25C3%25A7ade 404 
Not Found
  Recoverable error: Container HEAD failed: 
http://:8080/v1/AUTH_2d63c14b7e4d44cb89ca10fccc0abf5b/fa%25C3%25A7ade 404 
Not Found

  And when you click on the container name, it will show many errors.
  Please see image (B).

  Related to: https://bugs.launchpad.net/horizon/+bug/1336603

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1346602/+subscriptions



[Yahoo-eng-team] [Bug 1346549] [NEW] VMware: "Storage error: Unable to find iSCSI Target"

2014-07-21 Thread Ryan Hsu
Public bug reported:

The following Tempest tests are failing with the VMware nova driver:

tempest.api.compute.servers.test_delete_server.DeleteServersTestJSON.test_delete_server_while_in_attached_volume
tempest.api.compute.servers.test_delete_server.DeleteServersTestXML.test_delete_server_while_in_attached_volume
tempest.api.compute.servers.test_server_rescue_negative.ServerRescueNegativeTestJSON.test_rescued_vm_detach_volume
tempest.api.compute.servers.test_server_rescue_negative.ServerRescueNegativeTestXML.test_rescued_vm_detach_volume

The following error is seen in the logs:

Traceback (most recent call last):
  File 
"/usr/local/lib/python2.7/dist-packages/oslo/messaging/rpc/dispatcher.py", line 
134, in _dispatch_and_reply
incoming.message))
  File 
"/usr/local/lib/python2.7/dist-packages/oslo/messaging/rpc/dispatcher.py", line 
177, in _dispatch
return self._do_dispatch(endpoint, method, ctxt, args)
  File 
"/usr/local/lib/python2.7/dist-packages/oslo/messaging/rpc/dispatcher.py", line 
123, in _do_dispatch
result = getattr(endpoint, method)(ctxt, **new_args)
  File "/opt/stack/nova/nova/compute/manager.py", line 405, in 
decorated_function
return function(self, context, *args, **kwargs)
  File "/opt/stack/nova/nova/exception.py", line 88, in wrapped
payload)
  File "/opt/stack/nova/nova/openstack/common/excutils.py", line 82, in __exit__
six.reraise(self.type_, self.value, self.tb)
  File "/opt/stack/nova/nova/exception.py", line 71, in wrapped
return f(self, context, *args, **kw)
  File "/opt/stack/nova/nova/compute/manager.py", line 290, in 
decorated_function
pass
  File "/opt/stack/nova/nova/openstack/common/excutils.py", line 82, in __exit__
six.reraise(self.type_, self.value, self.tb)
  File "/opt/stack/nova/nova/compute/manager.py", line 276, in 
decorated_function
return function(self, context, *args, **kwargs)
  File "/opt/stack/nova/nova/compute/manager.py", line 318, in 
decorated_function
kwargs['instance'], e, sys.exc_info())
  File "/opt/stack/nova/nova/openstack/common/excutils.py", line 82, in __exit__
six.reraise(self.type_, self.value, self.tb)
  File "/opt/stack/nova/nova/compute/manager.py", line 306, in 
decorated_function
return function(self, context, *args, **kwargs)
  File "/opt/stack/nova/nova/compute/manager.py", line 4269, in attach_volume
bdm.destroy(context)
  File "/opt/stack/nova/nova/openstack/common/excutils.py", line 82, in __exit__
six.reraise(self.type_, self.value, self.tb)
  File "/opt/stack/nova/nova/compute/manager.py", line 4266, in attach_volume
return self._attach_volume(context, instance, driver_bdm)
  File "/opt/stack/nova/nova/compute/manager.py", line 4287, in _attach_volume
self.volume_api.unreserve_volume(context, bdm.volume_id)
  File "/opt/stack/nova/nova/openstack/common/excutils.py", line 82, in __exit__
six.reraise(self.type_, self.value, self.tb)
  File "/opt/stack/nova/nova/compute/manager.py", line 4279, in _attach_volume
do_check_attach=False, do_driver_attach=True)
  File "/opt/stack/nova/nova/virt/block_device.py", line 45, in wrapped
ret_val = method(obj, context, *args, **kwargs)
  File "/opt/stack/nova/nova/virt/block_device.py", line 249, in attach
connector)
  File "/opt/stack/nova/nova/openstack/common/excutils.py", line 82, in __exit__
six.reraise(self.type_, self.value, self.tb)
  File "/opt/stack/nova/nova/virt/block_device.py", line 240, in attach
device_type=self['device_type'], encryption=encryption)
  File "/opt/stack/nova/nova/virt/vmwareapi/driver.py", line 646, in 
attach_volume
mountpoint)
  File "/opt/stack/nova/nova/virt/vmwareapi/volumeops.py", line 388, in 
attach_volume
self._attach_volume_iscsi(connection_info, instance, mountpoint)
  File "/opt/stack/nova/nova/virt/vmwareapi/volumeops.py", line 363, in 
_attach_volume_iscsi
reason=_("Unable to find iSCSI Target"))
StorageError: Storage error: Unable to find iSCSI Target

** Affects: nova
 Importance: Undecided
 Status: New


** Tags: vmware

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1346549

Title:
  VMware: "Storage error: Unable to find iSCSI Target"

Status in OpenStack Compute (Nova):
  New

Bug description:
  The following Tempest tests are failing with the VMware nova driver:

  
tempest.api.compute.servers.test_delete_server.DeleteServersTestJSON.test_delete_server_while_in_attached_volume
  
tempest.api.compute.servers.test_delete_server.DeleteServersTestXML.test_delete_server_while_in_attached_volume
  
tempest.api.compute.servers.test_server_rescue_negative.ServerRescueNegativeTestJSON.test_rescued_vm_detach_volume
  
tempest.api.compute.servers.test_server_rescue_negative.ServerRescueNegativeTestXML.test_rescued_vm_detach_volume

  The following error is seen in the logs:

  Traceback (most recent call last):

[Yahoo-eng-team] [Bug 1346525] [NEW] Snapshots when using RBD backend make full copy then upload unnecessarily

2014-07-21 Thread Michael H Wilson
Public bug reported:

When performing a snapshot, a local copy is made. In the case of RBD, it
reads what libvirt thinks is a raw block device and then converts that
to a local raw file. The file is then uploaded to Glance, which reads
the whole raw file and stores it in the backend; if the backend is Ceph,
this is completely unnecessary. The fix should go something like this:


1. Tell Ceph to make a snapshot of the RBD
2. Get Ceph metadata from backend, send that to Glance
3. Glance gets the metadata; if it has a Ceph backend, no download is necessary;
if it doesn't, download the image from the Ceph location and store it in the backend
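The steps above could be sketched like this. The client objects and method
names are hypothetical stand-ins, not real rbd/glanceclient APIs; the point is
only the control flow that avoids the local raw file:

```python
def snapshot_rbd(rbd, glance, pool, volume, snap_name):
    """Sketch of a zero-copy RBD snapshot upload decision.

    `rbd` and `glance` are assumed client objects (hypothetical APIs).
    """
    # 1. Ask Ceph for a snapshot instead of reading the block device locally.
    rbd.create_snapshot(pool, volume, snap_name)

    # 2. Describe where the data already lives.
    location = "rbd://{}/{}@{}".format(pool, volume, snap_name)

    # 3. If Glance shares the Ceph backend it can clone in place;
    #    otherwise it still fetches from that location, but no local file
    #    is ever written on the compute node.
    if glance.backend == "ceph":
        return glance.register_location(location)  # no data copied
    return glance.import_from(location)
```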

** Affects: nova
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1346525

Title:
  Snapshots when using RBD backend make full copy then upload
  unnecessarily

Status in OpenStack Compute (Nova):
  New

Bug description:
  When performing a snapshot, a local copy is made. In the case of RBD,
  it reads what libvirt thinks is a raw block device and then converts
  that to a local raw file. The file is then uploaded to Glance, which
  reads the whole raw file and stores it in the backend; if the backend
  is Ceph, this is completely unnecessary. The fix should go something
  like this:

  
  1. Tell Ceph to make a snapshot of the RBD
  2. Get Ceph metadata from backend, send that to Glance
  3. Glance gets the metadata; if it has a Ceph backend, no download is
necessary; if it doesn't, download the image from the Ceph location and store
it in the backend

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1346525/+subscriptions



[Yahoo-eng-team] [Bug 1262914] Re: Unnecessary data copy during cold snapshot

2014-07-21 Thread Michael H Wilson
** Changed in: nova
   Status: Opinion => Confirmed

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1262914

Title:
  Unnecessary data copy during cold snapshot

Status in OpenStack Compute (Nova):
  Confirmed

Bug description:
  When creating a cold snapshot, LibvirtDriver.snapshot() creates a
  local copy of the VM image before uploading from that copy into a new
  image in Glance.

  In case of snapshotting a local file backed VM to Swift, that's one
  copy too many:  if the target format matches the source format, the
  local file can be uploaded directly, halving the time it takes to
  create a snapshot. In case of snapshotting an RBD backed VM to RBD
  backed Glance, that's two copies too many: a copy-on-write clone of
  the VM drive could obviate the need to copy any data at all.

  I think that instead of passing the target location as a temporary
  file path under snapshots_directory, LibvirtDriver.snapshot() should
  pass image metadata to Image.snapshot_extract() and let the image
  backend figure out and return the target location.

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1262914/+subscriptions



[Yahoo-eng-team] [Bug 1346494] [NEW] l3 agent gw port missing vlan tag for vlan provider network

2014-07-21 Thread Robert Collins
Public bug reported:

Hi, I have a provider network with my floating NAT range on it and a vlan 
segmentation id:
neutron net-show ext-net
+---+--+
| Field | Value|
+---+--+
| admin_state_up| True |
| id| f8ea424f-fcbe-4d57-9f17-5c576bf56e60 |
| name  | ext-net  |
| provider:network_type | vlan |
| provider:physical_network | datacentre   |
| provider:segmentation_id  | 25   |
| router:external   | True |
| shared| False|
| status| ACTIVE   |
| subnets   | 391829e1-afc5-4280-9cd9-75f554315e82 |
| tenant_id | e23f57e1d6c54398a68354adf522a36d |
+---+--+

My ovs agent config:

cat /etc/neutron/plugins/openvswitch/ovs_neutron_plugin.ini 
[DATABASE]
sql_connection = mysql://.@localhost/ovs_neutron?charset=utf8

reconnect_interval = 2

[OVS]
bridge_mappings = datacentre:br-ex
network_vlan_ranges = datacentre

tenant_network_type = gre
tunnel_id_ranges = 1:1000
enable_tunneling = True
integration_bridge = br-int
tunnel_bridge = br-tun
local_ip = 10.10.16.151


[AGENT]
polling_interval = 2

[SECURITYGROUP]
firewall_driver = 
neutron.agent.linux.iptables_firewall.OVSHybridIptablesFirewallDriver
root@ci-overcloud-controller0-ydt5on7wojsb:~# 

But, the thing is, the port created in ovs is missing the tag:
Bridge br-ex
Port "qg-d8c27507-14"
Interface "qg-d8c27507-14"
type: internal

And we (As expected) are seeing tagged frames in tcpdump:
19:37:16.107288 20:fd:f1:b6:f5:16 > ff:ff:ff:ff:ff:ff, ethertype 802.1Q 
(0x8100), length 68: vlan 25, p 0, ethertype ARP, Request who-has 138.35.77.67 
tell 138.35.77.1, length 50

rather than untagged frames for the vlan 25.

Running ovs-vsctl set port qg-d8c27507-14 tag=25 makes things work, but
the agent should do this, no?
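Until the agent handles this, the workaround can be scripted. The helper below
only builds the ovs-vsctl argument list used in the manual fix above; it
validates the VLAN id but does not talk to OVS itself:

```python
def ovs_tag_command(port, tag):
    """Build the ovs-vsctl call that tags a provider-network port.

    Mirrors the manual workaround from the report (illustration only,
    not neutron agent code).
    """
    if not 0 <= tag <= 4094:
        raise ValueError("VLAN tag must be in 0-4094, got %r" % tag)
    return ["ovs-vsctl", "set", "port", port, "tag=%d" % tag]
```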

** Affects: neutron
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1346494

Title:
  l3 agent gw port missing vlan tag for vlan provider network

Status in OpenStack Neutron (virtual network service):
  New

Bug description:
  Hi, I have a provider network with my floating NAT range on it and a vlan 
segmentation id:
  neutron net-show ext-net
  +---+--+
  | Field | Value|
  +---+--+
  | admin_state_up| True |
  | id| f8ea424f-fcbe-4d57-9f17-5c576bf56e60 |
  | name  | ext-net  |
  | provider:network_type | vlan |
  | provider:physical_network | datacentre   |
  | provider:segmentation_id  | 25   |
  | router:external   | True |
  | shared| False|
  | status| ACTIVE   |
  | subnets   | 391829e1-afc5-4280-9cd9-75f554315e82 |
  | tenant_id | e23f57e1d6c54398a68354adf522a36d |
  +---+--+

  My ovs agent config:

  cat /etc/neutron/plugins/openvswitch/ovs_neutron_plugin.ini 
  [DATABASE]
  sql_connection = mysql://.@localhost/ovs_neutron?charset=utf8

  reconnect_interval = 2

  [OVS]
  bridge_mappings = datacentre:br-ex
  network_vlan_ranges = datacentre

  tenant_network_type = gre
  tunnel_id_ranges = 1:1000
  enable_tunneling = True
  integration_bridge = br-int
  tunnel_bridge = br-tun
  local_ip = 10.10.16.151

  
  [AGENT]
  polling_interval = 2

  [SECURITYGROUP]
  firewall_driver = 
neutron.agent.linux.iptables_firewall.OVSHybridIptablesFirewallDriver
  root@ci-overcloud-controller0-ydt5on7wojsb:~# 

  But, the thing is, the port created in ovs is missing the tag:
  Bridge br-ex
  Port "qg-d8c27507-14"
  Interface "qg-d8c27507-14"
  type: internal

  And we (As expected) are seeing tagged frames in tcpdump:
  19:37:16.107288 20:fd:f1:b6:f5:16 > ff:ff:ff:ff:ff:ff, ethertype 802.1Q 
(0x8100), length 68: vlan 25, p 0, ethertype ARP, Request who-has 138.35.77.67 
tell 138.35.77.1, length 50

  rather than untagged frames for the vlan 25.

[Yahoo-eng-team] [Bug 1346463] [NEW] Glance registry needs notifications config after using oslo.messaging

2014-07-21 Thread nikhil komawar
Public bug reported:

A good example of this use case is
https://review.openstack.org/#/c/107594, where the notifications need to
be added to the sample config file provided in Glance so that the patch
works with devstack.

However, g-reg does not send any notifications, so we need to either
find a way to remove the need for them or merge the g-api and g-reg
configs to avoid this situation.
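Until the configs are merged, the registry could simply be pointed at a no-op
notifier; a sketch for glance-registry.conf (option name per oslo.messaging of
this era, assumption on my part that this is sufficient here):

```ini
[DEFAULT]
# g-reg emits no notifications, so disable the oslo.messaging notifier
# rather than configuring a transport it will never use.
notification_driver = noop
```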

** Affects: glance
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to Glance.
https://bugs.launchpad.net/bugs/1346463

Title:
  Glance registry needs notifications config after using oslo.messaging

Status in OpenStack Image Registry and Delivery Service (Glance):
  New

Bug description:
  A good example of this use case is
  https://review.openstack.org/#/c/107594, where the notifications need
  to be added to the sample config file provided in Glance so that the
  patch works with devstack.

  However, g-reg does not send any notifications, so we need to either
  find a way to remove the need for them or merge the g-api and g-reg
  configs to avoid this situation.

To manage notifications about this bug go to:
https://bugs.launchpad.net/glance/+bug/1346463/+subscriptions



[Yahoo-eng-team] [Bug 1346444] [NEW] DB migrations need unit tests

2014-07-21 Thread Henry Gessau
Public bug reported:

Now that the DB healing https://review.openstack.org/96438 is merged,
the DB migrations need unit tests.
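One common shape for such tests is to run all migrations against a fresh
database and diff the resulting schema against the model metadata. The
comparison step can be distilled to a pure helper like this (hypothetical
sketch, not the actual neutron test code):

```python
def schema_diff(model_tables, migrated_tables):
    """Compare table names declared by models vs. created by migrations.

    Inputs are iterables of table names; returns (missing, extra) where
    `missing` are tables the migrations never created and `extra` are
    tables no model declares. A passing migration test expects ([], []).
    """
    model, migrated = set(model_tables), set(migrated_tables)
    return sorted(model - migrated), sorted(migrated - model)
```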

** Affects: neutron
 Importance: Medium
 Assignee: Ann Kamyshnikova (akamyshnikova)
 Status: In Progress


** Tags: db

** Tags added: db

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1346444

Title:
  DB migrations need unit tests

Status in OpenStack Neutron (virtual network service):
  In Progress

Bug description:
  Now that the DB healing https://review.openstack.org/96438 is merged,
  the DB migrations need unit tests.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1346444/+subscriptions



[Yahoo-eng-team] [Bug 1346389] [NEW] Disk unit missing in Overview Usage Summary

2014-07-21 Thread Martin Hickey
Public bug reported:

In Project and Admin Overview pages, the Disk column in the Usage
Summary table does not contain a unit.

It would be better to add the unit for clarity.

** Affects: horizon
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1346389

Title:
  Disk unit missing in Overview Usage Summary

Status in OpenStack Dashboard (Horizon):
  New

Bug description:
  In Project and Admin Overview pages, the Disk column in the Usage
  Summary table does not contain a unit.

  It would be better to add the unit for clarity.

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1346389/+subscriptions



[Yahoo-eng-team] [Bug 1346201] Re: v2.0/tenants fails to report the 'extra' field values in the project

2014-07-21 Thread Morgan Fainberg
This is actually not a bug, but the intended mechanism. All extra values
are handled as part of the returned object (at the top level). They are
only stored in the backing store as a JSON blob. The intention is that
if you add a 'test' attribute to your tenant, it is returned at the top
level as 'test'.

The example you provided as the expected result would not work, as there is no
way to reference the 'extra' fields; it would need to be:
{
    "description": null,
    "enabled": true,
    "id": "dc0a88dc51624167b747211ec050a08e",
    "extra": {
        "test": "value"
    },
    "name": "admin"
}

This would be an incompatible API change for both v2.0 and v3.
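In other words, the backend stores unknown attributes as one JSON blob and
flattens them back to the top level on read. A minimal illustration of that
round trip (not keystone's actual code):

```python
import json

KNOWN = {"id", "name", "description", "enabled"}

def to_row(tenant):
    """Split known columns from 'extra' attributes for storage."""
    row = {k: v for k, v in tenant.items() if k in KNOWN}
    row["extra"] = json.dumps({k: v for k, v in tenant.items() if k not in KNOWN})
    return row

def from_row(row):
    """Flatten the JSON blob back to top-level attributes on read."""
    tenant = {k: v for k, v in row.items() if k != "extra"}
    tenant.update(json.loads(row["extra"]))
    return tenant
```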

** Changed in: keystone
   Status: New => Invalid

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to Keystone.
https://bugs.launchpad.net/bugs/1346201

Title:
  v2.0/tenants fails to report the 'extra' field values in the project

Status in OpenStack Identity (Keystone):
  Invalid

Bug description:
  Assume that, the project "admin" is created in the keystone db with
  "extra" filed having following values:

  {"test":"value"}

  Then on running GET on the REST API v2.0/tenants, returns the
  following response

  {
  "description": null,
  "enabled": true,
  "id": "dc0a88dc51624167b747211ec050a08e",
  "test": "value",  -- ERROR
  "name": "admin"
  }

  This should be returning the response as follows:

  {
  "description": null,
  "enabled": true,
  "id": "dc0a88dc51624167b747211ec050a08e",
  {
 "test": "value"
  },  < -- JSON String
  "name": "admin"
  }

To manage notifications about this bug go to:
https://bugs.launchpad.net/keystone/+bug/1346201/+subscriptions



[Yahoo-eng-team] [Bug 1346386] [NEW] Containers with unicode names can't switch from private to public

2014-07-21 Thread George Peristerakis
Public bug reported:

In Horizon's Containers tab, create a container with the name "Cédric:"
and set its access to private. After creation, the access is correctly
private. When clicking on More -> Make Public, Horizon returns a success
message, but the access remains private.

** Affects: horizon
 Importance: Undecided
 Assignee: George Peristerakis (george-peristerakis)
 Status: New

** Changed in: horizon
 Assignee: (unassigned) => George Peristerakis (george-peristerakis)

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1346386

Title:
  Containers with unicode names can't switch from private to public

Status in OpenStack Dashboard (Horizon):
  New

Bug description:
  In Horizon's Containers tab, create a container with the name
  "Cédric:" and set its access to private. After creation, the access is
  correctly private. When clicking on More -> Make Public, Horizon
  returns a success message, but the access remains private.

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1346386/+subscriptions



[Yahoo-eng-team] [Bug 1346385] [NEW] libvirt: live migration stopped working after shared storage patches

2014-07-21 Thread Vladik Romanovsky
Public bug reported:

Live migration has stopped working with NFS shared storage, after this patch 
has been submitted:
https://review.openstack.org/#/c/91722, as one of the new checks doesn't take 
into account the file based shared storage.

2014-07-17 16:14:14.276 50785 ERROR oslo.messaging.rpc.dispatcher [-] Exception 
during message handling: local variable 'instance_dir' referenced before 
assignment
2014-07-17 16:14:14.276 50785 TRACE oslo.messaging.rpc.dispatcher Traceback 
(most recent call last):
2014-07-17 16:14:14.276 50785 TRACE oslo.messaging.rpc.dispatcher   File 
"/usr/lib/python2.7/dist-packages/oslo/messaging/rpc/dispatcher.py", line 133, 
in _dispatch_and_reply
2014-07-17 16:14:14.276 50785 TRACE oslo.messaging.rpc.dispatcher 
incoming.message))
2014-07-17 16:14:14.276 50785 TRACE oslo.messaging.rpc.dispatcher   File 
"/usr/lib/python2.7/dist-packages/oslo/messaging/rpc/dispatcher.py", line 176, 
in _dispatch
2014-07-17 16:14:14.276 50785 TRACE oslo.messaging.rpc.dispatcher return 
self._do_dispatch(endpoint, method, ctxt, args)
2014-07-17 16:14:14.276 50785 TRACE oslo.messaging.rpc.dispatcher   File 
"/usr/lib/python2.7/dist-packages/oslo/messaging/rpc/dispatcher.py", line 122, 
in _do_dispatch
2014-07-17 16:14:14.276 50785 TRACE oslo.messaging.rpc.dispatcher result = 
getattr(endpoint, method)(ctxt, **new_args)
2014-07-17 16:14:14.276 50785 TRACE oslo.messaging.rpc.dispatcher   File 
"/usr/lib/python2.7/dist-packages/nova/compute/manager.py", line 393, in 
decorated_function
2014-07-17 16:14:14.276 50785 TRACE oslo.messaging.rpc.dispatcher return 
function(self, context, *args, **kwargs)
2014-07-17 16:14:14.276 50785 TRACE oslo.messaging.rpc.dispatcher   File 
"/usr/lib/python2.7/dist-packages/nova/exception.py", line 88, in wrapped
2014-07-17 16:14:14.276 50785 TRACE oslo.messaging.rpc.dispatcher payload)
2014-07-17 16:14:14.276 50785 TRACE oslo.messaging.rpc.dispatcher   File 
"/usr/lib/python2.7/dist-packages/nova/openstack/common/excutils.py", line 68, 
in __exit__
2014-07-17 16:14:14.276 50785 TRACE oslo.messaging.rpc.dispatcher 
six.reraise(self.type_, self.value, self.tb)
2014-07-17 16:14:14.276 50785 TRACE oslo.messaging.rpc.dispatcher   File 
"/usr/lib/python2.7/dist-packages/nova/exception.py", line 71, in wrapped
2014-07-17 16:14:14.276 50785 TRACE oslo.messaging.rpc.dispatcher return 
f(self, context, *args, **kw)
2014-07-17 16:14:14.276 50785 TRACE oslo.messaging.rpc.dispatcher   File 
"/usr/lib/python2.7/dist-packages/nova/compute/manager.py", line 303, in 
decorated_function
2014-07-17 16:14:14.276 50785 TRACE oslo.messaging.rpc.dispatcher e, 
sys.exc_info())
2014-07-17 16:14:14.276 50785 TRACE oslo.messaging.rpc.dispatcher   File 
"/usr/lib/python2.7/dist-packages/nova/openstack/common/excutils.py", line 68, 
in __exit__
2014-07-17 16:14:14.276 50785 TRACE oslo.messaging.rpc.dispatcher 
six.reraise(self.type_, self.value, self.tb)
2014-07-17 16:14:14.276 50785 TRACE oslo.messaging.rpc.dispatcher   File 
"/usr/lib/python2.7/dist-packages/nova/compute/manager.py", line 290, in 
decorated_function
2014-07-17 16:14:14.276 50785 TRACE oslo.messaging.rpc.dispatcher return 
function(self, context, *args, **kwargs)
2014-07-17 16:14:14.276 50785 TRACE oslo.messaging.rpc.dispatcher   File 
"/usr/lib/python2.7/dist-packages/nova/compute/manager.py", line 4465, in 
pre_live_migration
2014-07-17 16:14:14.276 50785 TRACE oslo.messaging.rpc.dispatcher 
migrate_data)
2014-07-17 16:14:14.276 50785 TRACE oslo.messaging.rpc.dispatcher   File 
"/usr/lib/python2.7/dist-packages/nova/virt/libvirt/driver.py", line 4585, in 
pre_live_migration
2014-07-17 16:14:14.276 50785 TRACE oslo.messaging.rpc.dispatcher 
self._create_images_and_backing(context, instance, instance_dir,
2014-07-17 16:14:14.276 50785 TRACE oslo.messaging.rpc.dispatcher 
UnboundLocalError: local variable 'instance_dir' referenced before assignment
2014-07-17 16:14:14.276 50785 TRACE oslo.messaging.rpc.dispatcher
2014-07-17 16:14:14.280 50785 ERROR oslo.messaging._drivers.common [-] 
Returning exception local variable 'instance_dir' referenced before assignment 
to caller
2014-07-17 16:14:14.280 50785 ERROR oslo.messaging._drivers.common [-] 
['Traceback (most recent call last):\n', '  File 
"/usr/lib/python2.7/dist-packages/oslo/messaging/rpc/dispatcher.py", line

** Affects: nova
 Importance: Undecided
 Assignee: Vladik Romanovsky (vladik-romanovsky)
 Status: New


** Tags: compute libvirt

** Changed in: nova
 Assignee: (unassigned) => Vladik Romanovsky (vladik-romanovsky)

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1346385

Title:
  libvirt: live migration stopped working after shared storage patches

Status in OpenStack Compute (Nova):
  New

Bug description:
  Live migration has stopped working with NFS shared storage, after this
patch has been submitted: https://review.openstack.org/#/c/91722, as one of the
new checks doesn't take into account the file based shared storage.

[Yahoo-eng-team] [Bug 1346372] [NEW] The default value of quota_firewall_rule should not be -1

2014-07-21 Thread Liping Mao
Public bug reported:

The default value of "quota_firewall_rule" is "-1", which means unlimited. 
This is a potential security issue if the OpenStack admin does not change the 
default: a malicious tenant user can create unlimited firewall rules to 
"attack" the network node, leaving a huge number of iptables rules on the 
backend. This can make the network node crash or become very slow.

So I suggest we use a finite number rather than "-1" here.
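For context, the usual quota convention treats any negative limit as
unlimited, so with the -1 default the check never fires. A minimal sketch of
that semantics (illustration only, not neutron's quota engine):

```python
def allowed(current, requested, limit):
    """Return True if `requested` more resources fit under `limit`.

    Follows the common OpenStack convention that a negative limit means
    unlimited, which is why -1 as a *default* lets one tenant create
    firewall rules without bound.
    """
    if limit < 0:  # -1 => unlimited: the check never fires
        return True
    return current + requested <= limit
```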

** Affects: neutron
 Importance: Undecided
 Assignee: Liping Mao (limao)
 Status: New

** Changed in: neutron
 Assignee: (unassigned) => Liping Mao (limao)

** Description changed:

- the default value of "quota_firewall_rule" is "-1", and this means
- unlimited. There will be potential security issue if openstack admin do
- not modify this default value. Tenant User can create unlimited firewall
- rules , in the backend, we will have many iptables rules. This may make
- the network node crash or very slow.
+ the default value of "quota_firewall_rule" is "-1", and this means unlimited. 
There will be potential security issue if openstack admin do not modify this 
default value. 
+ A bad tenant User can create unlimited firewall rules to "attack" network 
node, in the backend, we will have a large number of iptables rules. This will 
make the network node crash or very slow.
  
  So I suggest we use another number but not "-1" here.

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1346372

Title:
  The default value of quota_firewall_rule should not be -1

Status in OpenStack Neutron (virtual network service):
  New

Bug description:
  the default value of "quota_firewall_rule" is "-1", and this means unlimited. 
There will be potential security issue if openstack admin do not modify this 
default value. 
  A bad tenant User can create unlimited firewall rules to "attack" network 
node, in the backend, we will have a large number of iptables rules. This will 
make the network node crash or very slow.

  So I suggest we use another number but not "-1" here.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1346372/+subscriptions



[Yahoo-eng-team] [Bug 1326901] Re: ServiceBinaryExists - binary for nova-conductor already exists

2014-07-21 Thread James Page
** Package changed: ubuntu => nova (Ubuntu)

** Also affects: nova (Ubuntu Trusty)
   Importance: Undecided
   Status: New

** Also affects: nova (Ubuntu Utopic)
   Importance: Undecided
 Assignee: Corey Bryant (corey.bryant)
   Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1326901

Title:
  ServiceBinaryExists - binary for nova-conductor already exists

Status in OpenStack Compute (Nova):
  Fix Committed
Status in “nova” package in Ubuntu:
  New
Status in “nova” source package in Trusty:
  New
Status in “nova” source package in Utopic:
  New

Bug description:
  We're hitting an intermittent issue where ServiceBinaryExists is
  raised for nova-conductor on deployment.

  From nova-conductor's upstart log ( /var/log/upstart/nova-
  conductor.log ):

  2014-05-15 12:02:25.206 34494 INFO nova.openstack.common.periodic_task [-] Skipping periodic task _periodic_update_dns because its interval is negative
  2014-05-15 12:02:25.241 34494 INFO nova.openstack.common.service [-] Starting 8 workers
  2014-05-15 12:02:25.242 34494 INFO nova.openstack.common.service [-] Started child 34501
  2014-05-15 12:02:25.244 34494 INFO nova.openstack.common.service [-] Started child 34502
  2014-05-15 12:02:25.246 34494 INFO nova.openstack.common.service [-] Started child 34503
  2014-05-15 12:02:25.246 34501 AUDIT nova.service [-] Starting conductor node (version 2014.1)
  2014-05-15 12:02:25.247 34502 AUDIT nova.service [-] Starting conductor node (version 2014.1)
  2014-05-15 12:02:25.247 34494 INFO nova.openstack.common.service [-] Started child 34504
  2014-05-15 12:02:25.249 34503 AUDIT nova.service [-] Starting conductor node (version 2014.1)
  2014-05-15 12:02:25.251 34504 AUDIT nova.service [-] Starting conductor node (version 2014.1)
  2014-05-15 12:02:25.254 34505 AUDIT nova.service [-] Starting conductor node (version 2014.1)
  2014-05-15 12:02:25.250 34494 INFO nova.openstack.common.service [-] Started child 34505
  2014-05-15 12:02:25.261 34494 INFO nova.openstack.common.service [-] Started child 34506
  2014-05-15 12:02:25.263 34494 INFO nova.openstack.common.service [-] Started child 34507
  2014-05-15 12:02:25.266 34494 INFO nova.openstack.common.service [-] Started child 34508
  2014-05-15 12:02:25.267 34507 AUDIT nova.service [-] Starting conductor node (version 2014.1)
  2014-05-15 12:02:25.268 34506 AUDIT nova.service [-] Starting conductor node (version 2014.1)
  2014-05-15 12:02:25.271 34508 AUDIT nova.service [-] Starting conductor node (version 2014.1)
  /usr/lib/python2.7/dist-packages/nova/openstack/common/db/sqlalchemy/session.py:379: DeprecationWarning: BaseException.message has been deprecated as of Python 2.6
    match = pattern.match(integrity_error.message)
  /usr/lib/python2.7/dist-packages/nova/openstack/common/db/sqlalchemy/session.py:379: DeprecationWarning: BaseException.message has been deprecated as of Python 2.6
    match = pattern.match(integrity_error.message)
  Traceback (most recent call last):
    File "/usr/lib/python2.7/dist-packages/eventlet/hubs/hub.py", line 346, in fire_timers
      timer()
    File "/usr/lib/python2.7/dist-packages/eventlet/hubs/timer.py", line 56, in __call__
      cb(*args, **kw)
    File "/usr/lib/python2.7/dist-packages/eventlet/greenthread.py", line 194, in main
  2014-05-15 12:02:25.862 34502 ERROR oslo.messaging._drivers.impl_rabbit [-] AMQP server on localhost:5672 is unreachable: [Errno 111] ECONNREFUSED. Trying again in 1 seconds.
      result = function(*args, **kwargs)
    File "/usr/lib/python2.7/dist-packages/nova/openstack/common/service.py", line 480, in run_service
      service.start()
    File "/usr/lib/python2.7/dist-packages/nova/service.py", line 172, in start
      self.service_ref = self._create_service_ref(ctxt)
    File "/usr/lib/python2.7/dist-packages/nova/service.py", line 224, in _create_service_ref
      service = self.conductor_api.service_create(context, svc_values)
    File "/usr/lib/python2.7/dist-packages/nova/conductor/api.py", line 202, in service_create
      return self._manager.service_create(context, values)
    File "/usr/lib/python2.7/dist-packages/nova/utils.py", line 966, in wrapper
      return func(*args, **kwargs)
    File "/usr/lib/python2.7/dist-packages/nova/conductor/manager.py", line 461, in service_create
      svc = self.db.service_create(context, values)
    File "/usr/lib/python2.7/dist-packages/nova/db/api.py", line 139, in service_create
      return IMPL.service_create(context, values)
    File "/usr/lib/python2.7/dist-packages/nova/db/sqlalchemy/api.py", line 146, in wrapper
      return f(*args, **kwargs)
    File "/usr/lib/python2.7/dist-packages/nova/db/sqlalchemy/api.py", line 521, in service_create
      binary=values.get('binary'))
  ServiceBinaryExists: Service with host glover binary nova-conductor exists.
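
  The race that produces this exception (several conductor workers calling
  service_create for the same host/binary at startup) can be tolerated by
  treating the duplicate as "already registered" and re-reading the row. A
  minimal sketch, with an in-memory stand-in for the services table and names
  borrowed from the traceback; this is illustrative, not nova's actual fix:

```python
# Illustrative sketch only: _db stands in for nova's services table, and
# service_create / service_get_by_args mimic the calls in the traceback.

class ServiceBinaryExists(Exception):
    """Raised when a (host, binary) service row already exists."""

_db = {}  # (host, binary) -> service record

def service_create(values):
    key = (values["host"], values["binary"])
    if key in _db:
        raise ServiceBinaryExists(key)
    _db[key] = dict(values)
    return _db[key]

def service_get_by_args(host, binary):
    return _db[(host, binary)]

def create_service_ref(values):
    """Create the service record, or reuse the row another worker created."""
    try:
        return service_create(values)
    except ServiceBinaryExists:
        # Lost the race: another worker registered this host/binary first.
        return service_get_by_args(values["host"], values["binary"])
```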
  

[Yahoo-eng-team] [Bug 1346327] [NEW] libvirt swap_volume and live_snapshot methods have no unit tests

2014-07-21 Thread Daniel Berrange
Public bug reported:

Neither the swap_volume or live_snapshot methods in the libvirt driver
have any corresponding coverage in the test_driver.py unit test suite.

** Affects: nova
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1346327

Title:
  libvirt swap_volume and live_snapshot methods have no unit tests

Status in OpenStack Compute (Nova):
  New

Bug description:
  Neither the swap_volume or live_snapshot methods in the libvirt driver
  have any corresponding coverage in the test_driver.py unit test suite.

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1346327/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1346245] [NEW] Incorrect downgrade in migration b7a8863760e_rm_cisco_vlan_bindin

2014-07-21 Thread Ann Kamyshnikova
Public bug reported:

Downgrade in migration b7a8863760e_rm_cisco_vlan_bindin fails
http://paste.openstack.org/show/87396/

** Affects: neutron
 Importance: Undecided
 Assignee: Ann Kamyshnikova (akamyshnikova)
 Status: New


** Tags: cisco db

** Changed in: neutron
 Assignee: (unassigned) => Ann Kamyshnikova (akamyshnikova)

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1346245

Title:
  Incorrect downgrade in migration b7a8863760e_rm_cisco_vlan_bindin

Status in OpenStack Neutron (virtual network service):
  New

Bug description:
  Downgrade in migration b7a8863760e_rm_cisco_vlan_bindin fails
  http://paste.openstack.org/show/87396/

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1346245/+subscriptions



[Yahoo-eng-team] [Bug 1346210] [NEW] keystone v2.0 API docs reported with invalid information

2014-07-21 Thread Kanagaraj Manickam
Public bug reported:

The keystone API page http://developer.openstack.org/api-ref-identity-v2.html
contains the following statement, implying that it issues tokens only for the
Compute API:

Get an authentication token that permits access to the Compute API.

This line should be changed to:

Get an authentication token that permits access to the OpenStack Service
API.

That is, the word "Compute" should be changed to "OpenStack Service".

** Affects: keystone
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to Keystone.
https://bugs.launchpad.net/bugs/1346210

Title:
  keystone v2.0 API docs reported with invalid information

Status in OpenStack Identity (Keystone):
  New

Bug description:
  The keystone API page http://developer.openstack.org/api-ref-identity-v2.html
  contains the following statement, implying that it issues tokens only for
  the Compute API:

  Get an authentication token that permits access to the Compute API.

  This line should be changed to:

  Get an authentication token that permits access to the OpenStack Service
  API.

  That is, the word "Compute" should be changed to "OpenStack Service".

To manage notifications about this bug go to:
https://bugs.launchpad.net/keystone/+bug/1346210/+subscriptions



[Yahoo-eng-team] [Bug 1346211] [NEW] keystone under apache can't handle request chunking

2014-07-21 Thread David Patterson
Public bug reported:

Method to reproduce stack trace for apache config with chunking enabled


1.  If using devstack, configure it to enable chunking in apache, by
adding "WSGIChunkedRequest On" in the file devstack/files/apache-
keystone.template (in both sections) and start devstack

2.  Run from the command line:

openstack -v --debug server list  (or any simple command)

3.  Look in the output for the very first "curl" command that looks
something like this:

curl -i --insecure -X POST http://192.168.51.21:5000/v2.0/tokens -H "Content-Type: application/json" -H "Accept: application/json" -H "User-Agent: python-keystoneclient" -d '{"auth": {"tenantName": "demo", "passwordCredentials": {"username": "demo", "password": "admin"}}}'

4.  The above "curl" command succeeds.  Modify it to use chunking by
adding -H "Transfer-Encoding: chunked" and run it:

curl -i --insecure -X POST http://192.168.51.21:5000/v2.0/tokens -H "Content-Type: application/json" -H "Accept: application/json" -H "User-Agent: python-keystoneclient" -H "Transfer-Encoding: chunked" -d '{"auth": {"tenantName": "demo", "passwordCredentials": {"username": "demo", "password": "admin"}}}'

5.  You will get a cli error:

HTTP/1.1 500 Internal Server Error
Date: Wed, 16 Jul 2014 19:01:58 GMT
Server: Apache/2.2.22 (Ubuntu)
Vary: X-Auth-Token
Content-Length: 215
Connection: close
Content-Type: application/json

{"error": {"message": "An unexpected error prevented the server from
fulfilling your request: request data read error (Disable debug mode to
suppress these details.)", "code": 500, "title": "Internal Server
Error"}}

6.  The keystone log will show a stack trace:

[Wed Jul 16 19:02:43 2014] [error] 13693 ERROR keystone.common.wsgi [-] request data read error
[Wed Jul 16 19:02:43 2014] [error] 13693 TRACE keystone.common.wsgi Traceback (most recent call last):
[Wed Jul 16 19:02:43 2014] [error] 13693 TRACE keystone.common.wsgi   File "/opt/stack/keystone/keystone/common/wsgi.py", line 414, in __call__
[Wed Jul 16 19:02:43 2014] [error] 13693 TRACE keystone.common.wsgi     response = self.process_request(request)
[Wed Jul 16 19:02:43 2014] [error] 13693 TRACE keystone.common.wsgi   File "/opt/stack/keystone/keystone/middleware/core.py", line 112, in process_request
[Wed Jul 16 19:02:43 2014] [error] 13693 TRACE keystone.common.wsgi     params_json = request.body
[Wed Jul 16 19:02:43 2014] [error] 13693 TRACE keystone.common.wsgi   File "/usr/local/lib/python2.7/dist-packages/webob/request.py", line 677, in _body__get
[Wed Jul 16 19:02:43 2014] [error] 13693 TRACE keystone.common.wsgi     self.make_body_seekable() # we need this to have content_length
[Wed Jul 16 19:02:43 2014] [error] 13693 TRACE keystone.common.wsgi   File "/usr/local/lib/python2.7/dist-packages/webob/request.py", line 922, in make_body_seekable
[Wed Jul 16 19:02:43 2014] [error] 13693 TRACE keystone.common.wsgi     self.copy_body()
[Wed Jul 16 19:02:43 2014] [error] 13693 TRACE keystone.common.wsgi   File "/usr/local/lib/python2.7/dist-packages/webob/request.py", line 938, in copy_body
[Wed Jul 16 19:02:43 2014] [error] 13693 TRACE keystone.common.wsgi     self.body = self.body_file_raw.read()
[Wed Jul 16 19:02:43 2014] [error] 13693 TRACE keystone.common.wsgi   File "/opt/stack/keystone/keystone/common/utils.py", line 306, in read
[Wed Jul 16 19:02:43 2014] [error] 13693 TRACE keystone.common.wsgi     result = self.data.read()
[Wed Jul 16 19:02:43 2014] [error] 13693 TRACE keystone.common.wsgi IOError: request data read error
[Wed Jul 16 19:02:43 2014] [error] 13693 TRACE keystone.common.wsgi
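
The trace shows webob calling .read() on the raw input stream because a
chunked request carries no Content-Length, and mod_wsgi rejects that read
unless chunked input is handled explicitly. The shape of the problem can be
sketched with a plain WSGI-style body reader (illustrative only;
read_request_body is not a keystone function):

```python
def read_request_body(environ):
    """Read a request body that may be chunked (i.e. no CONTENT_LENGTH)."""
    length = environ.get("CONTENT_LENGTH")
    stream = environ["wsgi.input"]
    if length:
        # Normal case: the client declared a length, read exactly that much.
        return stream.read(int(length))
    # Chunked (or empty) body: read until EOF. Under mod_wsgi this only
    # works when WSGIChunkedRequest is On and the server de-chunks the
    # stream; otherwise the read raises "request data read error".
    chunks = []
    while True:
        data = stream.read(8192)
        if not data:
            break
        chunks.append(data)
    return b"".join(chunks)
```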

** Affects: keystone
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to Keystone.
https://bugs.launchpad.net/bugs/1346211

Title:
  keystone under apache can't handle request chunking

Status in OpenStack Identity (Keystone):
  New

Bug description:
  Method to reproduce stack trace for apache config with chunking enabled
  

  1.  If using devstack, configure it to enable chunking in apache, by
  adding "WSGIChunkedRequest On" in the file devstack/files/apache-
  keystone.template (in both sections) and start devstack

  2.  Run from the command line:

  openstack -v --debug server list  (or any simple command)

  3.  Look in the output for the very first "curl" command that looks
  something like this:

  curl -i --insecure -X POST http://192.168.51.21:5000/v2.0/tokens -H "Content-Type: application/json" -H "Accept: application/json" -H "User-Agent: python-keystoneclient" -d '{"auth": {"tenantName": "demo", "passwordCredentials": {"username": "demo", "password": "admin"}}}'

  4.  The above "curl" command succeeds.  Modify it to use chunking by
  adding -H "Transfer-Encoding: chunked" and run it:

  curl -i --insecure -X POST http://

[Yahoo-eng-team] [Bug 1346207] [NEW] v2.0/tenants/{tenantId} reports 404

2014-07-21 Thread Kanagaraj Manickam
Public bug reported:

Running GET on v2.0/tenants returns two projects, as follows:
{
"tenants_links": [],
"tenants": [
{
"description": null,
"enabled": true,
"id": "0f90bb31a01c453f851a5ca4e5e178ab",
"name": "demo"
},
{
"description": null,
"enabled": true,
"id": "dc0a88dc51624167b747211ec050a08e",
"test": "value",
"name": "admin"
}
]
}

Then, running GET on v2.0/tenants/0f90bb31a01c453f851a5ca4e5e178ab reports
a 404.

It should instead return the details of the project "demo".

** Affects: keystone
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to Keystone.
https://bugs.launchpad.net/bugs/1346207

Title:
  v2.0/tenants/{tenantId} reports 404

Status in OpenStack Identity (Keystone):
  New

Bug description:
  Running GET on v2.0/tenants returns two projects, as follows:
  {
  "tenants_links": [],
  "tenants": [
  {
  "description": null,
  "enabled": true,
  "id": "0f90bb31a01c453f851a5ca4e5e178ab",
  "name": "demo"
  },
  {
  "description": null,
  "enabled": true,
  "id": "dc0a88dc51624167b747211ec050a08e",
  "test": "value",
  "name": "admin"
  }
  ]
  }

  Then, running GET on v2.0/tenants/0f90bb31a01c453f851a5ca4e5e178ab
  reports a 404.

  It should instead return the details of the project "demo".

To manage notifications about this bug go to:
https://bugs.launchpad.net/keystone/+bug/1346207/+subscriptions



[Yahoo-eng-team] [Bug 1346201] [NEW] v2.0/tenants fails to report the 'extra' field values in the project

2014-07-21 Thread Kanagaraj Manickam
Public bug reported:

Assume that the project "admin" is created in the keystone db with the
"extra" field having the following value:

{"test": "value"}

Then running GET on the REST API v2.0/tenants returns the following
response:

{
"description": null,
"enabled": true,
"id": "dc0a88dc51624167b747211ec050a08e",
"test": "value",  <-- ERROR: flattened into the top level
"name": "admin"
}

It should instead return the extra values as a nested JSON object (shown
here under an "extra" key):

{
"description": null,
"enabled": true,
"id": "dc0a88dc51624167b747211ec050a08e",
"extra": {
   "test": "value"
},
"name": "admin"
}

** Affects: keystone
 Importance: Undecided
 Assignee: Kanagaraj Manickam (kanagaraj-manickam)
 Status: New

** Changed in: keystone
 Assignee: (unassigned) => Kanagaraj Manickam (kanagaraj-manickam)

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to Keystone.
https://bugs.launchpad.net/bugs/1346201

Title:
  v2.0/tenants fails to report the 'extra' field values in the project

Status in OpenStack Identity (Keystone):
  New

Bug description:
  Assume that the project "admin" is created in the keystone db with the
  "extra" field having the following value:

  {"test": "value"}

  Then running GET on the REST API v2.0/tenants returns the following
  response:

  {
  "description": null,
  "enabled": true,
  "id": "dc0a88dc51624167b747211ec050a08e",
  "test": "value",  <-- ERROR: flattened into the top level
  "name": "admin"
  }

  It should instead return the extra values as a nested JSON object (shown
  here under an "extra" key):

  {
  "description": null,
  "enabled": true,
  "id": "dc0a88dc51624167b747211ec050a08e",
  "extra": {
     "test": "value"
  },
  "name": "admin"
  }

To manage notifications about this bug go to:
https://bugs.launchpad.net/keystone/+bug/1346201/+subscriptions



[Yahoo-eng-team] [Bug 1346191] [NEW] libvirt _live_snapshot & _swap_volume functions re-define guest with wrong XML document

2014-07-21 Thread Daniel Berrange
Public bug reported:

In the nova/virt/libvirt/driver.py file, the '_live_snapshot' and
'_swap_volume' methods have the following code flow


  xml = dom.XMLDesc(0)

  dom.undefine()

  dom.blockRebase()

  dom.defineXML(xml)


The reason for this is that 'blockRebase' requires the guest to be transient, 
so we must temporarily delete the persistent config and then re-create it later.

Unfortunately this code is using the wrong XML document when re-creating
the persistent config. 'dom.XMLDesc(0)' will return the guest XML
document based on the current guest state. Since the guest is running in
both these cases, it will be getting the *live* XML instead of the
persistent XML.

So these methods are deleting the persistent XML and replacing it with
the live XML. These two different XML documents are not guaranteed to
contain the same information.

As a second problem, it is not requesting inclusion of security
information, so any SPICE/VNC password set in the persistent XML is
getting lost

The fix is to replace

  dom.XMLDesc(0)

with

  dom.XMLDesc(libvirt.VIR_DOMAIN_XML_INACTIVE |
   libvirt.VIR_DOMAIN_XML_SECURE)

in the _live_snapshot and _swap_volume functions.
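
The difference between the two documents can be modeled with a toy domain
object. The flag names and values match libvirt's
(VIR_DOMAIN_XML_SECURE = 1, VIR_DOMAIN_XML_INACTIVE = 2), but FakeDomain and
its XML strings are purely illustrative:

```python
VIR_DOMAIN_XML_SECURE = 1    # include security-sensitive data (passwords)
VIR_DOMAIN_XML_INACTIVE = 2  # return the persistent config, not live state

class FakeDomain:
    """Toy stand-in: libvirt keeps both a live and a persistent XML doc."""
    persistent_xml = "<domain><graphics passwd='s3cret'/></domain>"
    live_xml = "<domain><graphics passwd='s3cret'/><transient-dev/></domain>"

    def XMLDesc(self, flags):
        xml = (self.persistent_xml if flags & VIR_DOMAIN_XML_INACTIVE
               else self.live_xml)
        if not flags & VIR_DOMAIN_XML_SECURE:
            # Without SECURE, libvirt strips passwords from the document.
            xml = xml.replace(" passwd='s3cret'", "")
        return xml

dom = FakeDomain()
buggy_xml = dom.XMLDesc(0)  # live state, passwords stripped
fixed_xml = dom.XMLDesc(VIR_DOMAIN_XML_INACTIVE | VIR_DOMAIN_XML_SECURE)
```

Re-defining the guest with buggy_xml persists live-only state and loses the
VNC/SPICE password; fixed_xml preserves the persistent config intact.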

** Affects: nova
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1346191

Title:
  libvirt _live_snapshot & _swap_volume functions re-define guest with
  wrong XML document

Status in OpenStack Compute (Nova):
  New

Bug description:
  In the nova/virt/libvirt/driver.py file, the '_live_snapshot' and
  '_swap_volume' methods have the following code flow

  
xml = dom.XMLDesc(0)

dom.undefine()

dom.blockRebase()

dom.defineXML(xml)

  
  The reason for this is that 'blockRebase' requires the guest to be transient, 
so we must temporarily delete the persistent config and then re-create it later.

  Unfortunately this code is using the wrong XML document when re-
  creating the persistent config.  'dom.XMLDesc(0)' will return the
  guest XML document based on the current guest state. Since the guest
  is running in both these cases, it will be getting the *live* XML
  instead of the persistent XML.

  So these methods are deleting the persistent XML and replacing it with
  the live XML. These two different XML documents are not guaranteed to
  contain the same information.

  As a second problem, it is not requesting inclusion of security
  information, so any SPICE/VNC password set in the persistent XML is
  getting lost

  The fix is to replace

dom.XMLDesc(0)

  with

dom.XMLDesc(libvirt.VIR_DOMAIN_XML_INACTIVE |
 libvirt.VIR_DOMAIN_XML_SECURE)

  in the _live_snapshot and _swap_volume functions.

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1346191/+subscriptions



[Yahoo-eng-team] [Bug 1346148] [NEW] Viewing console while launching a VDI instance causes error status

2014-07-21 Thread Tzach Shefi
Public bug reported:

Description of problem: If you access an instance's console while
launching a VDI image, the instance goes to status: error. I ran this a
few times; if, however, you wait until the instance has completed booting
before accessing the console, it boots up fine. This doesn't happen with
Cirros QCOW.

VDI image used, must be uncompressed:
http://downloads.sourceforge.net/virtualboximage/dsl-4.2.5-x86.7z

Version-Release number of selected component (if applicable):
RHEL 6.5 
openstack-nova-compute-2014.1.1-2.el6ost.noarch

How reproducible:
Every time, tested three times.

Steps to Reproduce:
1. In my case Glance uses a Gluster share, don't think it's related.
2. Upload VDI image to Glance.
3. Launch an instance from VDI image.
4. Before instances completes boot process, go to instance's console.
5. Instance fails to boot; status: error.
6. Repeat the above steps, this time waiting for the instance to complete the boot process before going to the console; status is active.

Actual results:

Failed to boot VDI instance, when accessing instance's console before
instance is active.

On compute.log instance ID  13b23f83-9587-4068-a6d1-d2b8034d0e80 status
error, while instance ID 542bfd5e-866e-4af4-ab59-f1ecb86bfa2f booted up
fine.

Expected results:

VDI instance should boot up even if you look at console during boot
process, just like Cirros qcow.

** Affects: nova
 Importance: Undecided
 Status: New

** Attachment added: "Compute log"
   
https://bugs.launchpad.net/bugs/1346148/+attachment/4158850/+files/compute.log.tar.gz

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1346148

Title:
  Viewing console while launching a VDI instance causes error status

Status in OpenStack Compute (Nova):
  New

Bug description:
  Description of problem: If you access an instance's console while
  launching a VDI image, the instance goes to status: error. I ran this a
  few times; if, however, you wait until the instance has completed
  booting before accessing the console, it boots up fine. This doesn't
  happen with Cirros QCOW.

  VDI image used, must be uncompressed:
  http://downloads.sourceforge.net/virtualboximage/dsl-4.2.5-x86.7z

  Version-Release number of selected component (if applicable):
  RHEL 6.5 
  openstack-nova-compute-2014.1.1-2.el6ost.noarch

  How reproducible:
  Every time, tested three times.

  Steps to Reproduce:
  1. In my case Glance uses a Gluster share, don't think it's related.
  2. Upload VDI image to Glance.
  3. Launch an instance from VDI image.
  4. Before instances completes boot process, go to instance's console.
  5. Instance fails to boot; status: error.
  6. Repeat the above steps, this time waiting for the instance to complete the boot process before going to the console; status is active.

  Actual results:

  Failed to boot VDI instance, when accessing instance's console before
  instance is active.

  On compute.log instance ID  13b23f83-9587-4068-a6d1-d2b8034d0e80
  status error, while instance ID 542bfd5e-866e-4af4-ab59-f1ecb86bfa2f
  booted up fine.

  Expected results:

  VDI instance should boot up even if you look at console during boot
  process, just like Cirros qcow.

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1346148/+subscriptions



[Yahoo-eng-team] [Bug 1346121] [NEW] Help text for neutron router-create lists 'distributed' as a positional argument

2014-07-21 Thread Assaf Muller
Public bug reported:

With the following patch merged:
https://review.openstack.org/#/c/106147/

neutron router-create -h
usage: neutron router-create [-h] [-f {shell,table,value}] [-c COLUMN]
 [--variable VARIABLE] [--prefix PREFIX]
 [--request-format {json,xml}]
 [--tenant-id TENANT_ID] [--admin-state-down]
 NAME

Create a router for a given tenant.

positional arguments:
  NAME  Name of router to create.
  distributed   Create a distributed router.

The distributed flag doesn't appear in the usage block, yet it is listed
under positional arguments even though it is not one.

neutron router-create r1 distributed doesn't work, while neutron router-
create r1 --distributed does.

To summarize:
distributed shouldn't appear in the positional arguments list, but should 
instead appear in the usage block.
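
This is the standard argparse symptom when a flag is registered without a
leading "--". A minimal reproduction (illustrative; not the actual
neutronclient code):

```python
import argparse

parser = argparse.ArgumentParser(prog="router-create")
parser.add_argument("NAME", help="Name of router to create.")
# Buggy form -- add_argument("distributed", ...) -- would register a
# required positional argument and list it under "positional arguments"
# in the help output. Correct form: an optional boolean flag.
parser.add_argument("--distributed", action="store_true",
                    help="Create a distributed router.")

args = parser.parse_args(["r1", "--distributed"])
```

With the flag form, `router-create r1 --distributed` parses and
`--distributed` is shown in the usage block rather than as a positional.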

** Affects: neutron
 Importance: Undecided
 Status: New


** Tags: cli neutron-client

** Tags added: cli neutron-client

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1346121

Title:
  Help text for neutron router-create lists 'distributed' as a
  positional argument

Status in OpenStack Neutron (virtual network service):
  New

Bug description:
  With the following patch merged:
  https://review.openstack.org/#/c/106147/

  neutron router-create -h
  usage: neutron router-create [-h] [-f {shell,table,value}] [-c COLUMN]
   [--variable VARIABLE] [--prefix PREFIX]
   [--request-format {json,xml}]
   [--tenant-id TENANT_ID] [--admin-state-down]
   NAME

  Create a router for a given tenant.

  positional arguments:
NAME  Name of router to create.
distributed   Create a distributed router.

  The distributed flag doesn't appear in the usage block, yet it is listed
  under positional arguments even though it is not one.

  neutron router-create r1 distributed doesn't work, while neutron
  router-create r1 --distributed does.

  To summarize:
  distributed shouldn't appear in the positional arguments list, but should 
instead appear in the usage block.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1346121/+subscriptions



[Yahoo-eng-team] [Bug 1346108] [NEW] nova doesn't release network when network_info is not equal to 1

2014-07-21 Thread Eli Qiao
Public bug reported:

[tagett@stack-01 devstack]$ nova net-list 
+--+-+--+
| ID   | Label   | CIDR |
+--+-+--+
| d838074c-632c-43cf-8aae-24e17a2d1828 | public  | -|
| cc16209f-a7b4-44f6-8c8f-27ea7b0cc9ff | private | -|
| b745b2c6-db16-40ab-8ad7-af6da0e5e699 | private | -|
+--+-+--+

[tagett@stack-01 devstack]$ nova --os-compute-api-version 2 interface-attach vm1
ERROR (ClientException): Failed to attach interface (HTTP 500) (Request-ID: req-a2da4531-0f22-46b2-94a9-ac9a7b0c350f)

see from log

2014-07-21 17:06:16.543 ERROR nova.compute.manager [req-375c0956-70f8-4d3a-b659-f48666787198 admin admin] allocate_port_for_instance returned 2 ports

check nova show output

[tagett@stack-01 devstack]$ nova --os-compute-api-version 2 show vm1
+--------------------------------------+----------------------------------------------------------+
| Property                             | Value                                                    |
+--------------------------------------+----------------------------------------------------------+
| OS-DCF:diskConfig                    | AUTO                                                     |
| OS-EXT-AZ:availability_zone          | nova                                                     |
| OS-EXT-SRV-ATTR:host                 | stack-01                                                 |
| OS-EXT-SRV-ATTR:hypervisor_hostname  | stack-01                                                 |
| OS-EXT-SRV-ATTR:instance_name        | instance-0004                                            |
| OS-EXT-STS:power_state               | 1                                                        |
| OS-EXT-STS:task_state                | -                                                        |
| OS-EXT-STS:vm_state                  | active                                                   |
| OS-SRV-USG:launched_at               | 2014-07-21T08:22:55.00                                   |
| OS-SRV-USG:terminated_at             | -                                                        |
| accessIPv4                           |                                                          |
| accessIPv6                           |                                                          |
| config_drive                         |                                                          |
| created                              | 2014-07-21T08:22:45Z                                     |
| flavor                               | m1.nano (42)                                             |
| hostId                               | ce82402eef4265e64ca9544980007863759f58411f2afb81e1918d90 |
| id                                   | 628b759d-d8ad-4646-b2c3-979151efdf83                     |
[Yahoo-eng-team] [Bug 1346092] [NEW] RBD helper utils in libvirt driver code need to be moved to separate module

2014-07-21 Thread Daniel Berrange
Public bug reported:

The libvirt imagebackend.py file has a lot of helper APIs for dealing
with the RBD utilities. It is desirable that these all be isolated in a
standalone rbd.py file, to be called by imagebackend.py. This will make
the core logic in imagebackend.py easier to follow and make the rbd
helpers easier to test.

** Affects: nova
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1346092

Title:
  RBD helper utils in libvirt driver code need to be moved to separate
  module

Status in OpenStack Compute (Nova):
  New

Bug description:
  The libvirt imagebackend.py file has a lot of helper APIs for dealing
  with the RBD utilities. It is desirable that these all be isolated in
  a standalone rbd.py file, to be called by imagebackend.py. This will
  make the core logic in imagebackend.py easier to follow and make the
  rbd helpers easier to test.

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1346092/+subscriptions



[Yahoo-eng-team] [Bug 1262124] Re: Ceilometer cannot poll and publish floatingip samples

2014-07-21 Thread Liusheng
** Changed in: openstack-manuals
   Status: Incomplete => Invalid

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1262124

Title:
  Ceilometer cannot poll and publish floatingip samples

Status in OpenStack Telemetry (Ceilometer):
  Fix Released
Status in OpenStack Compute (Nova):
  Fix Released
Status in OpenStack Manuals:
  Invalid
Status in Python client library for Nova:
  Fix Released

Bug description:
  The ceilometer central agent polls and publishes floatingip samples, along with other sample types, but it cannot get valid floatingip samples.
  The reason is that the ceilometer floatingip pollster calls the "list" method of nova.api.openstack.compute.contrib.floating_ips.FloatingIPController, and this API returns floating IPs filtered by context.project_id.

  The current context.project_id is the id of the tenant "service", so the
  result is {"floatingips": []}.

  The logs of nova-api-os-compute are here:

  http://paste.openstack.org/show/55285/

  Here, ceilometer invokes novaclient to list floating IPs, and novaclient
  calls the nova API; the nova API then calls the nova network API or the
  neutron API with:
  client.list_floatingips(tenant_id=project_id)['floatingips']

  Novaclient cannot list another tenant's floating IPs, only those of the
  tenant in the current context.

  So, I think we should modify the nova API by adding a parameter like
  "all_tenant", accessible to the admin role.

  Could this be confirmed?
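
  The project-scoped filtering described above can be sketched as follows;
  the data and the all_tenants escape hatch are illustrative, not the
  actual nova/neutron code:

```python
# Toy data: floating IPs owned by two ordinary tenants.
FLOATING_IPS = [
    {"id": "fip-1", "tenant_id": "tenant-a"},
    {"id": "fip-2", "tenant_id": "tenant-b"},
]

def list_floatingips(context_project_id, all_tenants=False):
    """Scoped like the API in the report: only the caller's tenant is seen."""
    if all_tenants:  # proposed admin-only parameter
        return {"floatingips": list(FLOATING_IPS)}
    return {"floatingips": [f for f in FLOATING_IPS
                            if f["tenant_id"] == context_project_id]}

# Polling as the "service" tenant matches nothing, reproducing the empty
# {"floatingips": []} result described in the report.
```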

To manage notifications about this bug go to:
https://bugs.launchpad.net/ceilometer/+bug/1262124/+subscriptions
