[Yahoo-eng-team] [Bug 1274427] [NEW] Instance list

2014-01-30 Thread Maithem
Public bug reported:

When there are more than 20 instances, there is only a next button to
see the next 20 instances, but there isn't a back button, so you can't
get back to the previous 20 instances.
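For context, the list pages with a forward marker + limit scheme, which is why "back" needs extra state on the dashboard side. A minimal sketch of the mechanics (function names here are illustrative, not Horizon's actual code):

```python
# Illustrative sketch of marker-based paging: the API only accepts a forward
# marker, so paging backwards requires the UI to remember earlier markers.
def page(items, marker=None, limit=20):
    """Return the `limit` items after `marker` (None means start at the top)."""
    start = 0 if marker is None else items.index(marker) + 1
    return items[start:start + limit]

instances = list(range(1, 46))               # 45 fake instances
page1 = page(instances)                      # items 1..20
page2 = page(instances, marker=page1[-1])    # items 21..40
# To offer "back" from page2, the dashboard must have kept page1's marker
# itself; the API gives it no way to ask for the previous page.
```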

** Affects: horizon
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1274427

Title:
  Instance list

Status in OpenStack Dashboard (Horizon):
  New

Bug description:
  When there are more than 20 instances, there is only a next button to
  see the next 20 instances, but there isn't a back button, so you can't
  get back to the previous 20 instances.

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1274427/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1274439] [NEW] VMware: operations against VC/ESX are taking at least 5 seconds

2014-01-30 Thread Gary Kotton
Public bug reported:

The default value of task_poll_interval is 5 seconds. This means that any
operation against the VC will wait at least 5 seconds.
As an example, a spawn operation would take 25 seconds on average. When this
parameter was changed to 0.2 - 1.0 seconds, the operation would take 9 seconds
on average.
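As a workaround until the fix lands, the interval can be lowered in nova.conf. A sketch, assuming the option sits in the [vmware] section (the group name has varied between releases, so check your release's config reference):

```ini
[vmware]
# Poll outstanding VC task status every 0.5 seconds instead of the 5 second default.
task_poll_interval = 0.5
```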

** Affects: nova
 Importance: High
 Assignee: Gary Kotton (garyk)
 Status: In Progress

** Changed in: nova
 Assignee: (unassigned) => Gary Kotton (garyk)

** Changed in: nova
Milestone: None => icehouse-3

** Changed in: nova
   Importance: Undecided => High

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1274439

Title:
  VMware: operations against VC/ESX are taking at least 5 seconds

Status in OpenStack Compute (Nova):
  In Progress

Bug description:
  The default value of task_poll_interval is 5 seconds. This means that any
operation against the VC will wait at least 5 seconds.
  As an example, a spawn operation would take 25 seconds on average. When this
parameter was changed to 0.2 - 1.0 seconds, the operation would take 9 seconds
on average.

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1274439/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1274457] [NEW] VM instance is not automatically removed from table after completing the deletion of an instance that was in error state

2014-01-30 Thread Gustavo Knüppe
Public bug reported:

When you terminate an instance that is in error state, Horizon schedules
the task to terminate the instance and shows a "deleting..." description
in the Task column, but no auto-refresh occurs. As a result, the instance
is not removed from the table after the deletion completes. A manual page
refresh is required to remove the instance from the table.

** Affects: horizon
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1274457

Title:
  VM instance is not automatically removed from table after completing
  the deletion of an instance that was in error state

Status in OpenStack Dashboard (Horizon):
  New

Bug description:
  When you terminate an instance that is in error state, Horizon
  schedules the task to terminate the instance and shows a "deleting..."
  description in the Task column, but no auto-refresh occurs. As a
  result, the instance is not removed from the table after the deletion
  completes. A manual page refresh is required to remove the instance
  from the table.

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1274457/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1274487] [NEW] neutron-metadata-agent incorrectly passes keystone token to neutronclient

2014-01-30 Thread Ihar Hrachyshka
Public bug reported:

When instantiating a neutron client, the agent passes the keystone token
to the object's __init__ as the auth_token= keyword argument, while
neutronclient expects token=. This results in excessive interaction with
keystone on cloud-init service startup, because each request from an
instance to the metadata agent results in a new token request.
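A stand-in illustration of the failure mode (this is not the actual neutronclient signature): when a client collects unknown keywords into **kwargs, the misnamed auth_token= is silently dropped and the cached token is never used:

```python
# Stand-in client, not neutronclient itself: token= is the expected keyword,
# and anything else is swallowed by **kwargs without complaint.
class FakeNeutronClient:
    def __init__(self, token=None, **kwargs):
        self.token = token        # what the real client expects
        self.ignored = kwargs     # a misnamed auth_token= lands here unnoticed

broken = FakeNeutronClient(auth_token="abc123")  # token stays None
fixed = FakeNeutronClient(token="abc123")        # cached token is reused
```

With the wrong keyword the client has no token, so every metadata request falls back to fetching a fresh one from keystone.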

** Affects: neutron
 Importance: Undecided
 Assignee: Ihar Hrachyshka (ihar-hrachyshka)
 Status: New

** Changed in: neutron
 Assignee: (unassigned) => Ihar Hrachyshka (ihar-hrachyshka)

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1274487

Title:
  neutron-metadata-agent incorrectly passes keystone token to
  neutronclient

Status in OpenStack Neutron (virtual network service):
  New

Bug description:
  When instantiating a neutron client, the agent passes the keystone
  token to the object's __init__ as the auth_token= keyword argument,
  while neutronclient expects token=. This results in excessive
  interaction with keystone on cloud-init service startup, because each
  request from an instance to the metadata agent results in a new token
  request.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1274487/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1270845] Re: nova-api-metadata - refused to start due to missing fake_network configuration

2014-01-30 Thread Chuck Short
** Also affects: nova (Ubuntu)
   Importance: Undecided
   Status: New

** Also affects: nova (Ubuntu Trusty)
   Importance: Undecided
   Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1270845

Title:
  nova-api-metadata - refused to start due to missing fake_network
  configuration

Status in devstack - openstack dev environments:
  Triaged
Status in OpenStack Compute (Nova):
  Confirmed
Status in “nova” package in Ubuntu:
  New
Status in “nova” source package in Trusty:
  New

Bug description:
  nova from trunk testing packages; the nova-api-metadata service fails
  on start:

  2014-01-20 14:22:04.593 4291 INFO nova.network.driver [-] Loading network driver 'nova.network.linux_net'
  2014-01-20 14:22:04.598 4291 CRITICAL nova [-] no such option: fake_network
  2014-01-20 14:22:04.598 4291 TRACE nova Traceback (most recent call last):
  2014-01-20 14:22:04.598 4291 TRACE nova   File "/usr/bin/nova-api-metadata", line 10, in <module>
  2014-01-20 14:22:04.598 4291 TRACE nova     sys.exit(main())
  2014-01-20 14:22:04.598 4291 TRACE nova   File "/usr/lib/python2.7/dist-packages/nova/cmd/api_metadata.py", line 48, in main
  2014-01-20 14:22:04.598 4291 TRACE nova     server = service.WSGIService('metadata', use_ssl=should_use_ssl)
  2014-01-20 14:22:04.598 4291 TRACE nova   File "/usr/lib/python2.7/dist-packages/nova/service.py", line 339, in __init__
  2014-01-20 14:22:04.598 4291 TRACE nova     self.manager = self._get_manager()
  2014-01-20 14:22:04.598 4291 TRACE nova   File "/usr/lib/python2.7/dist-packages/nova/service.py", line 375, in _get_manager
  2014-01-20 14:22:04.598 4291 TRACE nova     return manager_class()
  2014-01-20 14:22:04.598 4291 TRACE nova   File "/usr/lib/python2.7/dist-packages/nova/api/manager.py", line 32, in __init__
  2014-01-20 14:22:04.598 4291 TRACE nova     self.network_driver.metadata_accept()
  2014-01-20 14:22:04.598 4291 TRACE nova   File "/usr/lib/python2.7/dist-packages/nova/network/linux_net.py", line 658, in metadata_accept
  2014-01-20 14:22:04.598 4291 TRACE nova     iptables_manager.apply()
  2014-01-20 14:22:04.598 4291 TRACE nova   File "/usr/lib/python2.7/dist-packages/nova/network/linux_net.py", line 426, in apply
  2014-01-20 14:22:04.598 4291 TRACE nova     self._apply()
  2014-01-20 14:22:04.598 4291 TRACE nova   File "/usr/lib/python2.7/dist-packages/nova/openstack/common/lockutils.py", line 249, in inner
  2014-01-20 14:22:04.598 4291 TRACE nova     return f(*args, **kwargs)
  2014-01-20 14:22:04.598 4291 TRACE nova   File "/usr/lib/python2.7/dist-packages/nova/network/linux_net.py", line 446, in _apply
  2014-01-20 14:22:04.598 4291 TRACE nova     attempts=5)
  2014-01-20 14:22:04.598 4291 TRACE nova   File "/usr/lib/python2.7/dist-packages/nova/network/linux_net.py", line 1196, in _execute
  2014-01-20 14:22:04.598 4291 TRACE nova     if CONF.fake_network:
  2014-01-20 14:22:04.598 4291 TRACE nova   File "/usr/lib/python2.7/dist-packages/oslo/config/cfg.py", line 1648, in __getattr__
  2014-01-20 14:22:04.598 4291 TRACE nova     raise NoSuchOptError(name)
  2014-01-20 14:22:04.598 4291 TRACE nova NoSuchOptError: no such option: fake_network
  2014-01-20 14:22:04.598 4291 TRACE nova

  We use this service on network gateway nodes alongside the neutron
  metadata proxy for scale-out.
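The crash boils down to reading a config option that this binary never registered. A stdlib stand-in (mimicking oslo.config's behaviour, not using the library itself) shows the failure and the shape of the fix, which is registering the option before it is accessed:

```python
# Minimal stand-in for oslo.config's ConfigOpts: attribute access on an
# unregistered option raises NoSuchOptError, just like CONF.fake_network above.
class NoSuchOptError(AttributeError):
    pass

class ConfigOpts:
    def __init__(self):
        self._opts = {}

    def register_opt(self, name, default=None):
        self._opts[name] = default

    def __getattr__(self, name):
        # only reached when `name` is not found as a normal attribute
        try:
            return self._opts[name]
        except KeyError:
            raise NoSuchOptError("no such option: %s" % name)

CONF = ConfigOpts()
CONF.register_opt("fake_network", default=False)  # the missing registration
```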

To manage notifications about this bug go to:
https://bugs.launchpad.net/devstack/+bug/1270845/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1274523] [NEW] connection_trace does not work with DB2 backend

2014-01-30 Thread Matt Riedemann
Public bug reported:

When setting connection_trace=True, the stack trace does not get printed
for DB2 (ibm_db).

I have a patch that we've been using internally for this fix that I plan
to upstream soon, and with that we can get output like this:

2013-09-11 13:07:51.985 28151 INFO sqlalchemy.engine.base.Engine [-] SELECT 
services.created_at AS services_created_at, services.updated_at AS 
services_updated_at, services.deleted_at AS services_deleted_at, 
services.deleted AS services_deleted, services.id AS services_id, services.host 
AS services_host, services.binary AS services_binary, services.topic AS 
services_topic, services.report_count AS services_report_count, 
services.disabled AS services_disabled, services.disabled_reason AS 
services_disabled_reason
FROM services WHERE services.deleted = ? AND services.id = ? FETCH FIRST 1 ROWS 
ONLY
2013-09-11 13:07:51,985 INFO sqlalchemy.engine.base.Engine (0, 3)
2013-09-11 13:07:51.985 28151 INFO sqlalchemy.engine.base.Engine [-] (0, 3)
File 
/usr/lib/python2.6/site-packages/nova/servicegroup/drivers/db.py:92 
_report_state() service.service_ref, state_catalog)
File /usr/lib/python2.6/site-packages/nova/conductor/api.py:270 
service_update() return self._manager.service_update(context, service, values)
File 
/usr/lib/python2.6/site-packages/nova/openstack/common/rpc/common.py:420 
catch_client_exception() return func(*args, **kwargs)
File /usr/lib/python2.6/site-packages/nova/conductor/manager.py:461 
service_update() svc = self.db.service_update(context, service['id'], values)
File /usr/lib/python2.6/site-packages/nova/db/sqlalchemy/api.py:505 
service_update() with_compute_node=False, session=session)
File /usr/lib/python2.6/site-packages/nova/db/sqlalchemy/api.py:388 
_service_get() result = query.first()
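The feature itself is backend-agnostic in spirit: collect the interesting frames of the current stack and append them to the SQL. A stdlib-only sketch of that idea (not oslo's actual implementation, which hooks SQLAlchemy per dialect; the module skip list is an assumption):

```python
import traceback

# Append the non-library frames of the current call stack to a statement,
# producing the same shape of output shown above ("File path:line func()").
def annotate_with_trace(statement, skip=("sqlalchemy", "ibm_db")):
    frames = traceback.extract_stack()[:-1]   # drop this function's own frame
    trace_lines = [
        "File %s:%d %s()" % (f.filename, f.lineno, f.name)
        for f in frames
        if not any(s in f.filename for s in skip)
    ]
    return "\n".join([statement] + trace_lines)
```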

** Affects: cinder
 Importance: Undecided
 Status: New

** Affects: glance
 Importance: Undecided
 Status: New

** Affects: heat
 Importance: Undecided
 Status: New

** Affects: keystone
 Importance: Undecided
 Status: New

** Affects: nova
 Importance: Undecided
 Status: New

** Affects: oslo
 Importance: Undecided
 Assignee: Matt Riedemann (mriedem)
 Status: New


** Tags: db

** Changed in: oslo
 Assignee: (unassigned) => Matt Riedemann (mriedem)

** Also affects: nova
   Importance: Undecided
   Status: New

** Also affects: glance
   Importance: Undecided
   Status: New

** Also affects: cinder
   Importance: Undecided
   Status: New

** Also affects: keystone
   Importance: Undecided
   Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1274523

Title:
  connection_trace does not work with DB2 backend

Status in Cinder:
  New
Status in OpenStack Image Registry and Delivery Service (Glance):
  New
Status in Orchestration API (Heat):
  New
Status in OpenStack Identity (Keystone):
  New
Status in OpenStack Compute (Nova):
  New
Status in Oslo - a Library of Common OpenStack Code:
  New

Bug description:
  When setting connection_trace=True, the stack trace does not get
  printed for DB2 (ibm_db).

  I have a patch that we've been using internally for this fix that I
  plan to upstream soon, and with that we can get output like this:

  2013-09-11 13:07:51.985 28151 INFO sqlalchemy.engine.base.Engine [-] SELECT 
services.created_at AS services_created_at, services.updated_at AS 
services_updated_at, services.deleted_at AS services_deleted_at, 
services.deleted AS services_deleted, services.id AS services_id, services.host 
AS services_host, services.binary AS services_binary, services.topic AS 
services_topic, services.report_count AS services_report_count, 
services.disabled AS services_disabled, services.disabled_reason AS 
services_disabled_reason
  FROM services WHERE services.deleted = ? AND services.id = ? FETCH FIRST 1 
ROWS ONLY
  2013-09-11 13:07:51,985 INFO sqlalchemy.engine.base.Engine (0, 3)
  2013-09-11 13:07:51.985 28151 INFO sqlalchemy.engine.base.Engine [-] (0, 3)
  File 
/usr/lib/python2.6/site-packages/nova/servicegroup/drivers/db.py:92 
_report_state() service.service_ref, state_catalog)
  File /usr/lib/python2.6/site-packages/nova/conductor/api.py:270 
service_update() return self._manager.service_update(context, service, values)
  File 
/usr/lib/python2.6/site-packages/nova/openstack/common/rpc/common.py:420 
catch_client_exception() return func(*args, **kwargs)
  File /usr/lib/python2.6/site-packages/nova/conductor/manager.py:461 
service_update() svc = self.db.service_update(context, service['id'], values)
  File /usr/lib/python2.6/site-packages/nova/db/sqlalchemy/api.py:505 
service_update() with_compute_node=False, session=session)
  File /usr/lib/python2.6/site-packages/nova/db/sqlalchemy/api.py:388 

[Yahoo-eng-team] [Bug 1274536] [NEW] Neutron metadata agent can't handle high loads

2014-01-30 Thread Brian Haley
Public bug reported:

We've noticed that under high loads - hundreds of VMs booting
simultaneously from multiple tenants - the metadata agent simply can't
keep up, and the VMs fail to get their metadata.

Debugging showed that the listen backlog was overflowing, but increasing
it manually only made things a little better.  It wasn't until multiple
worker threads were started that the problem went away.

Opening this bug to get these upstream since they help with the
scalability of Neutron.

1) Change the metadata-agent to support having multiple worker threads
2) Change it to support a configurable listen backlog

I'll assign this to myself and get the changes out for review.
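The two proposed changes can be sketched with the stdlib (a stand-in, not the agent's actual eventlet-based code): a listening socket with a configurable accept backlog, and a pool of worker threads draining it:

```python
import socket
import threading
from concurrent.futures import ThreadPoolExecutor

def handle(conn):
    # trivial stand-in for proxying one metadata request: echo the payload
    with conn:
        conn.sendall(conn.recv(1024))

def start_server(backlog=4096, workers=4, requests=1):
    sock = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    sock.bind(("127.0.0.1", 0))
    sock.listen(backlog)                             # change 2: tunable backlog
    pool = ThreadPoolExecutor(max_workers=workers)   # change 1: worker threads

    def acceptor():
        for _ in range(requests):
            conn, _addr = sock.accept()
            pool.submit(handle, conn)

    threading.Thread(target=acceptor, daemon=True).start()
    return sock.getsockname()[1]
```

With a deep backlog, bursts of connections queue in the kernel instead of being refused, and the worker pool keeps the queue draining.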

** Affects: neutron
 Importance: Undecided
 Assignee: Brian Haley (brian-haley)
 Status: New

** Changed in: neutron
 Assignee: (unassigned) => Brian Haley (brian-haley)

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1274536

Title:
  Neutron metadata agent can't handle high loads

Status in OpenStack Neutron (virtual network service):
  New

Bug description:
  We've noticed that under high loads - hundreds of VMs booting
  simultaneously from multiple tenants - the metadata agent simply
  can't keep up, and the VMs fail to get their metadata.

  Debugging showed that the listen backlog was overflowing, but
  increasing it manually only made things a little better.  It wasn't
  until multiple worker threads were started that the problem went away.

  Opening this bug to get these upstream since they help with the
  scalability of Neutron.

  1) Change the metadata-agent to support having multiple worker threads
  2) Change it to support a configurable listen backlog

  I'll assign this to myself and get the changes out for review.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1274536/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1271102] Re: If live_migration fails, the VM stays in MIGRATING state

2014-01-30 Thread Tiago Rodrigues de Mello
** Changed in: nova
   Status: New => Invalid

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1271102

Title:
  If live_migration fails, the VM stays in MIGRATING state

Status in OpenStack Compute (Nova):
  Invalid

Bug description:
  During live_migration, if an InvalidSharedStorage exception is raised, the VM stays in the MIGRATING state.
  The sequence of calls between services is the following (the request is a live_migration from Compute src to Compute dest):
  Scheduler (rpc call) -> Compute dest : check_can_live_migrate_destination
  Compute dest (rpc call) -> Compute src : check_can_live_migrate_source

  The InvalidSharedStorage exception raised by Compute src is deserialised by Compute dest as InvalidSharedStorage_Remote.
  The InvalidSharedStorage_Remote exception raised by Compute dest is deserialised by the Scheduler as RemoteError.
  So the Scheduler never rolls back the status.
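A stand-in model of the two serialization hops (not nova's actual RPC code) shows why the scheduler's except clause no longer matches:

```python
# Each RPC hop replaces the exception with a wrapper that only carries the
# previous type's name as a string, so isinstance checks against
# InvalidSharedStorage fail on the scheduler side and the rollback is skipped.
class InvalidSharedStorage(Exception):
    pass

class RemoteError(Exception):
    def __init__(self, exc_type, message):
        self.exc_type = exc_type
        super().__init__(message)

def rpc_hop(exc):
    """What a caller receives after one serialize/deserialize round trip."""
    return RemoteError(type(exc).__name__, str(exc))

after_hop1 = rpc_hop(InvalidSharedStorage("no shared storage"))  # dest sees this
after_hop2 = rpc_hop(after_hop1)                                 # scheduler sees this
```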

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1271102/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1274523] Re: connection_trace does not work with DB2 backend

2014-01-30 Thread Matt Riedemann
** Also affects: heat
   Importance: Undecided
   Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1274523

Title:
  connection_trace does not work with DB2 backend

Status in Cinder:
  New
Status in OpenStack Image Registry and Delivery Service (Glance):
  New
Status in Orchestration API (Heat):
  New
Status in OpenStack Identity (Keystone):
  New
Status in OpenStack Compute (Nova):
  New
Status in Oslo - a Library of Common OpenStack Code:
  New

Bug description:
  When setting connection_trace=True, the stack trace does not get
  printed for DB2 (ibm_db).

  I have a patch that we've been using internally for this fix that I
  plan to upstream soon, and with that we can get output like this:

  2013-09-11 13:07:51.985 28151 INFO sqlalchemy.engine.base.Engine [-] SELECT 
services.created_at AS services_created_at, services.updated_at AS 
services_updated_at, services.deleted_at AS services_deleted_at, 
services.deleted AS services_deleted, services.id AS services_id, services.host 
AS services_host, services.binary AS services_binary, services.topic AS 
services_topic, services.report_count AS services_report_count, 
services.disabled AS services_disabled, services.disabled_reason AS 
services_disabled_reason
  FROM services WHERE services.deleted = ? AND services.id = ? FETCH FIRST 1 
ROWS ONLY
  2013-09-11 13:07:51,985 INFO sqlalchemy.engine.base.Engine (0, 3)
  2013-09-11 13:07:51.985 28151 INFO sqlalchemy.engine.base.Engine [-] (0, 3)
  File 
/usr/lib/python2.6/site-packages/nova/servicegroup/drivers/db.py:92 
_report_state() service.service_ref, state_catalog)
  File /usr/lib/python2.6/site-packages/nova/conductor/api.py:270 
service_update() return self._manager.service_update(context, service, values)
  File 
/usr/lib/python2.6/site-packages/nova/openstack/common/rpc/common.py:420 
catch_client_exception() return func(*args, **kwargs)
  File /usr/lib/python2.6/site-packages/nova/conductor/manager.py:461 
service_update() svc = self.db.service_update(context, service['id'], values)
  File /usr/lib/python2.6/site-packages/nova/db/sqlalchemy/api.py:505 
service_update() with_compute_node=False, session=session)
  File /usr/lib/python2.6/site-packages/nova/db/sqlalchemy/api.py:388 
_service_get() result = query.first()

To manage notifications about this bug go to:
https://bugs.launchpad.net/cinder/+bug/1274523/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1227321] Re: DBDuplicateEntry not being translated for DB2

2014-01-30 Thread Matt Riedemann
Apparently keystone handles this differently from oslo:

http://git.openstack.org/cgit/openstack/keystone/tree/keystone/common/sql/core.py#n276

** Changed in: oslo
   Status: In Progress => New

** Also affects: cinder
   Importance: Undecided
   Status: New

** Also affects: glance
   Importance: Undecided
   Status: New

** Also affects: keystone
   Importance: Undecided
   Status: New

** Also affects: heat
   Importance: Undecided
   Status: New

** Also affects: neutron
   Importance: Undecided
   Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1227321

Title:
  DBDuplicateEntry not being translated for DB2

Status in Cinder:
  New
Status in OpenStack Image Registry and Delivery Service (Glance):
  New
Status in Orchestration API (Heat):
  New
Status in OpenStack Identity (Keystone):
  New
Status in OpenStack Neutron (virtual network service):
  New
Status in OpenStack Compute (Nova):
  Opinion
Status in Oslo - a Library of Common OpenStack Code:
  New

Bug description:
  The
  
tempest.api.compute.keypairs.test_keypairs.test_create_keypair_with_duplicate_name
  test fails if you're running with a DB2 backend because the nova code
  is not currently translating the db integrity error if the backing
  engine is DB2 (ibm_db_sa) in
  nova.openstack.common.db.sqlalchemy.session._raise_if_duplicate_entry_error.

  In full disclosure, nova is not claiming support for DB2; there is a
  lot of work that would need to be done for that, which my team is
  planning for icehouse, and there is a blueprint here:

  https://blueprints.launchpad.net/nova/+spec/db2-database

  My team does have DB2 10.5 working with nova trunk but we have changes
  to the migration scripts to support that.  Also, you have to run with
  the DB2 patch for sqlalchemy-migrate posted here:

  https://code.google.com/p/sqlalchemy-migrate/issues/detail?id=151

  And you must run with the ibm-db/ibm-db-sa drivers:

  https://code.google.com/p/ibm-db/source/clones?repo=ibm-db-sa

  We're trying to get the sqlalchemy-migrate support for DB2 accepted in
  the icehouse timeframe but need to show the migrate maintainer that he
  can use the free express-c version of DB2 in ubuntu for the test
  backend.

  Anyway, having said all that, fixing the DBDuplicateEntry translation
  is part of the story so I'm opening a bug to track it and get the
  patch up to get the ball rolling.
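The translation step itself is small: recognise the backend's duplicate-key error message and raise DBDuplicateEntry. A sketch of that idea (the regexes here are illustrative assumptions, not oslo's exact patterns; SQL0803N is DB2's duplicate-key SQLCODE):

```python
import re

class DBDuplicateEntry(Exception):
    pass

# Per-engine patterns for recognising a duplicate-key error in the backend's
# message text; adding an ibm_db_sa entry is the gist of the fix.
_DUP_PATTERNS = {
    "mysql": re.compile(r"Duplicate entry .* for key '.*'"),
    "ibm_db_sa": re.compile(r"SQL0803N"),
}

def raise_if_duplicate_entry_error(engine_name, message):
    pattern = _DUP_PATTERNS.get(engine_name)
    if pattern and pattern.search(message):
        raise DBDuplicateEntry(message)
```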

To manage notifications about this bug go to:
https://bugs.launchpad.net/cinder/+bug/1227321/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1274581] [NEW] keystone ldap identity backend will not work without TLS_CACERT path specified in an ldap.conf file

2014-01-30 Thread Matt Fischer
Public bug reported:

I'm on Ubuntu 12.04 using havana 2013.2.1. What I've found is that the
LDAP identity backend for keystone will not talk to my LDAP server
(using ldaps) unless I have an ldap.conf that contains a TLS_CACERT
line. This line duplicates the setting of tls_cacertfile in my keystone
conf and therefore I don't see why it should be required. The rest of my
/etc/ldap/ldap.conf file is default/commented out. When I don't have
this line set I get a SERVER_DOWN error. I am using LDAP from a FreeIPA
server if that matters.

Error message from the logs:
2014-01-30 16:24:17.168 21174 TRACE keystone.common.wsgi SERVER_DOWN: {'info': '(unknown error code)', 'desc': "Can't contact LDAP server"}

and from the CLI:
Authorization Failed: An unexpected error prevented the server from fulfilling your request. {'info': '(unknown error code)', 'desc': "Can't contact LDAP server"} (HTTP 500)

Below are relevant sections of my configs:

/etc/ldap/ldap.conf:
#
# LDAP Defaults
#

# See ldap.conf(5) for details
# This file should be world readable but not world writable.

#BASE   dc=example,dc=com
#URI    ldap://ldap.example.com ldap://ldap-master.example.com:666

#SIZELIMIT  12
#TIMELIMIT  15
#DEREF  never

# TLS certificates (needed for GnuTLS)
TLS_CACERT  /etc/ssl/certs/ca-certificates.crt

-

keystone.conf:

[identity]
driver = keystone.identity.backends.ldap.Identity
...
[ldap]
url = ldaps://ldap.example.com:636
user = uid=mfischer,cn=users,cn=accounts,dc=example,dc=com
password = GoBroncos

...
use_tls = False
tls_cacertfile = /etc/ssl/certs/ca-certificates.crt
# tls_cacertdir =
tls_req_cert = demand

-

** Affects: keystone
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to Keystone.
https://bugs.launchpad.net/bugs/1274581

Title:
  keystone ldap identity backend will not work without TLS_CACERT path
  specified in an ldap.conf file

Status in OpenStack Identity (Keystone):
  New

Bug description:
  I'm on Ubuntu 12.04 using havana 2013.2.1. What I've found is that the
  LDAP identity backend for keystone will not talk to my LDAP server
  (using ldaps) unless I have an ldap.conf that contains a TLS_CACERT
  line. This line duplicates the setting of tls_cacertfile in my
  keystone conf and therefore I don't see why it should be required. The
  rest of my /etc/ldap/ldap.conf file is default/commented out. When I
  don't have this line set I get a SERVER_DOWN error. I am using LDAP
  from a FreeIPA server if that matters.

  Error message from the logs:
  2014-01-30 16:24:17.168 21174 TRACE keystone.common.wsgi SERVER_DOWN: {'info': '(unknown error code)', 'desc': "Can't contact LDAP server"}

  and from the CLI:
  Authorization Failed: An unexpected error prevented the server from fulfilling your request. {'info': '(unknown error code)', 'desc': "Can't contact LDAP server"} (HTTP 500)

  Below are relevant sections of my configs:

  /etc/ldap/ldap.conf:
  #
  # LDAP Defaults
  #

  # See ldap.conf(5) for details
  # This file should be world readable but not world writable.

  #BASE   dc=example,dc=com
  #URI    ldap://ldap.example.com ldap://ldap-master.example.com:666

  #SIZELIMIT  12
  #TIMELIMIT  15
  #DEREF  never

  # TLS certificates (needed for GnuTLS)
  TLS_CACERT  /etc/ssl/certs/ca-certificates.crt

  -

  keystone.conf:

  [identity]
  driver = keystone.identity.backends.ldap.Identity
  ...
  [ldap]
  url = ldaps://ldap.example.com:636
  user = uid=mfischer,cn=users,cn=accounts,dc=example,dc=com
  password = GoBroncos

  ...
  use_tls = False
  tls_cacertfile = /etc/ssl/certs/ca-certificates.crt
  # tls_cacertdir =
  tls_req_cert = demand

  -

To manage notifications about this bug go to:
https://bugs.launchpad.net/keystone/+bug/1274581/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1274611] [NEW] nova-network bridge setup fails if the interface address has 'dynamic' flag

2014-01-30 Thread Xavier Queralt
Public bug reported:

While setting the bridge up, if the network interface has a dynamic
address, the 'dynamic' flag will be displayed in the "ip addr show"
output:

[fedora@dev1 devstack]$ ip addr show dev eth0 scope global
2: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP group default qlen 1000
link/ether 52:54:00:00:00:01 brd ff:ff:ff:ff:ff:ff
inet 192.168.122.2/24 brd 192.168.122.255 scope global dynamic eth0
   valid_lft 2225sec preferred_lft 2225sec

When later executing "ip addr del" with the IPv4 details, the 'dynamic'
flag is not accepted, which causes the command to fail and leaves the
bridge half configured.
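A sketch of the likely fix (the sample line comes from the output above; the list of flag words to strip is an assumption): filter out flags that "ip addr del" refuses before replaying the fields back to it.

```python
# Flag words that appear in `ip addr show` output but are not valid
# arguments to `ip addr del` (assumed list for illustration).
UNSUPPORTED_DEL_FLAGS = ("dynamic",)

def fields_for_del(show_line):
    """Return the address fields with unsupported flag words removed."""
    return [f for f in show_line.split() if f not in UNSUPPORTED_DEL_FLAGS]

line = "inet 192.168.122.2/24 brd 192.168.122.255 scope global dynamic eth0"
args = fields_for_del(line)   # now safe to pass to `ip addr del`
```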

** Affects: nova
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1274611

Title:
  nova-network bridge setup fails if the interface address has 'dynamic'
  flag

Status in OpenStack Compute (Nova):
  New

Bug description:
  While setting the bridge up, if the network interface has a dynamic
  address, the 'dynamic' flag will be displayed in the "ip addr show"
  output:

  [fedora@dev1 devstack]$ ip addr show dev eth0 scope global
  2: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP group default qlen 1000
  link/ether 52:54:00:00:00:01 brd ff:ff:ff:ff:ff:ff
  inet 192.168.122.2/24 brd 192.168.122.255 scope global dynamic eth0
     valid_lft 2225sec preferred_lft 2225sec

  When later executing "ip addr del" with the IPv4 details, the
  'dynamic' flag is not accepted, which causes the command to fail and
  leaves the bridge half configured.

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1274611/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1251590] Re: [OSSA 2014-003] Live migration can leak root disk into ephemeral storage (CVE-2013-7130)

2014-01-30 Thread Thierry Carrez
** Changed in: ossa
   Status: Fix Committed => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1251590

Title:
  [OSSA 2014-003] Live migration can leak root disk into ephemeral
  storage (CVE-2013-7130)

Status in OpenStack Compute (Nova):
  Fix Committed
Status in OpenStack Compute (nova) grizzly series:
  Fix Committed
Status in OpenStack Compute (nova) havana series:
  Fix Committed
Status in OpenStack Security Advisories:
  Fix Released

Bug description:
  During pre-live-migration required disks are created along with their
  backing files (if they don't already exist). However, the ephemeral
  backing file is created from a glance downloaded root disk.

  # If the required ephemeral backing file is present then there's no
  issue.

  # If the required ephemeral backing file is not already present, then
  the root disk is downloaded and saved as the ephemeral backing file.
  This will result in the following situations:

  ## The disk.local transferred during live-migration will be rebased on the 
ephemeral backing file so regardless of the content, the end result will be 
identical to the source disk.local.
  ## However, if a new instance of the same flavor is spawned on this compute 
node, then it will have an ephemeral storage that exposes a root disk.

  Security concerns:

  If the migrated VM was spawned off a snapshot, it is now possible for
  any instance of the correct flavor to see the snapshot contents of
  another user via the ephemeral storage.
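One way to avoid the leak, sketched under the assumption that a blank backing file is acceptable (this is not necessarily the actual upstream fix): create a missing ephemeral backing file as an empty sparse file rather than reusing the downloaded root disk.

```python
# Sketch: if the ephemeral backing file is missing, create a blank
# sparse file of the right size instead of copying the glance root
# disk, so no instance data can leak into later instances' ephemerals.
# (A real implementation would also run mkfs on it.)
import os
import tempfile

def ensure_ephemeral_backing(path, size_bytes):
    if not os.path.exists(path):
        with open(path, 'wb') as f:
            f.truncate(size_bytes)  # reads back as all zeroes
    return path

# Demo: a 4 KiB blank backing file in a scratch directory.
demo_path = ensure_ephemeral_backing(
    os.path.join(tempfile.mkdtemp(), 'ephemeral_1_default'), 4096)
demo_size = os.path.getsize(demo_path)
```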

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1251590/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1274627] [NEW] Volume attach/detach should be blocked during some operations

2014-01-30 Thread Phil Day
Public bug reported:

Currently volume attach, detach, and swap check on vm_state but not
task_state.  This means that, for example, volume attach is allowed
during a reboot, rebuild, or migration.

As with other operations, the check should be against a task state of
None.
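A hedged sketch of the proposed guard (names are illustrative, not nova's actual decorator plumbing): reject the operation unless the instance's task_state is None.

```python
# Illustrative guard, not nova's real code: refuse volume
# attach/detach/swap while the instance has any pending task_state.

class InstanceInvalidState(Exception):
    pass

def require_task_state_none(fn):
    def wrapper(self, context, instance, *args, **kwargs):
        if instance.get('task_state') is not None:
            raise InstanceInvalidState(
                '%s not allowed while task_state is %s'
                % (fn.__name__, instance['task_state']))
        return fn(self, context, instance, *args, **kwargs)
    return wrapper

class ComputeAPI(object):
    @require_task_state_none
    def attach_volume(self, context, instance, volume_id):
        return 'attached %s' % volume_id

api = ComputeAPI()
ok = api.attach_volume(None, {'task_state': None}, 'vol-1')
try:
    # e.g. a reboot in progress: attach must be rejected
    api.attach_volume(None, {'task_state': 'rebooting'}, 'vol-1')
    blocked = False
except InstanceInvalidState:
    blocked = True
```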

** Affects: nova
 Importance: Undecided
 Assignee: Phil Day (philip-day)
 Status: New

** Changed in: nova
 Assignee: (unassigned) => Phil Day (philip-day)

** Changed in: nova
Milestone: None => icehouse-3

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1274627

Title:
  Volume attach/detach should be blocked during some operations

Status in OpenStack Compute (Nova):
  New

Bug description:
  Currently volume attach, detach, and swap check on vm_state but not
  task_state.  This means that, for example, volume attach is allowed
  during a reboot, rebuild, or migration.

  As with other operations, the check should be against a task state of
  None.

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1274627/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1274638] [NEW] REST API translation not working on authentication failure

2014-01-30 Thread Luis A. Garcia
Public bug reported:

The error message returned from a REST API authentication that fails is
not getting translated in --debug mode and in normal mode.

In debug mode the expected translation is an error saying "Invalid user
/ password" and in normal mode the error message is the generic "The
request that you made requires authentication."

They are both failing to translate and apparently for different reasons.

** Affects: keystone
 Importance: Undecided
 Assignee: Luis A. Garcia (luisg-8)
 Status: Confirmed

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to Keystone.
https://bugs.launchpad.net/bugs/1274638

Title:
  REST API translation not working on authentication failure

Status in OpenStack Identity (Keystone):
  Confirmed

Bug description:
  The error message returned from a REST API authentication that fails
  is not getting translated in --debug mode and in normal mode.

  In debug mode the expected translation is an error saying "Invalid
  user / password" and in normal mode the error message is the generic
  "The request that you made requires authentication."

  They are both failing to translate and apparently for different
  reasons.

To manage notifications about this bug go to:
https://bugs.launchpad.net/keystone/+bug/1274638/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1274654] [NEW] Success message when updating default quotas is not very good

2014-01-30 Thread Justin Pomeroy
Public bug reported:

When I update the default quotas I get the message "Success: Default
quotas updated Update Default Quotas."  Seems to me this message is a
little awkward and should probably just say "Default quotas updated" or
something similar.

** Affects: horizon
 Importance: Undecided
 Status: New

** Attachment added: success_message.png
   
https://bugs.launchpad.net/bugs/1274654/+attachment/3963218/+files/success_message.png

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1274654

Title:
  Success message when updating default quotas is not very good

Status in OpenStack Dashboard (Horizon):
  New

Bug description:
  When I update the default quotas I get the message "Success: Default
  quotas updated Update Default Quotas."  Seems to me this message is a
  little awkward and should probably just say "Default quotas updated"
  or something similar.

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1274654/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1274653] [NEW] table column too wide causes actions to be cut off

2014-01-30 Thread Cindy Lu
Public bug reported:

When the browser window width gets small, the table doesn't resize
automatically anymore and there is no way to access the columns further
to the right.  Please see the attached image.

A suggestion could be that we should check that if the browser window
width is less than a certain value, the horizontal scroll should appear.

I don't know if this will be addressed in future UI enhancements or not.

** Affects: horizon
 Importance: Undecided
 Status: New

** Attachment added: 012914 - table width.png
   
https://bugs.launchpad.net/bugs/1274653/+attachment/3963217/+files/012914%20-%20table%20width.png

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1274653

Title:
  table column too wide causes actions to be cut off

Status in OpenStack Dashboard (Horizon):
  New

Bug description:
  When the browser window width gets small, the table doesn't resize
  automatically anymore and there is no way to access the columns
  further to the right.  Please see the attached image.

  A suggestion could be that we should check that if the browser window
  width is less than a certain value, the horizontal scroll should
  appear.

  I don't know if this will be addressed in future UI enhancements or
  not.

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1274653/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1274661] [NEW] Alt-shift does not change keyboard layout to Hungarian from English

2014-01-30 Thread Nagy Tamás
Public bug reported:


Alt-shift does not change the keyboard layout to Hungarian from English.
Pressing these keys does nothing. Clicking on the keyboard layout icon
on the tray in Unity does nothing: it opens, EN is visible, and even if I click
on it, HU will be visible again.

After 4-5 minutes it becomes possible to use Alt-Shift to change to
Hungarian.

Ubuntu 13.10. 
Firefox 26.0

Unity:
Architecture: amd64
Version: 7.1.2+13.10.20131014.1-0ubuntu1


The same bug occurs on the PCBSD 10.0 release version.

Sometimes, when switching to Hungarian succeeds, on many webpages (e.g. job portals)
using some special keys with AltGr will mark the full line and delete it:
because the line is marked, pressing a key deletes the marked characters.
After a while it is OK again without pressing anything.

Sometimes it opens the run console at the top when using Firefox,
waits a while and then closes it again, as if it were locked.

It looks like a buffer holds the typed characters but in some cases
is not flushed for some seconds...

** Affects: horizon
 Importance: Undecided
 Status: New


** Tags: alt keyboard layout shift unity

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1274661

Title:
  Alt-shift does not change keyboard layout to Hungarian from English

Status in OpenStack Dashboard (Horizon):
  New

Bug description:
  
  Alt-shift does not change the keyboard layout to Hungarian from English.
  Pressing these keys does nothing. Clicking on the keyboard layout icon
  on the tray in Unity does nothing: it opens, EN is visible, and even if I click
  on it, HU will be visible again.

  After 4-5 minutes it becomes possible to use Alt-Shift to change to
  Hungarian.

  Ubuntu 13.10. 
  Firefox 26.0

  Unity:
  Architecture: amd64
  Version: 7.1.2+13.10.20131014.1-0ubuntu1


  The same bug occurs on the PCBSD 10.0 release version.

  Sometimes, when switching to Hungarian succeeds, on many webpages (e.g. job portals)
  using some special keys with AltGr will mark the full line and delete it:
  because the line is marked, pressing a key deletes the marked characters.
  After a while it is OK again without pressing anything.

  Sometimes it opens the run console at the top when using Firefox,
  waits a while and then closes it again, as if it were locked.

  It looks like a buffer holds the typed characters but in some cases
  is not flushed for some seconds...

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1274661/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1274638] Re: REST API translation not working on authentication failure

2014-01-30 Thread Dolph Mathews
The difference in messages is very much by design. Authentication can
fail for any number of reasons, and those reasons are suppressed outside
of debug mode.

** Changed in: keystone
   Status: In Progress => Won't Fix

** Changed in: keystone
   Status: Won't Fix => Confirmed

** Changed in: keystone
   Importance: Undecided => Medium

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to Keystone.
https://bugs.launchpad.net/bugs/1274638

Title:
  REST API translation not working on authentication failure

Status in OpenStack Identity (Keystone):
  Confirmed

Bug description:
  The error message returned from a REST API authentication that fails
  is not getting translated in --debug mode and in normal mode.

  In debug mode the expected translation is an error saying "Invalid
  user / password" and in normal mode the error message is the generic
  "The request that you made requires authentication."

  They are both failing to translate and apparently for different
  reasons.

To manage notifications about this bug go to:
https://bugs.launchpad.net/keystone/+bug/1274638/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1230076] Re: glance.tests.unit.test_store_base.TestStoreBase.test_exception_to_unicode fails

2014-01-30 Thread Nicolas Simonds
Running tests against master w/ no changes:

==
FAIL: 
glance.tests.unit.test_store_base.TestStoreBase.test_exception_to_unicode
--
_StringException: Traceback (most recent call last):
  File /home/vagrant/glance/glance/tests/unit/test_store_base.py, line 
46, in test_exception_to_unicode
self.assertEqual(ret, ' error message')
  File 
/home/vagrant/glance/.tox/py27/local/lib/python2.7/site-packages/testtools/testcase.py,
 line 321, in assertEqual
self.assertThat(observed, matcher, message)
  File 
/home/vagrant/glance/.tox/py27/local/lib/python2.7/site-packages/testtools/testcase.py,
 line 406, in assertThat
raise mismatch_error
MismatchError: u'\xa5 error message' != ' error message'

# tox --version
1.7.0 imported from /usr/local/lib/python2.7/dist-packages/tox/__init__.pyc
# pip --version
pip 1.5.2 from /usr/local/lib/python2.7/dist-packages (python 2.7)
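For context, a safe decoding helper along the lines of what this test exercises might look like the following (the body is an assumption for illustration, not glance's actual implementation):

```python
# Hypothetical reconstruction of an exception_to_unicode helper: decode
# byte messages as UTF-8 and fall back to latin-1 so characters such as
# 0xa5 survive instead of raising or being dropped.
def exception_to_unicode(exc):
    msg = exc.args[0] if exc.args else u''
    if isinstance(msg, bytes):
        try:
            return msg.decode('utf-8')
        except UnicodeDecodeError:
            return msg.decode('latin-1')
    return msg

decoded = exception_to_unicode(Exception(b'\xa5 error message'))
# decoded == u'\xa5 error message', the value seen in the MismatchError
```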

** Changed in: glance
   Status: Invalid => Confirmed

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to Glance.
https://bugs.launchpad.net/bugs/1230076

Title:
  glance.tests.unit.test_store_base.TestStoreBase.test_exception_to_unicode
  fails

Status in OpenStack Image Registry and Delivery Service (Glance):
  Confirmed

Bug description:
  $ tox -e py27

  ==
  FAIL: 
glance.tests.unit.test_store_base.TestStoreBase.test_exception_to_unicode
  --
  _StringException: Traceback (most recent call last):
File /opt/stack/glance/glance/tests/unit/test_store_base.py, line 41, in 
test_exception_to_unicode
  self.assertEqual(ret, ' error message')
File 
/opt/stack/glance/.tox/py27/local/lib/python2.7/site-packages/testtools/testcase.py,
 line 322, in assertEqual
  self.assertThat(observed, matcher, message)
File 
/opt/stack/glance/.tox/py27/local/lib/python2.7/site-packages/testtools/testcase.py,
 line 417, in assertThat
  raise MismatchError(matchee, matcher, mismatch, verbose)
  MismatchError: u'\xa5 error message' != ' error message'

To manage notifications about this bug go to:
https://bugs.launchpad.net/glance/+bug/1230076/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1274715] [NEW] dearth of debug logs when LDAP user_name_attribute is incorrect

2014-01-30 Thread Matt Fischer
Public bug reported:

When I was first setting up a connection to LDAP via keystone I fought
through some configuration issues. One of the first issues was that I had
user_name_attribute incorrect, so that it could not validate my specified
user on a request like `keystone user-list`. Unfortunately when the
failure scenario here happens, you get no useful logging, even with
Debug and Verbose enabled. The only message available is:

2014-01-30 21:41:45.461 9499 WARNING keystone.common.wsgi [-]
Authorization failed. Could not find user, foo. from 10.33.0.17

and from the CLI:

root@test-03:~# keystone user-list
Could not find user, foo. (HTTP 401)

It's not even obvious from this that LDAP was used at all, much less what the
issue might be. I ended up adding my own logging, and
once I dumped the query that get_by_name ends up calling the issue was obvious:

(&(cn=foo)(objectClass=inetUser))

Since in my case cn was incorrect.

I've been digging some to see if I can add logging here without logging
every query call without too much success, although I've not had a ton
of time. If someone has a suggestion I'd be happy to work on it.
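A sketch of the kind of logging being requested, using hypothetical helper names (keystone's real LDAP backend is structured differently): build the search filter in one place and log it at debug level before the query runs.

```python
# Hypothetical: log the assembled LDAP filter so a wrong
# user_name_attribute is immediately visible with debug enabled.
import logging

LOG = logging.getLogger(__name__)

def build_user_filter(name_attr, object_class, name):
    # Standard LDAP AND filter: (&(<name_attr>=<name>)(objectClass=<class>))
    return '(&(%s=%s)(objectClass=%s))' % (name_attr, name, object_class)

def get_user_by_name(conn, tree_dn, name_attr, object_class, name):
    ldap_filter = build_user_filter(name_attr, object_class, name)
    LOG.debug('LDAP user lookup: base=%s filter=%s', tree_dn, ldap_filter)
    # conn is assumed to expose python-ldap's search_s(); scope 2 is
    # SCOPE_SUBTREE.
    return conn.search_s(tree_dn, 2, ldap_filter)

flt = build_user_filter('cn', 'inetUser', 'foo')
# flt == '(&(cn=foo)(objectClass=inetUser))' -- the query from the report
```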

** Affects: keystone
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to Keystone.
https://bugs.launchpad.net/bugs/1274715

Title:
  dearth of debug logs when LDAP user_name_attribute is incorrect

Status in OpenStack Identity (Keystone):
  New

Bug description:
  When I was first setting up a connection to LDAP via keystone I fought
  through some configuration issues. One of the first issues was that I
  had user_name_attribute incorrect, so that it could not validate my
  specified user on a request like `keystone user-list`. Unfortunately
  when the failure scenario here happens, you get no useful logging,
  even with Debug and Verbose enabled. The only message available is:

  2014-01-30 21:41:45.461 9499 WARNING keystone.common.wsgi [-]
  Authorization failed. Could not find user, foo. from 10.33.0.17

  and from the CLI:

  root@test-03:~# keystone user-list
  Could not find user, foo. (HTTP 401)

  It's not even obvious from this that LDAP was used at all, much less
  what the issue might be. I ended up adding my own logging, and once I
  dumped the query that get_by_name ends up calling the issue was
  obvious:

  (&(cn=foo)(objectClass=inetUser))

  Since in my case cn was incorrect.

  I've been digging some to see if I can add logging here without
  logging every query call without too much success, although I've not
  had a ton of time. If someone has a suggestion I'd be happy to work on
  it.

To manage notifications about this bug go to:
https://bugs.launchpad.net/keystone/+bug/1274715/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1274682] Re: Vlans are not cleanly deleted in SQL database leaving stale entries

2014-01-30 Thread Mark T. Voelker
This looks like a Neutron bug, not an installation issue.

** Changed in: openstack-cisco
 Assignee: (unassigned) => Kyle Mestery (mestery)

** Also affects: neutron
   Importance: Undecided
   Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1274682

Title:
  Vlans are not cleanly deleted in SQL database leaving stale entries

Status in OpenStack Neutron (virtual network service):
  New
Status in Openstack @ Cisco:
  New

Bug description:
  Steps to reproduce:

  - Create the tenant with a subnet.
  - Instantiate a VM on the subnet, which will create a VLAN using the
    neutron plugin for Nexus.
  - Now delete the subnet and the VLAN on the switch.
  - Create the subnet again; the controller will again allocate the same
    segmentation id, but it will not create the VLAN again on the Nexus
    switch.

  When looking at the SQL database, the previous VLAN entries are still
  present and the controller is not able to assign the VLANs correctly.
  The workaround is to select a different range of segmentation ids in
  the plugin.

  Error : 
  2014-01-29 13:47:03.059 2562 WARNING neutron.db.agentschedulers_db [-] Fail 
scheduling network {'status': u'ACTIVE', 'subnets': 
[u'5146ec1e-ad1d-4ca2-9e1d-e9e97126ae05'], 'name': u'External-1', 'provider
  :physical_network': u'physnet1', 'admin_state_up': True, 'tenant_id': 
u'adfdcc7e64904ab1b812ad1cbbf92f1a', 'provider:network_type': u'vlan', 
'router:external': False, 'shared': False, 'id': u'd71796ca-d1
  0c-4f4d-b742-1e720ce8b94e', 'provider:segmentation_id': 504L}
  2014-01-29 13:47:03.078 2562 ERROR neutron.api.v2.resource [-] 
add_router_interface failed
  2014-01-29 13:47:03.078 2562 TRACE neutron.api.v2.resource Traceback (most 
recent call last):
  2014-01-29 13:47:03.078 2562 TRACE neutron.api.v2.resource   File 
/usr/lib/python2.7/dist-packages/neutron/api/v2/resource.py, line 84, in 
resource
  2014-01-29 13:47:03.078 2562 TRACE neutron.api.v2.resource result = 
method(request=request, **args)
  2014-01-29 13:47:03.078 2562 TRACE neutron.api.v2.resource   File 
/usr/lib/python2.7/dist-packages/neutron/api/v2/base.py, line 185, in 
_handle_action
  2014-01-29 13:47:03.078 2562 TRACE neutron.api.v2.resource return 
getattr(self._plugin, name)(*arg_list, **kwargs)
  2014-01-29 13:47:03.078 2562 TRACE neutron.api.v2.resource   File 
/usr/lib/python2.7/dist-packages/neutron/plugins/cisco/models/virt_phy_sw_v2.py,
 line 439, in add_router_interface
  2014-01-29 13:47:03.078 2562 TRACE neutron.api.v2.resource raise 
cexc.SubnetNotSpecified()
  2014-01-29 13:47:03.078 2562 TRACE neutron.api.v2.resource 
SubnetNotSpecified: No subnet_id specified for router gateway.
  2014-01-29 13:47:03.078 2562 TRACE neutron.api.v2.resource 
  2014-01-29 13:51:28.220 2562 WARNING neutron.db.agentschedulers_db [-] Fail 
scheduling network {'status': u'ACTIVE', 'subnets': 
[u'e98ed533-bbc6-44b4-909a-a94992875c3d'], 'name': u'Tenant_coke', 'provide
  r:physical_network': u'physnet1', 'admin_state_up': True, 'tenant_id': 
u'adfdcc7e64904ab1b812ad1cbbf92f1a', 'provider:network_type': u'vlan', 
'router:external': False, 'shared': False, 'id': u'f1f0c30c-a
  14b-4b33-8212-5b8763dbd594', 'provider:segmentation_id': 503L}

  
  SQL Entries

  Old Stale Entries

  mysql> SELECT * FROM ovs_network_bindings;
  +--------------------------------------+--------------+------------------+-----------------+
  | network_id                           | network_type | physical_network | segmentation_id |
  +--------------------------------------+--------------+------------------+-----------------+
  | 07f60f68-8482-4972-aa7c-398f4cdf6abd | vlan         | physnet1         |             500 |
  | 8b034e8d-7b6a-4198-94bd-5278d7934c78 | vlan         | physnet1         |             502 |
  +--------------------------------------+--------------+------------------+-----------------+
  2 rows in set (0.00 sec)

  mysql> SELECT * FROM subnets;
  +----------------------------------+--------------------------------------+---------+--------------------------------------+------------+-----------------+--------------+-------------+--------+
  | tenant_id                        | id                                   | name    | network_id                           | ip_version | cidr            | gateway_ip   | enable_dhcp | shared |
  +----------------------------------+--------------------------------------+---------+--------------------------------------+------------+-----------------+--------------+-------------+--------+
  | adfdcc7e64904ab1b812ad1cbbf92f1a | 186b1806-2d44-420a-a48e-ea027dcae543 | Inet-1  | 8b034e8d-7b6a-4198-94bd-5278d7934c78 |          4 | 10.111.111.0/24 | 10.111.111.1 |           1 |      0 |
  | adfdcc7e64904ab1b812ad1cbbf92f1a | 2fce01d6-6616-4137-a451-d63c80567929 | Ext-Net | 07f60f68-8482-4972-aa7c-398f4cdf6abd |          4 |

[Yahoo-eng-team] [Bug 1274732] [NEW] Instances table doesn't get cleaned up

2014-01-30 Thread Tushar
Public bug reported:

It seems like nova doesn't do any cleanup of the instances table. If you
spin up and delete lots of VMs, this table eventually gets gigantic.

Manually it can be cleaned using: `nova-manage db archive_deleted_rows
1` but an auto-cleanup of stale entries would be useful too.
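What such auto-cleanup could look like as an operator-side periodic job, sketched below (the command comes from the report; the wrapper itself is hypothetical):

```python
# Hypothetical periodic wrapper around the manual cleanup command from
# the report. With dry_run=True it returns the argv instead of running.
import subprocess

def archive_deleted_rows(max_rows, dry_run=False):
    cmd = ['nova-manage', 'db', 'archive_deleted_rows', str(max_rows)]
    if dry_run:
        return cmd
    return subprocess.call(cmd)

# e.g. invoked nightly from cron:
nightly_cmd = archive_deleted_rows(1000, dry_run=True)
```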

** Affects: nova
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1274732

Title:
  Instances table doesn't get cleaned up

Status in OpenStack Compute (Nova):
  New

Bug description:
  It seems like nova doesn't do any cleanup of the instances table. If
  you spin up and delete lots of VMs, this table eventually gets
  gigantic.

  Manually it can be cleaned using: `nova-manage db archive_deleted_rows
  1` but an auto-cleanup of stale entries would be useful too.

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1274732/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1257301] Re: Bump hacking to 0.8

2014-01-30 Thread Dolph Mathews
** Changed in: python-keystoneclient
   Status: Fix Committed => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to Keystone.
https://bugs.launchpad.net/bugs/1257301

Title:
  Bump hacking to 0.8

Status in OpenStack Identity (Keystone):
  Fix Released
Status in Python client library for Keystone:
  Fix Released
Status in Python client library for Swift:
  In Progress

Bug description:
  Because the hacking dependency has not been bumped to 0.8, some
  python 3.x compatibility checks are not being done in the gate, which
  is letting code issues through.

To manage notifications about this bug go to:
https://bugs.launchpad.net/keystone/+bug/1257301/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1218190] Re: Use assertEqual instead of assertEquals in unittest

2014-01-30 Thread Dolph Mathews
** Changed in: python-keystoneclient
   Status: Fix Committed => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to Keystone.
https://bugs.launchpad.net/bugs/1218190

Title:
  Use assertEqual instead of assertEquals in unittest

Status in Orchestration API (Heat):
  Fix Released
Status in OpenStack Identity (Keystone):
  Fix Released
Status in Python client library for Keystone:
  Fix Released
Status in Python client library for Neutron:
  Fix Committed

Bug description:
  I noticed that [keystone, python-keystoneclient, python-neutronclient]
  configure tox.ini with a py33 test; however, assertEquals is
  deprecated in py3 but fine in py2, so I think it is better to change
  all assertEquals to assertEqual.

To manage notifications about this bug go to:
https://bugs.launchpad.net/heat/+bug/1218190/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1236648] Re: __metaclass__ is incompatible for python 3

2014-01-30 Thread Dolph Mathews
** Changed in: python-keystoneclient
   Status: Fix Committed => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1236648

Title:
  __metaclass__ is incompatible for python 3

Status in OpenStack Telemetry (Ceilometer):
  Fix Released
Status in Cinder:
  Fix Released
Status in OpenStack Neutron (virtual network service):
  Fix Released
Status in OpenStack Compute (Nova):
  Fix Released
Status in Oslo - a Library of Common OpenStack Code:
  Fix Released
Status in Python client library for Keystone:
  Fix Released
Status in Python client library for Nova:
  Fix Committed

Bug description:
  Some classes use __metaclass__ for abc.ABCMeta.
  six should be used in general for python 3 compatibility.

  For example:

  import abc
  import six

  @six.add_metaclass(abc.ABCMeta)
  class FooDriver:

      @abc.abstractmethod
      def bar(self):
          pass

To manage notifications about this bug go to:
https://bugs.launchpad.net/ceilometer/+bug/1236648/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1255876] Re: need to ignore swap files from getting into repository

2014-01-30 Thread Dolph Mathews
** Changed in: python-keystoneclient
   Status: Fix Committed => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1255876

Title:
  need to ignore swap files from getting into repository

Status in OpenStack Telemetry (Ceilometer):
  Invalid
Status in Heat Orchestration Templates and tools:
  New
Status in OpenStack Dashboard (Horizon):
  Fix Released
Status in OpenStack Neutron (virtual network service):
  Fix Released
Status in Oslo - a Library of Common OpenStack Code:
  Won't Fix
Status in Python client library for Ceilometer:
  Fix Committed
Status in Python client library for Cinder:
  Fix Committed
Status in Python client library for Glance:
  Fix Committed
Status in Python client library for heat:
  In Progress
Status in Python client library for Keystone:
  Fix Released
Status in Python client library for Neutron:
  Fix Committed
Status in Python client library for Nova:
  Fix Committed
Status in Python client library for Swift:
  Fix Committed
Status in OpenStack Data Processing (Savanna):
  Invalid

Bug description:
  Swap files need to be kept out of the repository.
  Currently the ignore implemented in .gitignore is *.swp;
  however, vim generates more variants than this, so it could be improved to *.sw?

To manage notifications about this bug go to:
https://bugs.launchpad.net/ceilometer/+bug/1255876/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1274767] [NEW] bad DHCP host name at line 1 of /opt/stack/data/nova/networks/nova-br100.conf

2014-01-30 Thread Joe Gordon
Public bug reported:

bad DHCP host name at line 1 of /opt/stack/data/nova/networks/nova-
br100.conf

http://logs.openstack.org/51/63551/6/gate/gate-tempest-dsvm-postgres-
full/4860441/logs/syslog.txt.gz


logstash query: message:"bad DHCP host name at line 1 of
/opt/stack/data/nova/networks/nova-br100.conf" AND filename:"logs/syslog.txt"

Seen in the gate

Jan 30 22:38:43 localhost dnsmasq[3604]: read /etc/hosts - 8 addresses
Jan 30 22:38:43 localhost dnsmasq[3604]: bad DHCP host name at line 1 of 
/opt/stack/data/nova/networks/nova-br100.conf
Jan 30 22:38:43 localhost dnsmasq[3604]: bad DHCP host name at line 2 of 
/opt/stack/data/nova/networks/nova-br100.conf
Jan 30 22:38:43 localhost dnsmasq-dhcp[3604]: read 
/opt/stack/data/nova/networks/nova-br100.conf
Jan 30 22:38:44 localhost dnsmasq[3604]: read /etc/hosts - 8 addresses
Jan 30 22:38:44 localhost dnsmasq[3604]: bad DHCP host name at line 1 of 
/opt/stack/data/nova/networks/nova-br100.conf
Jan 30 22:38:44 localhost dnsmasq[3604]: bad DHCP host name at line 2 of 
/opt/stack/data/nova/networks/nova-br100.conf
Jan 30 22:38:44 localhost dnsmasq[3604]: bad DHCP host name at line 3 of 
/opt/stack/data/nova/networks/nova-br100.conf
Jan 30 22:38:44 localhost dnsmasq[3604]: bad DHCP host name at line 4 of 
/opt/stack/data/nova/networks/nova-br100.conf
Jan 30 22:38:44 localhost dnsmasq-dhcp[3604]: read 
/opt/stack/data/nova/networks/nova-br100.conf
Jan 30 22:38:44 localhost dnsmasq[3604]: read /etc/hosts - 8 addresses
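The hostname rule dnsmasq enforces can be approximated with a validator like the one below (a sketch only; nova's actual fix may differ), so bad names are caught before they reach nova-br100.conf:

```python
# Sketch: validate each hostname label against RFC 952/1123 rules
# (letters, digits, hyphens; no leading/trailing hyphen) before writing
# it to the dnsmasq dhcp-hostsfile.
import re

_LABEL = re.compile(r'^[a-zA-Z0-9]([a-zA-Z0-9-]{0,61}[a-zA-Z0-9])?$')

def valid_dhcp_hostname(hostname):
    if not hostname:
        return False
    return all(_LABEL.match(label) for label in hostname.split('.'))

# Names like 'bad_name' or '-leading' would be rejected at write time:
good = valid_dhcp_hostname('server-1.novalocal')
bad = valid_dhcp_hostname('bad_name')
```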

** Affects: nova
 Importance: Undecided
 Status: Confirmed


** Tags: testing

** Changed in: nova
Milestone: None => icehouse-3

** Changed in: nova
   Status: New => Confirmed

** Tags added: testing

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1274767

Title:
  bad DHCP host name at line 1 of /opt/stack/data/nova/networks/nova-
  br100.conf

Status in OpenStack Compute (Nova):
  Confirmed

Bug description:
  bad DHCP host name at line 1 of /opt/stack/data/nova/networks/nova-
  br100.conf

  http://logs.openstack.org/51/63551/6/gate/gate-tempest-dsvm-postgres-
  full/4860441/logs/syslog.txt.gz

  
  logstash query: message:"bad DHCP host name at line 1 of
/opt/stack/data/nova/networks/nova-br100.conf" AND filename:"logs/syslog.txt"

  Seen in the gate

  Jan 30 22:38:43 localhost dnsmasq[3604]: read /etc/hosts - 8 addresses
  Jan 30 22:38:43 localhost dnsmasq[3604]: bad DHCP host name at line 1 of 
/opt/stack/data/nova/networks/nova-br100.conf
  Jan 30 22:38:43 localhost dnsmasq[3604]: bad DHCP host name at line 2 of 
/opt/stack/data/nova/networks/nova-br100.conf
  Jan 30 22:38:43 localhost dnsmasq-dhcp[3604]: read 
/opt/stack/data/nova/networks/nova-br100.conf
  Jan 30 22:38:44 localhost dnsmasq[3604]: read /etc/hosts - 8 addresses
  Jan 30 22:38:44 localhost dnsmasq[3604]: bad DHCP host name at line 1 of 
/opt/stack/data/nova/networks/nova-br100.conf
  Jan 30 22:38:44 localhost dnsmasq[3604]: bad DHCP host name at line 2 of 
/opt/stack/data/nova/networks/nova-br100.conf
  Jan 30 22:38:44 localhost dnsmasq[3604]: bad DHCP host name at line 3 of 
/opt/stack/data/nova/networks/nova-br100.conf
  Jan 30 22:38:44 localhost dnsmasq[3604]: bad DHCP host name at line 4 of 
/opt/stack/data/nova/networks/nova-br100.conf
  Jan 30 22:38:44 localhost dnsmasq-dhcp[3604]: read 
/opt/stack/data/nova/networks/nova-br100.conf
  Jan 30 22:38:44 localhost dnsmasq[3604]: read /etc/hosts - 8 addresses

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1274767/+subscriptions



[Yahoo-eng-team] [Bug 1274772] [NEW] libvirt.txt in gate is full of error messages

2014-01-30 Thread Joe Gordon
Public bug reported:

http://logs.openstack.org/51/63551/6/gate/gate-tempest-dsvm-postgres-
full/4860441/logs/libvirtd.txt.gz is full of errors such as:


2014-01-30 22:40:04.255+0000: 9228: error : virNetDevGetIndex:656 : Unable to get index for interface vnet0: No such device

2014-01-30 22:13:14.464+0000: 9227: error : virExecWithHook:327 : Cannot
find 'pm-is-supported' in path: No such file or directory
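
The second error only means the pm-is-supported helper, which libvirt runs to probe host suspend support and which is shipped by the pm-utils package, is not on the daemon's PATH. A quick way to check (a sketch; the function name here is illustrative):

```python
import shutil

def find_pm_is_supported():
    """Locate the pm-is-supported binary on PATH; None if absent."""
    return shutil.which("pm-is-supported")

path = find_pm_is_supported()
if path is None:
    # Installing pm-utils (the package that ships the helper) should
    # quiet this libvirt error.
    print("pm-is-supported not found on PATH")
else:
    print("pm-is-supported found at", path)
```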

** Affects: nova
 Importance: Undecided
 Status: New

** Changed in: nova
Milestone: None => icehouse-3

** Description changed:

  http://logs.openstack.org/51/63551/6/gate/gate-tempest-dsvm-postgres-
- full/4860441/logs/libvirtd.txt.gz is full of errors
+ full/4860441/logs/libvirtd.txt.gz is full of errors such as:
+ 
+ 
+ 2014-01-30 22:40:04.255+0000: 9228: error : virNetDevGetIndex:656 : Unable to get index for interface vnet0: No such device
+ 
+ 2014-01-30 22:13:14.464+0000: 9227: error : virExecWithHook:327 : Cannot
+ find 'pm-is-supported' in path: No such file or directory

https://bugs.launchpad.net/bugs/1274772

Title:
  libvirt.txt in gate is full of error messages

Status in OpenStack Compute (Nova):
  New

Bug description:
  http://logs.openstack.org/51/63551/6/gate/gate-tempest-dsvm-postgres-
  full/4860441/logs/libvirtd.txt.gz is full of errors such as:

  
  2014-01-30 22:40:04.255+0000: 9228: error : virNetDevGetIndex:656 : Unable to get index for interface vnet0: No such device

  2014-01-30 22:13:14.464+0000: 9227: error : virExecWithHook:327 :
  Cannot find 'pm-is-supported' in path: No such file or directory

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1274772/+subscriptions



[Yahoo-eng-team] [Bug 1274798] Re: nova-compute stops reporting in grenade

2014-01-30 Thread Joe Gordon
It looks like n-cpu (http://logs.openstack.org/39/70239/1/check/check-
grenade-dsvm/6f4b3bf/logs/new/screen-n-cpu.txt.gz) just stops logging,
and presumably doing anything, at 2014-01-30 23:22:49.183; then the
scheduler starts erroring out at
http://logs.openstack.org/39/70239/1/check/check-grenade-
dsvm/6f4b3bf/logs/new/screen-n-sch.txt.gz#_2014-01-30_23_26_06_767
because of no heartbeats from the compute node.
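
For context, the scheduler declares a compute service dead once its last periodic report is older than the configured down time. A minimal sketch of that liveness test (names and the 60-second window are assumptions, not nova's exact code):

```python
from datetime import datetime, timedelta

SERVICE_DOWN_TIME = timedelta(seconds=60)  # assumed default window

def service_is_up(last_heartbeat: datetime, now: datetime) -> bool:
    """A service counts as up if it reported within the down-time window."""
    return (now - last_heartbeat) <= SERVICE_DOWN_TIME

# Timeline from the logs above: n-cpu goes quiet at 23:22:49 and by
# 23:26:06 the scheduler treats it as down (197s > 60s).
now = datetime(2014, 1, 30, 23, 26, 6)
assert not service_is_up(datetime(2014, 1, 30, 23, 22, 49), now)
assert service_is_up(now - timedelta(seconds=30), now)
```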

Also swift is throwing some cryptic errors around the same time nova-
compute stops logging:

http://logs.openstack.org/39/70239/1/check/check-grenade-
dsvm/6f4b3bf/logs/syslog.txt.gz

** Also affects: swift
   Importance: Undecided
   Status: New

https://bugs.launchpad.net/bugs/1274798

Title:
  nova-compute stops reporting in grenade

Status in OpenStack Compute (Nova):
  New
Status in OpenStack Object Storage (Swift):
  New

Bug description:
  Seen in 
  
http://logs.openstack.org/39/70239/1/check/check-grenade-dsvm/6f4b3bf/logs/new/screen-n-sch.txt.gz

  jog says
  15:38 <jog0> lifeless: it looks like it has happened before
  15:39 <jog0> message:"has not been heard from in a while" AND filename:"logs/screen-n-sch.txt"

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1274798/+subscriptions



[Yahoo-eng-team] [Bug 1274798] Re: nova-compute stops reporting in grenade

2014-01-30 Thread Joe Gordon
The swift errors are unrelated, and due to the grenade upgrade process.

** No longer affects: swift

https://bugs.launchpad.net/bugs/1274798

Title:
  nova-compute stops reporting in grenade

Status in OpenStack Compute (Nova):
  New

Bug description:
  Seen in 
  
http://logs.openstack.org/39/70239/1/check/check-grenade-dsvm/6f4b3bf/logs/new/screen-n-sch.txt.gz

  jog says
  15:38 <jog0> lifeless: it looks like it has happened before
  15:39 <jog0> message:"has not been heard from in a while" AND filename:"logs/screen-n-sch.txt"

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1274798/+subscriptions



[Yahoo-eng-team] [Bug 1230235] Re: Admin user in one tenant, member in another causes inability to switch between tenants

2014-01-30 Thread Launchpad Bug Tracker
[Expired for OpenStack Dashboard (Horizon) because there has been no
activity for 60 days.]

** Changed in: horizon
   Status: Incomplete => Expired

https://bugs.launchpad.net/bugs/1230235

Title:
  Admin user in one tenant, member in another causes inability to switch
  between tenants

Status in OpenStack Dashboard (Horizon):
  Expired

Bug description:
  Currently occurring in Grizzly (python-django-horizon
  1:2013.1.2-0ubuntu1~cloud)

  If you have a user with a role of 'admin' in one tenant, but that user
  has a role of 'member' in another tenant - you can't switch to the
  tenant that requires 'admin' role.

  To recreate:

  1) Create an 'admin' tenant
  2) Create a user called 'admin' with role of 'admin' in the 'admin' tenant 

  3) Create a 'member' tenant
  4) Add the 'admin' user with role of 'member' in the 'member' tenant

  5) Log out / log back in
  6) You will see the 'member' project, and in the drop-down the 'admin' 
project.  Try switching to the 'admin' project.

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1230235/+subscriptions



[Yahoo-eng-team] [Bug 1274823] [NEW] Vague error when no metrics in Usage panel

2014-01-30 Thread Rob Raymond
Public bug reported:

When you have ceilometer configured but there are no metrics, you get the error 
message:
Error: An error occurred. Please try again later.

It would be better to give the user a warning that there are no metrics.

** Affects: horizon
 Importance: Undecided
 Assignee: Rob Raymond (rob-raymond)
 Status: New

** Changed in: horizon
 Assignee: (unassigned) => Rob Raymond (rob-raymond)

https://bugs.launchpad.net/bugs/1274823

Title:
  Vague error when no metrics in Usage panel

Status in OpenStack Dashboard (Horizon):
  New

Bug description:
  When you have ceilometer configured but there are no metrics, you get the 
error message:
  Error: An error occurred. Please try again later.

  It would be better to give the user a warning that there are no
  metrics.

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1274823/+subscriptions
