[Yahoo-eng-team] [Bug 1352405] [NEW] Storage on hypervisors page incorrect for shared storage

2014-08-04 Thread Adam Huffman
Public bug reported:

The storage total and storage used shown on the Hypervisors page do not take
the shared-storage case into account.
We have shared storage for /var/lib/nova/instances (currently using Gluster),
but Horizon simply adds up the usage reported by each compute node, so the
total and used figures are incorrect.
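As a rough illustration (not Horizon's actual code), the sketch below shows why
summing each hypervisor's reported disk fields (local_gb / local_gb_used in
nova's hypervisor listing) double-counts a shared instance store, and one way
the sum could be de-duplicated if nodes sharing a pool can be identified. The
shared_storage_group mapping is a hypothetical operator-supplied hint, not an
existing Nova or Horizon field.

def aggregate_disk(hypervisors, shared_storage_group=None):
    """Return (total_gb, used_gb), counting each shared pool only once."""
    shared_storage_group = shared_storage_group or {}
    total = used = 0
    seen_pools = set()
    for hv in hypervisors:
        pool = shared_storage_group.get(hv['hypervisor_hostname'])
        if pool is not None:
            if pool in seen_pools:
                continue  # this pool was already counted via another node
            seen_pools.add(pool)
        total += hv['local_gb']       # the naive version sums these fields
        used += hv['local_gb_used']   # for every node, hence the wrong totals
    return total, used

# Three compute nodes backed by the same 1000 GB Gluster volume:
nodes = [{'hypervisor_hostname': 'cn%d' % i,
          'local_gb': 1000, 'local_gb_used': 200} for i in range(3)]
print(aggregate_disk(nodes))                                           # (3000, 600) -- wrong
print(aggregate_disk(nodes, {'cn0': 'g1', 'cn1': 'g1', 'cn2': 'g1'}))  # (1000, 200)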

** Affects: horizon
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1352405

Title:
  Storage on hypervisors page incorrect for shared storage

Status in OpenStack Dashboard (Horizon):
  New

Bug description:
  The storage total and storage used shown on the Hypervisors page do not
  take the shared-storage case into account.
  We have shared storage for /var/lib/nova/instances (currently using Gluster),
  but Horizon simply adds up the usage reported by each compute node, so the
  total and used figures are incorrect.

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1352405/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1343271] [NEW] Error running keystone-manage token_flush

2014-07-17 Thread Adam Huffman
Public bug reported:

I have 1.3 million rows in my keystone.token table, so I'm trying to
trim this with the command keystone-manage token_flush. However, this
command fails with the following message:

keystone-manage token_flush
/usr/lib/python2.6/site-packages/keystone/openstack/common/db/sqlalchemy/session.py:424: DeprecationWarning: BaseException.message has been deprecated as of Python 2.6
  m = re.match(operational_error.message)

The real error is the following:

2014-07-17 13:16:19.830 8982 TRACE keystone Traceback (most recent call last):
2014-07-17 13:16:19.830 8982 TRACE keystone   File "/usr/bin/keystone-manage", line 51, in <module>
2014-07-17 13:16:19.830 8982 TRACE keystone     cli.main(argv=sys.argv, config_files=config_files)
2014-07-17 13:16:19.830 8982 TRACE keystone   File "/usr/lib/python2.6/site-packages/keystone/cli.py", line 190, in main
2014-07-17 13:16:19.830 8982 TRACE keystone     CONF.command.cmd_class.main()
2014-07-17 13:16:19.830 8982 TRACE keystone   File "/usr/lib/python2.6/site-packages/keystone/cli.py", line 154, in main
2014-07-17 13:16:19.830 8982 TRACE keystone     token_manager.driver.flush_expired_tokens()
2014-07-17 13:16:19.830 8982 TRACE keystone   File "/usr/lib/python2.6/site-packages/keystone/token/backends/sql.py", line 229, in flush_expired_tokens
2014-07-17 13:16:19.830 8982 TRACE keystone     query.delete(synchronize_session=False)
2014-07-17 13:16:19.830 8982 TRACE keystone   File "/usr/lib64/python2.6/site-packages/sqlalchemy/orm/query.py", line 2603, in delete
2014-07-17 13:16:19.830 8982 TRACE keystone     delete_op.exec_()
2014-07-17 13:16:19.830 8982 TRACE keystone   File "/usr/lib64/python2.6/site-packages/sqlalchemy/orm/persistence.py", line 816, in exec_
2014-07-17 13:16:19.830 8982 TRACE keystone     self._do_exec()
2014-07-17 13:16:19.830 8982 TRACE keystone   File "/usr/lib64/python2.6/site-packages/sqlalchemy/orm/persistence.py", line 942, in _do_exec
2014-07-17 13:16:19.830 8982 TRACE keystone     params=self.query._params)
2014-07-17 13:16:19.830 8982 TRACE keystone   File "/usr/lib/python2.6/site-packages/keystone/openstack/common/db/sqlalchemy/session.py", line 439, in _wrap
2014-07-17 13:16:19.830 8982 TRACE keystone     return f(self, *args, **kwargs)
2014-07-17 13:16:19.830 8982 TRACE keystone   File "/usr/lib/python2.6/site-packages/keystone/openstack/common/db/sqlalchemy/session.py", line 709, in execute
2014-07-17 13:16:19.830 8982 TRACE keystone     return super(Session, self).execute(*args, **kwargs)
2014-07-17 13:16:19.830 8982 TRACE keystone   File "/usr/lib64/python2.6/site-packages/sqlalchemy/orm/session.py", line 934, in execute
2014-07-17 13:16:19.830 8982 TRACE keystone     clause, params or {})
2014-07-17 13:16:19.830 8982 TRACE keystone   File "/usr/lib64/python2.6/site-packages/sqlalchemy/engine/base.py", line 662, in execute
2014-07-17 13:16:19.830 8982 TRACE keystone     params)
2014-07-17 13:16:19.830 8982 TRACE keystone   File "/usr/lib64/python2.6/site-packages/sqlalchemy/engine/base.py", line 761, in _execute_clauseelement
2014-07-17 13:16:19.830 8982 TRACE keystone     compiled_sql, distilled_params
2014-07-17 13:16:19.830 8982 TRACE keystone   File "/usr/lib64/python2.6/site-packages/sqlalchemy/engine/base.py", line 874, in _execute_context
2014-07-17 13:16:19.830 8982 TRACE keystone     context)
2014-07-17 13:16:19.830 8982 TRACE keystone   File "/usr/lib64/python2.6/site-packages/sqlalchemy/engine/base.py", line 1024, in _handle_dbapi_exception
2014-07-17 13:16:19.830 8982 TRACE keystone     exc_info
2014-07-17 13:16:19.830 8982 TRACE keystone   File "/usr/lib64/python2.6/site-packages/sqlalchemy/util/compat.py", line 196, in raise_from_cause
2014-07-17 13:16:19.830 8982 TRACE keystone     reraise(type(exception), exception, tb=exc_tb)
2014-07-17 13:16:19.830 8982 TRACE keystone   File "/usr/lib64/python2.6/site-packages/sqlalchemy/engine/base.py", line 867, in _execute_context
2014-07-17 13:16:19.830 8982 TRACE keystone     context)
2014-07-17 13:16:19.830 8982 TRACE keystone   File "/usr/lib64/python2.6/site-packages/sqlalchemy/engine/default.py", line 324, in do_execute
2014-07-17 13:16:19.830 8982 TRACE keystone     cursor.execute(statement, parameters)
2014-07-17 13:16:19.830 8982 TRACE keystone   File "/usr/lib64/python2.6/site-packages/MySQLdb/cursors.py", line 173, in execute
2014-07-17 13:16:19.830 8982 TRACE keystone     self.errorhandler(self, exc, value)
2014-07-17 13:16:19.830 8982 TRACE keystone   File "/usr/lib64/python2.6/site-packages/MySQLdb/connections.py", line 36, in defaulterrorhandler
2014-07-17 13:16:19.830 8982 TRACE keystone     raise errorclass, errorvalue
2014-07-17 13:16:19.830 8982 TRACE keystone OperationalError: (OperationalError) (1206, 'The total number of locks exceeds the lock table size') 'DELETE FROM token WHERE token.expires < %s' (datetime.datetime(2014, 7, 17, 12, 10, 14, 894903),)
2014-07-17 13:16:19.830 8982 TRACE keystone
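One possible workaround (a hedged sketch, not keystone's own code or a
supported tool): delete the expired rows in bounded batches so no single
DELETE has to take ~1.3 million row locks at once. Connection details and the
batch size below are placeholders; run it directly against the keystone
database at your own risk.

import datetime

import MySQLdb

BATCH = 10000

conn = MySQLdb.connect(host='localhost', user='keystone',
                       passwd='KEYSTONE_DB_PASSWORD', db='keystone')
cur = conn.cursor()
cutoff = datetime.datetime.utcnow()
while True:
    # LIMIT keeps the number of rows (and locks) per statement small.
    deleted = cur.execute(
        "DELETE FROM token WHERE expires < %s LIMIT %s", (cutoff, BATCH))
    conn.commit()
    if deleted < BATCH:
        break
cur.close()
conn.close()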

It would be helpful if the client 

[Yahoo-eng-team] [Bug 1319091] Re: Self-service password option missing

2014-05-23 Thread Adam Huffman
Yes, I am using the Red Hat theme. If I use the default theme, the self-
service password option does appear.

Thanks for tracking this down.
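For what it's worth, one way a theme or downstream package can make the panel
disappear is Horizon's customization-module hook. Whether the Red Hat theme
actually uses this mechanism is an assumption on my part; the sketch below just
shows how a single override file can unregister the Change Password panel (the
"settings"/"password" slugs are what I believe the upstream Icehouse panel
uses):

# local_settings.py
# HORIZON_CONFIG["customization_module"] = "mytheme.overrides"

# mytheme/overrides.py
import horizon

settings_dashboard = horizon.get_dashboard("settings")
password_panel = settings_dashboard.get_panel("password")
settings_dashboard.unregister(password_panel.__class__)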


** Changed in: horizon
   Status: Incomplete => Invalid

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1319091

Title:
  Self-service password option missing

Status in OpenStack Dashboard (Horizon):
  Invalid

Bug description:
  Having upgraded my installation from Havana to Icehouse, I was hoping
  the ability for users to change their passwords in Horizon would be
  working. However, the option is missing.

  I'm using the RDO Icehouse packages on CentOS 6.5.

  openstack-dashboard-2014.1-1.el6.noarch

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1319091/+subscriptions



[Yahoo-eng-team] [Bug 1320224] [NEW] Unable to set a new flavor to be public

2014-05-16 Thread Adam Huffman
Public bug reported:

When creating a new flavor there's no option to set whether it is public
or not.

This is with the RDO Icehouse packages:

openstack-dashboard-2014.1-1.el6.noarch
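Until the dashboard form exposes it, the flag can at least be set through the
API. Here is a hedged sketch using the Icehouse-era python-novaclient; the
module path, credentials and endpoint are illustrative, and the call needs the
admin role:

from novaclient.v1_1 import client

nova = client.Client('admin', 'ADMIN_PASSWORD', 'admin',
                     'http://controller:5000/v2.0')
# is_public defaults to True; pass False for a private flavor.
nova.flavors.create(name='m1.custom', ram=4096, vcpus=2, disk=40,
                    is_public=False)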

** Affects: horizon
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1320224

Title:
  Unable to set a new flavor to be public

Status in OpenStack Dashboard (Horizon):
  New

Bug description:
  When creating a new flavor there's no option to set whether it is
  public or not.

  This is with the RDO Icehouse packages:

  openstack-dashboard-2014.1-1.el6.noarch

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1320224/+subscriptions



[Yahoo-eng-team] [Bug 1319091] [NEW] Self-service password option missing

2014-05-13 Thread Adam Huffman
Public bug reported:

Having upgraded my installation from Havana to Icehouse, I was hoping
the ability for users to change their passwords in Horizon would be
working. However, the option is missing.

I'm using the RDO Icehouse packages on CentOS 6.5.

openstack-dashboard-2014.1-1.el6.noarch

** Affects: horizon
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1319091

Title:
  Self-service password option missing

Status in OpenStack Dashboard (Horizon):
  New

Bug description:
  Having upgraded my installation from Havana to Icehouse, I was hoping
  the ability for users to change their passwords in Horizon would be
  working. However, the option is missing.

  I'm using the RDO Icehouse packages on CentOS 6.5.

  openstack-dashboard-2014.1-1.el6.noarch

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1319091/+subscriptions



[Yahoo-eng-team] [Bug 1317174] [NEW] novnc console failure after Icehouse upgrade

2014-05-07 Thread Adam Huffman
Public bug reported:

After upgrading my Havana installation to Icehouse, VNC console logins
are no longer working (1006 error).

The version in use is:

openstack-nova-novncproxy-2014.1-2.el6.noarch

from RDO.

This is the full error in the logs:

2014-05-07 17:12:58.003 13074 AUDIT nova.consoleauth.manager [req-684f9e8d-3c0a-4647-aa66-44f0bb35c4df None None] Checking Token: dbbd1b9b-002f-46b6-bf79-0d90e92c034e, True
2014-05-07 17:12:58.112 13074 ERROR oslo.messaging.rpc.dispatcher [-] Exception during message handling: tuple index out of range
Traceback (most recent call last):

  File "/usr/lib/python2.6/site-packages/oslo/messaging/rpc/dispatcher.py", line 133, in _dispatch_and_reply
    incoming.message))

  File "/usr/lib/python2.6/site-packages/oslo/messaging/rpc/dispatcher.py", line 176, in _dispatch
    return self._do_dispatch(endpoint, method, ctxt, args)

  File "/usr/lib/python2.6/site-packages/oslo/messaging/rpc/dispatcher.py", line 122, in _do_dispatch
    result = getattr(endpoint, method)(ctxt, **new_args)

  File "/usr/lib/python2.6/site-packages/oslo/messaging/rpc/server.py", line 139, in inner
    return func(*args, **kwargs)

  File "/usr/lib/python2.6/site-packages/nova/compute/manager.py", line 390, in decorated_function
    args = (_load_instance(args[0]),) + args[1:]

IndexError: tuple index out of range
2014-05-07 17:12:58.112 13074 TRACE oslo.messaging.rpc.dispatcher Traceback (most recent call last):
2014-05-07 17:12:58.112 13074 TRACE oslo.messaging.rpc.dispatcher   File "/usr/lib/python2.6/site-packages/oslo/messaging/rpc/dispatcher.py", line 133, in _dispatch_and_reply
2014-05-07 17:12:58.112 13074 TRACE oslo.messaging.rpc.dispatcher     incoming.message))
2014-05-07 17:12:58.112 13074 TRACE oslo.messaging.rpc.dispatcher   File "/usr/lib/python2.6/site-packages/oslo/messaging/rpc/dispatcher.py", line 176, in _dispatch
2014-05-07 17:12:58.112 13074 TRACE oslo.messaging.rpc.dispatcher     return self._do_dispatch(endpoint, method, ctxt, args)
2014-05-07 17:12:58.112 13074 TRACE oslo.messaging.rpc.dispatcher   File "/usr/lib/python2.6/site-packages/oslo/messaging/rpc/dispatcher.py", line 122, in _do_dispatch
2014-05-07 17:12:58.112 13074 TRACE oslo.messaging.rpc.dispatcher     result = getattr(endpoint, method)(ctxt, **new_args)
2014-05-07 17:12:58.112 13074 TRACE oslo.messaging.rpc.dispatcher   File "/usr/lib/python2.6/site-packages/nova/consoleauth/manager.py", line 117, in check_token
2014-05-07 17:12:58.112 13074 TRACE oslo.messaging.rpc.dispatcher     if self._validate_token(context, token):
2014-05-07 17:12:58.112 13074 TRACE oslo.messaging.rpc.dispatcher   File "/usr/lib/python2.6/site-packages/nova/consoleauth/manager.py", line 108, in _validate_token
2014-05-07 17:12:58.112 13074 TRACE oslo.messaging.rpc.dispatcher     token['console_type'])
2014-05-07 17:12:58.112 13074 TRACE oslo.messaging.rpc.dispatcher   File "/usr/lib/python2.6/site-packages/nova/compute/rpcapi.py", line 506, in validate_console_port
2014-05-07 17:12:58.112 13074 TRACE oslo.messaging.rpc.dispatcher     console_type=console_type)
2014-05-07 17:12:58.112 13074 TRACE oslo.messaging.rpc.dispatcher   File "/usr/lib/python2.6/site-packages/oslo/messaging/rpc/client.py", line 150, in call
2014-05-07 17:12:58.112 13074 TRACE oslo.messaging.rpc.dispatcher     wait_for_reply=True, timeout=timeout)
2014-05-07 17:12:58.112 13074 TRACE oslo.messaging.rpc.dispatcher   File "/usr/lib/python2.6/site-packages/oslo/messaging/transport.py", line 90, in _send
2014-05-07 17:12:58.112 13074 TRACE oslo.messaging.rpc.dispatcher     timeout=timeout)
2014-05-07 17:12:58.112 13074 TRACE oslo.messaging.rpc.dispatcher   File "/usr/lib/python2.6/site-packages/oslo/messaging/_drivers/amqpdriver.py", line 412, in send
2014-05-07 17:12:58.112 13074 TRACE oslo.messaging.rpc.dispatcher     return self._send(target, ctxt, message, wait_for_reply, timeout)
2014-05-07 17:12:58.112 13074 TRACE oslo.messaging.rpc.dispatcher   File "/usr/lib/python2.6/site-packages/oslo/messaging/_drivers/amqpdriver.py", line 405, in _send
2014-05-07 17:12:58.112 13074 TRACE oslo.messaging.rpc.dispatcher     raise result
2014-05-07 17:12:58.112 13074 TRACE oslo.messaging.rpc.dispatcher IndexError: tuple index out of range
2014-05-07 17:12:58.112 13074 TRACE oslo.messaging.rpc.dispatcher Traceback (most recent call last):
2014-05-07 17:12:58.112 13074 TRACE oslo.messaging.rpc.dispatcher
2014-05-07 17:12:58.112 13074 TRACE oslo.messaging.rpc.dispatcher   File "/usr/lib/python2.6/site-packages/oslo/messaging/rpc/dispatcher.py", line 133, in _dispatch_and_reply
2014-05-07 17:12:58.112 13074 TRACE oslo.messaging.rpc.dispatcher     incoming.message))
2014-05-07 17:12:58.112 13074 TRACE oslo.messaging.rpc.dispatcher
2014-05-07 17:12:58.112 13074 TRACE oslo.messaging.rpc.dispatcher   File "/usr/lib/python2.6/site-packages/oslo/messaging/rpc/dispatcher.py", line 176, in _dispatch
2014-05-07 17:12:58.112 13074 TRACE oslo.messaging.rpc.dispatcher
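As an aside, here is a minimal, self-contained illustration (not nova's code or
its eventual fix) of why the line shown at nova/compute/manager.py:390 raises
IndexError: the wrapper assumes the instance arrives as the first positional
argument, so a call whose parameters all come in as keywords leaves args empty
and args[0] fails before the real handler runs. All names below are made up for
the demo.

def load_instance(obj):
    return obj  # stand-in for the decorator's _load_instance() helper

def instance_decorator(func):
    def decorated_function(*args, **kwargs):
        # The failing pattern from the traceback:
        args = (load_instance(args[0]),) + args[1:]  # IndexError if args == ()
        return func(*args, **kwargs)
    return decorated_function

@instance_decorator
def validate_console_port(instance=None, port=None, console_type=None):
    return True

print(validate_console_port({'uuid': 'abc'}, 5900, 'novnc'))  # works: True
try:
    validate_console_port(instance={'uuid': 'abc'}, port=5900,
                          console_type='novnc')
except IndexError as exc:
    print('fails as in the bug: %s' % exc)  # tuple index out of range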

[Yahoo-eng-team] [Bug 1274234] [NEW] Quota error message should be more descriptive

2014-01-29 Thread Adam Huffman
Public bug reported:

When I tried to lower a project's quota of cores to below the number
currently in use, instead of the useful CLI error message, the balloon
that appears says:

Error: Modified project information and members, but unable to modify
project quotas.

It would be much clearer if it said something similar to the CLI
message:

ERROR: Quota value 140 for cores are greater than already used and
reserved 146
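A hedged sketch (not Horizon's actual implementation) of the kind of change
this asks for: the API error already carries the useful text, so the handler
mainly needs to surface the exception's message instead of a fixed string. The
helper name and its call signature here are illustrative; only
horizon.exceptions.handle() is the real Horizon API.

from django.utils.translation import ugettext_lazy as _

from horizon import exceptions


def update_project_quotas(request, project_id, quota_data, update_fn):
    """Apply a quota update, reporting the API's own error text on failure."""
    try:
        update_fn(request, project_id, **quota_data)
        return True
    except Exception as exc:
        # str(exc) includes the nova/cinder message, e.g. "Quota value 140 for
        # cores are greater than already used and reserved 146".
        exceptions.handle(request,
                          _('Modified project information and members, but '
                            'unable to modify project quotas: %s') % exc)
        return False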

** Affects: horizon
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1274234

Title:
  Quota error message should be more descriptive

Status in OpenStack Dashboard (Horizon):
  New

Bug description:
  When I tried to lower a project's quota of cores to below the number
  currently in use, instead of the useful CLI error message, the balloon
  that appears says:

  Error: Modified project information and members, but unable to modify
  project quotas.

  It would be much clearer if it said something similar to the CLI
  message:

  ERROR: Quota value 140 for cores are greater than already used and
  reserved 146

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1274234/+subscriptions
