[Yahoo-eng-team] [Bug 1546778] [NEW] libvirt: resize with deleted backing image fails

2016-02-17 Thread Chris St. Pierre
Public bug reported:

Once the glance image from which an instance was spawned is deleted,
resizes of that instance fail if they would move it to a different
compute node. Migration and live block migration both succeed.

Resize fails, I believe, because 'qemu-img resize' is called
(https://github.com/openstack/nova/blob/master/nova/virt/libvirt/driver.py#L7218-L7221)
before the backing image has been transferred from the source compute
node
(https://github.com/openstack/nova/blob/master/nova/virt/libvirt/driver.py#L7230-L7233).
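
A minimal sketch of the ordering that would avoid this, using
hypothetical helper names rather than nova's actual internals:

import os
import subprocess

def fetch_backing_file(path):
    # Hypothetical stand-in for re-fetching the image into _base or
    # copying it from the source compute node.
    raise NotImplementedError

def resize_disk(disk_path, backing_path, new_size_bytes):
    # Ensure the qcow2 backing file exists *before* running
    # 'qemu-img resize': resize opens the overlay, which opens its
    # backing chain, which is exactly what fails today.
    if not os.path.exists(backing_path):
        fetch_backing_file(backing_path)
    subprocess.check_call(
        ['qemu-img', 'resize', disk_path, str(new_size_bytes)])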

Replication requires two compute nodes. To replicate:

1. Boot an instance from an image or snapshot.
2. Delete the image from Glance.
3. Resize the instance. It will fail with an error similar to:

Stderr: u"qemu-img: Could not open
'/var/lib/nova/instances/f77f1c5c-71f7-4645-afa1-dd30bacef874/disk':
Could not open backing file: Could not open
'/var/lib/nova/instances/_base/ca94b18d94077894f4ccbaafb1881a90225f1224':
No such file or directory\n"

** Affects: nova
 Importance: Undecided
 Status: New

https://bugs.launchpad.net/bugs/1546778


[Yahoo-eng-team] [Bug 1480441] [NEW] Live migration doesn't retry on migration pre-check failure

2015-07-31 Thread Chris St. Pierre
Public bug reported:

When live migrating an instance, the conductor is supposed to retry
destination selection some (configurable) number of times. It only
retries, however, if the host compatibility and migration pre-checks
raise nova.exception.Invalid:

https://github.com/openstack/nova/blob/master/nova/conductor/tasks/live_migrate.py#L167-L174

If, for instance, a destination hypervisor has run out of disk space it
will not raise an Invalid subclass, but rather MigrationPreCheckError,
which causes the retry loop to short-circuit. Nova should instead retry
as long as either Invalid or MigrationPreCheckError is raised.
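
A minimal sketch of the proposed behavior, rendered as a standalone
function (the callables stand in for the task's private methods
_check_compatible_with_source_hypervisor and _call_livem_checks_on_host
in the linked conductor code):

from nova import exception

def find_destination(get_candidate, compat_check, precheck, max_retries=5):
    # get_candidate: stand-in for asking the scheduler for a host.
    for _ in range(max_retries):
        host = get_candidate()
        try:
            compat_check(host)
            precheck(host)
        except (exception.Invalid, exception.MigrationPreCheckError):
            # The fix: a pre-check failure should count as "try the
            # next host", exactly like exception.Invalid does today,
            # instead of aborting the whole retry loop.
            continue
        return host
    raise exception.MaxRetriesExceeded(reason='no valid destination found')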

This can be tricky to reproduce because it only occurs if a host raises
MigrationPreCheckError before a valid host is found, so it's dependent
upon the order in which the scheduler supplies possible destinations to
the conductor. In theory, though, it can be reproduced by bringing up a
number of hypervisors, exhausting the disk on one -- ideally the one
that the scheduler will return first -- and then attempting a live
migration. It will fail with something like:

$ nova live-migration --block-migrate stpierre-test-1
ERROR (BadRequest): Migration pre-check error: Unable to migrate
f44296dd-ffa6-4ec0-8256-c311d025d46c: Disk of instance is too
large(available on destination host:-38654705664 < need:1073741824)
(HTTP 400) (Request-ID: req-9951691a-c63c-4888-bec5-30a072dfe727)

This happens even when there are valid hosts to migrate to.

** Affects: nova
 Importance: Undecided
 Assignee: Chris St. Pierre (stpierre)
 Status: In Progress

https://bugs.launchpad.net/bugs/1480441


[Yahoo-eng-team] [Bug 1241587] Re: Can not delete deleted tenant's default security group

2015-05-07 Thread Chris St. Pierre
** Changed in: nova
   Status: Invalid => Confirmed

** Changed in: nova
 Assignee: Jay Lau (jay-lau-513) => Chris St. Pierre (stpierre)

https://bugs.launchpad.net/bugs/1241587

Title:
  Can not delete deleted tenant's default security group

Status in OpenStack Compute (Nova):
  Confirmed

Bug description:
  $ keystone tenant-create --name foo
  +-------------+----------------------------------+
  |   Property  |              Value               |
  +-------------+----------------------------------+
  | description |                                  |
  |   enabled   |               True               |
  |      id     | 7149cdf591364e17a15e30229f2e023e |
  |     name    |               foo                |
  +-------------+----------------------------------+

  $ keystone user-create --name foo --pass foo --tenant foo
  +----------+----------------------------------+
  | Property |              Value               |
  +----------+----------------------------------+
  |  email   |                                  |
  | enabled  |               True               |
  |    id    | e5a5cd548ab446d5b787e6b37415707d |
  |   name   |               foo                |
  | tenantId | 7149cdf591364e17a15e30229f2e023e |
  +----------+----------------------------------+

  $ nova --os-username foo --os-password foo --os-tenant-id 7149cdf591364e17a15e30229f2e023e secgroup-list
  +-----+---------+-------------+
  | Id  | Name    | Description |
  +-----+---------+-------------+
  | 111 | default | default     |
  +-----+---------+-------------+

  
  ### AS ADMIN ###
  $ keystone user-delete foo
  $ keystone tenant-delete foo
  $ nova secgroup-delete 111
  ERROR: Unable to delete system group 'default' (HTTP 400)
  (Request-ID: req-9f62f3fe-1cd7-46dc-801c-335900b6f903)

  As admin, when the tenant no longer exists, I should be able to
  delete the security group (perhaps with an additional force
  argument).



[Yahoo-eng-team] [Bug 1427929] [NEW] Purge dead file-backed scrubber queue code

2015-03-03 Thread Chris St. Pierre
Public bug reported:

The image location status blueprint
(https://blueprints.launchpad.net/glance/+spec/image-location-status)
removed the ability to run the glance-scrubber with a file-backed
queue, but didn't actually remove the code or configuration for it. We
should clean up the code base and purge that dead code.

More details: http://lists.openstack.org/pipermail/openstack-dev/2015-February/056652.html

** Affects: glance
 Importance: Undecided
 Status: New

https://bugs.launchpad.net/bugs/1427929


[Yahoo-eng-team] [Bug 1427838] [NEW] Blacklist os-api extensions

2015-03-03 Thread Chris St. Pierre
Public bug reported:

Currently os-api extensions can only be disabled by choosing which ones
to enable. E.g.:

osapi_compute_extension = nova.api.openstack.compute.contrib.select_extensions
osapi_compute_ext_list =

With nearly a hundred extensions, though, disabling a single extension
is absurdly onerous.

It should be possible to blacklist extensions as well, so that an
administrator doesn't need to list all but one (or all but a handful) of
extensions just to disable an extension.
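
For example, a hypothetical blacklist option (this is the proposal,
not an existing flag, and the extension alias is illustrative) would
reduce the configuration above to:

osapi_compute_extension = nova.api.openstack.compute.contrib.select_extensions
osapi_compute_ext_blacklist = os-cloudpipe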

** Affects: nova
 Importance: Undecided
 Status: New

https://bugs.launchpad.net/bugs/1427838


[Yahoo-eng-team] [Bug 1421017] [NEW] Flavor/image size checks are insufficient and untested

2015-02-11 Thread Chris St. Pierre
Public bug reported:

When launching an instance, Horizon does some checks to ensure that the
flavor the user has chosen is large enough to support the image they
have chosen. Unfortunately:

1. These checks are broken. For disk size, they only check the
`min_disk` property of the image, not the image size or virtual size;
as a result, the image disk size is only checked in practice if the
user has bothered to set the `min_disk` property. (A stronger check is
sketched after this list.)

2. The unit tests for the checks are broken. They modify local versions
of the glance image data, but then run the tests by passing image IDs,
which means that the original unmodified test images are used. The tests
happen to pass, accidentally, but they don't test what we think they're
testing.
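
A minimal sketch of a stronger disk check, assuming the Glance image
record exposes size (bytes), virtual_size (bytes, possibly None), and
min_disk (GB); the function names are illustrative, not Horizon's
actual code:

import math

def required_disk_gb(image):
    # Consider the on-disk size and the virtual size, not just the
    # optional min_disk property.
    size_bytes = max(image.size or 0, image.virtual_size or 0)
    size_gb = int(math.ceil(size_bytes / float(1024 ** 3)))
    return max(image.min_disk or 0, size_gb)

def flavor_fits_image(flavor, image):
    return flavor.disk >= required_disk_gb(image)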

** Affects: horizon
 Importance: Undecided
 Status: New

https://bugs.launchpad.net/bugs/1421017


[Yahoo-eng-team] [Bug 1420944] [NEW] Display image virtual size

2015-02-11 Thread Chris St. Pierre
Public bug reported:

In Icehouse, Glance added a new field, 'virtual_size', to hold the
virtual size (as opposed to on-disk size) of sparse or compressed
images: https://blueprints.launchpad.net/glance/+spec/split-image-size

In most cases, this is far more valuable to know than the on-disk size
of an image; for instance, the official Fedora 21 QCOW2 image is only
150 MB on disk, but is a 3 GB image. But Horizon displays the on-disk
size in both the image detail view and the launch instance modal. The
latter is particularly bad, since it can lead to confusion about
whether an image can be launched on a flavor of a given size.
(Displaying the on-disk size leads one to believe that the Fedora 21
image could be launched on an m1.tiny, for instance -- 150 MB < 1 GB,
after all -- but it can't, because the image's virtual size is greater
than 1 GB.)

When an image has a virtual size, it should be displayed in the image
detail and in the launch instance modal.
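
A minimal sketch of the display rule, assuming a dict-like image
record with the v2-style field names; the helper is illustrative:

def display_size_bytes(image):
    # Prefer the virtual size when Glance provides one; fall back to
    # the on-disk size otherwise.
    return image.get('virtual_size') or image.get('size')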

** Affects: horizon
 Importance: Undecided
     Assignee: Chris St. Pierre (stpierre)
 Status: In Progress

https://bugs.launchpad.net/bugs/1420944


[Yahoo-eng-team] [Bug 1398999] [NEW] Block migrate with attached volumes copies volumes to themselves

2014-12-03 Thread Chris St. Pierre
Public bug reported:

When an instance with attached Cinder volumes is block migrated, the
Cinder volumes are block migrated along with it. If they exist on shared
storage, then they end up being copied, over the network, from
themselves to themselves. At a minimum, this is horribly slow and de-
sparses a sparse volume; at worst, this could cause massive data
corruption.

More details at http://lists.openstack.org/pipermail/openstack-dev/2014-June/038152.html
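
A conceptual sketch of the desired behaviour, not nova's actual fix:
when assembling the disks to copy for a block migration, skip devices
that are backed by Cinder volumes.

def disks_to_block_migrate(local_disks, volume_devices):
    # local_disks: e.g. [{'dev': 'vda', 'path': '/var/lib/nova/...'}]
    # volume_devices: device names attached from Cinder, e.g. {'vdb'}
    # Cinder volumes are already reachable from (or migrated by) the
    # storage backend, so copying them over the network is at best
    # wasted work and at worst a data corruption hazard.
    return [d for d in local_disks if d['dev'] not in volume_devices]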

** Affects: nova
 Importance: Undecided
 Status: New

https://bugs.launchpad.net/bugs/1398999


[Yahoo-eng-team] [Bug 1378514] [NEW] Allow setting max downtime for libvirt live migrations

2014-10-07 Thread Chris St. Pierre
Public bug reported:

As of libvirt 1.2.9, the maximum downtime for a live migration is
tunable during a migration, so it doesn't require any threading
foolishness. We should make this configurable in nova.conf so that large
instances can be migrated across relatively smaller network pipes.
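
A minimal sketch using the libvirt-python binding; the nova.conf
option shown in the comment is the proposal, not an existing setting:

import libvirt

# Proposed nova.conf knob (hypothetical):
#   [libvirt]
#   live_migration_downtime = 500   # milliseconds

def apply_max_downtime(domain, downtime_ms):
    # 'domain' is a libvirt.virDomain with a migration in flight. As
    # of libvirt 1.2.9 this is tunable while the migration runs, so
    # no background-thread tricks are needed.
    domain.migrateSetMaxDowntime(downtime_ms, 0)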

** Affects: nova
 Importance: Undecided
 Status: New

https://bugs.launchpad.net/bugs/1378514


[Yahoo-eng-team] [Bug 1369705] [NEW] Specify exact CPU topology

2014-09-15 Thread Chris St. Pierre
Public bug reported:

The work done in
https://blueprints.launchpad.net/nova/+spec/virt-driver-vcpu-topology
to allow setting CPU topology on guests implements what might be
called a "best attempt" algorithm -- it *tries* to give you the
topology requested, but does not do so reliably. Absolute upper bounds
can be set, but it's not possible to specify an *exact* topology.

This seems to violate the principle of least surprise. As such, there
should be a spec key that forces the topology specs to be *exact*, and
produces an error if the requested topology cannot be satisfied.
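
For example, the blueprint's flavor keys hw:cpu_sockets=2 and
hw:cpu_cores=4 express a preferred topology, and hw:cpu_max_sockets /
hw:cpu_max_cores only cap it; none of them guarantee the result. A
hypothetical strict key, say hw:cpu_topology=exact (named here purely
to illustrate the proposal), would turn the preference into a hard
requirement and fail the request if it cannot be satisfied.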

** Affects: nova
 Importance: Undecided
 Status: New

https://bugs.launchpad.net/bugs/1369705


[Yahoo-eng-team] [Bug 1366778] [NEW] Flavor names unnecessarily restrictive

2014-09-08 Thread Chris St. Pierre
Public bug reported:

Flavor names are restricted to "word" characters, period, dash, and
space, for no apparent reason. Flavor names should allow all printable
characters to obviate this unnecessary restriction. If nothing else,
this will make it easier to migrate from pre-Grizzly (I know, ancient
history) where the allowable flavor name regex was less restrictive.
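
Reconstructed from the description rather than quoted from nova's
source, the current rule amounts to roughly the first pattern below;
the proposal is closer to the second:

import re

# Roughly today's rule: word characters, period, dash, and space.
current = re.compile(r'^[\w\. -]+$')

# The proposal: any printable characters (ASCII shown here; a real
# implementation would want the Unicode-aware equivalent).
proposed = re.compile(r'^[\x20-\x7e]+$')

assert current.match('m1.tiny')
assert not current.match('general (4 vCPU)')  # parentheses rejected today
assert proposed.match('general (4 vCPU)')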

** Affects: nova
 Importance: Undecided
 Status: New

https://bugs.launchpad.net/bugs/1366778


[Yahoo-eng-team] [Bug 1366139] [NEW] Metadata cache time should be configurable

2014-09-05 Thread Chris St. Pierre
Public bug reported:

The nova metadata request handler uses an in-memory cache with a
hard-coded 15-second timeout. Under very heavy usage of the metadata
service, this can drastically limit the cache hit rate, since entries
expire so quickly.

Adding the ability to control the cache timeout has, in our tests,
increased the average cache hit rate from around 20% to 80% or better
with approximately a thousand metadata calls per minute.
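
A minimal sketch of the kind of option this implies, shown with
oslo.config (treat the option name as the proposal, not a guarantee of
the merged spelling):

from oslo_config import cfg

metadata_opts = [
    cfg.IntOpt('metadata_cache_expiration',
               default=15,
               help='Seconds to cache metadata responses. Raising this '
                    'trades freshness for a much better hit rate under '
                    'heavy load.'),
]

cfg.CONF.register_opts(metadata_opts)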

** Affects: nova
 Importance: Undecided
 Assignee: Chris St. Pierre (stpierre)
 Status: In Progress

https://bugs.launchpad.net/bugs/1366139


[Yahoo-eng-team] [Bug 1330461] [NEW] Project list is not sorted usefully

2014-06-16 Thread Chris St. Pierre
Public bug reported:

The list of projects in both the current project selector and in the
project admin table is sorted by ID instead of by name.
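
The fix is essentially a one-line sort key change; a minimal sketch:

def sorted_projects(projects):
    # Sort case-insensitively by display name instead of by the
    # opaque tenant ID that keystone happens to return.
    return sorted(projects, key=lambda p: (p.name or '').lower())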

** Affects: horizon
 Importance: Undecided
 Assignee: Chris St. Pierre (stpierre)
 Status: In Progress

** Changed in: horizon
   Status: New => In Progress

** Changed in: horizon
 Assignee: (unassigned) => Chris St. Pierre (stpierre)

https://bugs.launchpad.net/bugs/1330461


[Yahoo-eng-team] [Bug 1329954] [NEW] Natural sort instance names

2014-06-13 Thread Chris St. Pierre
Public bug reported:

Instance names in most tables are sorted lexicographically, not
naturally, so you get orderings like the following (a sort key that
fixes this is sketched after the list):

inst1
inst10
inst2
inst3
...
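
A minimal natural-sort key (the standard recipe, not Horizon's actual
code) that yields inst1, inst2, inst3, ..., inst10:

import re

def natural_key(name):
    # Split out digit runs and compare them as integers, so 'inst10'
    # sorts after 'inst2' instead of between 'inst1' and 'inst2'.
    return [int(tok) if tok.isdigit() else tok.lower()
            for tok in re.split(r'(\d+)', name)]

sorted(['inst1', 'inst10', 'inst2', 'inst3'], key=natural_key)
# -> ['inst1', 'inst2', 'inst3', 'inst10']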

** Affects: horizon
 Importance: Undecided
 Assignee: Chris St. Pierre (stpierre)
 Status: In Progress

** Changed in: horizon
   Status: New => In Progress

** Changed in: horizon
 Assignee: (unassigned) => Chris St. Pierre (stpierre)

https://bugs.launchpad.net/bugs/1329954


[Yahoo-eng-team] [Bug 1329949] [NEW] Snapshots are unsorted when launching from snapshot

2014-06-13 Thread Chris St. Pierre
Public bug reported:

When launching from a snapshot, the snapshots are in arbitrary, unsorted
order.

** Affects: horizon
 Importance: Undecided
 Assignee: Chris St. Pierre (stpierre)
 Status: In Progress

** Changed in: horizon
   Status: New => In Progress

** Changed in: horizon
 Assignee: (unassigned) => Chris St. Pierre (stpierre)

https://bugs.launchpad.net/bugs/1329949


[Yahoo-eng-team] [Bug 1329888] [NEW] Unhelpful error when modifying security group fails

2014-06-13 Thread Chris St. Pierre
Public bug reported:

When modifying instance security groups, Horizon catches all exceptions
and reraises a simple, dumb Exception with the message:

Failed to modify %d instance security groups.

That's useless enough, but since it reraises Exception, it has no hope
of being caught further downstream and reported meaningfully to the
user.
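
A minimal sketch of the anti-pattern (names are illustrative, not
Horizon's actual code):

def modify_instance_secgroups(instance_id, groups):
    try:
        nova_update_secgroups(instance_id, groups)
    except Exception:
        # Today's pattern: the real error (nova's perfectly good
        # message) is discarded and replaced with a bare Exception
        # that no downstream handler can catch specifically.
        raise Exception('Failed to modify %d instance '
                        'security groups.' % len(groups))

def nova_update_secgroups(instance_id, groups):
    # Illustrative stand-in for the underlying novaclient calls.
    raise NotImplementedError

The fix is to re-raise the original exception, or wrap it in a
specific exception class that preserves its message, so Horizon's
error machinery can show the user what nova actually said.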

To see this in action, attempt to modify the security groups of a
suspended instance.  Nova returns a decent error to Horizon, but Horizon
returns a useless error to the user.

** Affects: horizon
 Importance: Undecided
 Assignee: Chris St. Pierre (stpierre)
 Status: In Progress

** Changed in: horizon
   Status: New => In Progress

** Changed in: horizon
 Assignee: (unassigned) => Chris St. Pierre (stpierre)

https://bugs.launchpad.net/bugs/1329888