[Yahoo-eng-team] [Bug 1428553] [NEW] migration and live migration fails with image-type=rbd

2015-03-05 Thread Yogev Rabl
Public bug reported:

Description of problem:
The migration and live migration of instances fail when Nova is set to work 
with RBD as the back end for instance disks. 
When attempting to migrate an instance from one host to another, the following error appears:

Error: Failed to launch instance osp5: Please try again later [Error:
Unexpected error while running command. Command: ssh host address
mkdir -p /var/lib/nova/instances/98cc014a-0d6d-48bc-9d76-4fe361b67f3b
Exit code: 1 Stdout: u'This account is currently not available.\n'
Stderr: u''].

The log shows: http://pastebin.test.redhat.com/267337

When attempting to run a live migration, this is the output:
http://pastebin.test.redhat.com/267340

There is a workaround: change the nova user's login shell on all the
compute nodes, in the /etc/passwd file, from /sbin/nologin to /bin/bash, and
rerun the command (see the sketch below). I wouldn't recommend it; it creates
a security hole IMO.
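
A minimal sketch of that workaround, assuming the nova service user is named
'nova' on every compute node; it widens the attack surface, so restore the
original shell once the migration has been retried:

# usermod -s /bin/bash nova
(retry the migration / live migration, then revert)
# usermod -s /sbin/nologin nova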

Version-Release number of selected component (if applicable):
openstack-nova-api-2014.2.2-2.el7ost.noarch
python-nova-2014.2.2-2.el7ost.noarch
openstack-nova-compute-2014.2.2-2.el7ost.noarch
openstack-nova-common-2014.2.2-2.el7ost.noarch
openstack-nova-scheduler-2014.2.2-2.el7ost.noarch
python-novaclient-2.20.0-1.el7ost.noarch
openstack-nova-conductor-2014.2.2-2.el7ost.noarch

How reproducible:
100%

Steps to Reproduce:
1. Set Nova to work with RBD as the back end for instance disks, 
according to the Ceph documentation (see the configuration sketch after these steps)
2. Launch an instance
3. migrate the instance to a different host 
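
For reference, a minimal sketch of the nova.conf settings involved, as described
in the Ceph guide for Juno (the pool name and the UUID are placeholders):

[libvirt]
images_type = rbd
images_rbd_pool = vms
images_rbd_ceph_conf = /etc/ceph/ceph.conf
rbd_user = cinder
rbd_secret_uuid = <generated uuid>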

Actual results:
The migration fails and the instance status moves to error.

Expected results:
the instance migrates to the other host

** Affects: nova
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1428553

Title:
  migration and live migration fails with image-type=rbd

Status in OpenStack Compute (Nova):
  New

Bug description:
  Description of problem:
  The migration and live migration of instances fail when Nova is set to work 
with RBD as the back end for instance disks. 
  When attempting to migrate an instance from one host to another, the following 
error appears:

  Error: Failed to launch instance osp5: Please try again later
  [Error: Unexpected error while running command. Command: ssh host
  address mkdir -p /var/lib/nova/instances/98cc014a-0d6d-48bc-
  9d76-4fe361b67f3b Exit code: 1 Stdout: u'This account is currently not
  available.\n' Stderr: u''].

  The log shows: http://pastebin.test.redhat.com/267337

  When attempting to run a live migration, this is the output:
  http://pastebin.test.redhat.com/267340

  There is a workaround: change the nova user's login shell on all the
  compute nodes, in the /etc/passwd file, from /sbin/nologin to /bin/bash,
  and rerun the command. I wouldn't recommend it; it creates a security
  hole IMO.

  Version-Release number of selected component (if applicable):
  openstack-nova-api-2014.2.2-2.el7ost.noarch
  python-nova-2014.2.2-2.el7ost.noarch
  openstack-nova-compute-2014.2.2-2.el7ost.noarch
  openstack-nova-common-2014.2.2-2.el7ost.noarch
  openstack-nova-scheduler-2014.2.2-2.el7ost.noarch
  python-novaclient-2.20.0-1.el7ost.noarch
  openstack-nova-conductor-2014.2.2-2.el7ost.noarch

  How reproducible:
  100%

  Steps to Reproduce:
  1. Set the nova to work with RBD as the back end of the instances disks, 
according to the Ceph documentation
  2. Launch an instance
  3. migrate the instance to a different host 

  Actual results:
  The migration fails and the instance status moves to error.

  Expected results:
  the instance migrates to the other host

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1428553/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1422333] [NEW] instance resize fail when changing between flavor with ephemeral disk to a flavor without ephemeral disk

2015-02-16 Thread Yogev Rabl
Public bug reported:

Description of problem:

The resize process fails and moves the instance to an 'error' state. 
The instance was created with this flavor
(columns: Name | Memory_MB | Disk | Ephemeral | Swap | VCPUs | RXTX_Factor | Is_Public): 
m2.small  | 2048  | 10   | 0 |  | 1 | 1.0 | True
and was resized to: 
m3.small  | 2048  | 10   | 10| 2048 | 1 | 1.0 | True

the Horizon error message:

Error: Failed to launch instance cirros: Please try again later
[Error: Unexpected error while running command. Command: ssh compute
node IP mkdir -p
/var/lib/nova/instances/b54a62ea-b739-4b44-a394-a92a89dfa759 Exit code:
255 Stdout: u'' Stderr: u'Host key verification failed.\r\n'].
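
The 'Host key verification failed' message suggests the resize's cold-migration
path is SSH-ing between compute nodes as the nova user. A rough sketch of the
kind of setup this normally assumes (the destination host is a placeholder;
this is context, not a proposed fix) - run as the nova user on the source node:

# ssh-keygen -N '' -f ~/.ssh/id_rsa
# ssh-copy-id nova@<destination compute node>
# ssh -o StrictHostKeyChecking=no nova@<destination compute node> true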

Version-Release number of selected component (if applicable):
openstack-nova-console-2014.2.2-2.el7ost.noarch
openstack-nova-novncproxy-2014.2.2-2.el7ost.noarch
openstack-nova-common-2014.2.2-2.el7ost.noarch
openstack-nova-compute-2014.2.2-2.el7ost.noarch
openstack-nova-cert-2014.2.2-2.el7ost.noarch
python-nova-2014.2.2-2.el7ost.noarch
openstack-nova-scheduler-2014.2.2-2.el7ost.noarch
python-novaclient-2.20.0-1.el7ost.noarch
openstack-nova-api-2014.2.2-2.el7ost.noarch
openstack-nova-conductor-2014.2.2-2.el7ost.noarch

How reproducible:
100%

Steps to Reproduce:
1. Launch an instance with the small flavor
2. create a flavor with ephemeral disk
3. resize the instance to the new flavor

Actual results:
The resize fails and the instance moves to an error state.

Expected results:
the instance should be resized to the new flavor

** Affects: nova
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1422333

Title:
  instance resize fail when changing between flavor with ephemeral disk
  to a flavor without ephemeral disk

Status in OpenStack Compute (Nova):
  New

Bug description:
  Description of problem:

  The resize process fails and moves the instance to an 'error' state. 
  The instance was created with this flavor
(columns: Name | Memory_MB | Disk | Ephemeral | Swap | VCPUs | RXTX_Factor | Is_Public): 
  m2.small  | 2048  | 10   | 0 |  | 1 | 1.0 | True
  and was resized to: 
  m3.small  | 2048  | 10   | 10| 2048 | 1 | 1.0 | True

  the Horizon error message:

  Error: Failed to launch instance cirros: Please try again later
  [Error: Unexpected error while running command. Command: ssh compute
  node IP mkdir -p
  /var/lib/nova/instances/b54a62ea-b739-4b44-a394-a92a89dfa759 Exit
  code: 255 Stdout: u'' Stderr: u'Host key verification failed.\r\n'].

  Version-Release number of selected component (if applicable):
  openstack-nova-console-2014.2.2-2.el7ost.noarch
  openstack-nova-novncproxy-2014.2.2-2.el7ost.noarch
  openstack-nova-common-2014.2.2-2.el7ost.noarch
  openstack-nova-compute-2014.2.2-2.el7ost.noarch
  openstack-nova-cert-2014.2.2-2.el7ost.noarch
  python-nova-2014.2.2-2.el7ost.noarch
  openstack-nova-scheduler-2014.2.2-2.el7ost.noarch
  python-novaclient-2.20.0-1.el7ost.noarch
  openstack-nova-api-2014.2.2-2.el7ost.noarch
  openstack-nova-conductor-2014.2.2-2.el7ost.noarch

  How reproducible:
  100%

  Steps to Reproduce:
  1. Launch an instance with the small flavor
  2. create a flavor with ephemeral disk
  3. resize the instance to the new flavor

  Actual results:
  The resize fails and the instance moves to an error state.

  Expected results:
  the instance should be resized to the new flavor

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1422333/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1420274] [NEW] The ephemeral disk and the swap disk are saved locally in the nova-compute node when image_type=rbd

2015-02-10 Thread Yogev Rabl
Public bug reported:

Description of problem:
Nova saves templates of the ephemeral and swap disks locally on the compute 
nodes, in the /var/lib/nova/instances/_base directory. 

Version-Release number of selected component (if applicable):
openstack-nova-compute-2014.2.1-14.el7ost.noarch
openstack-nova-api-2014.2.1-14.el7ost.noarch

How reproducible:
100%

Steps to Reproduce:
1. Set Nova to work with Ceph as the back end of the compute nodes
2. Create a flavor that has ephemeral and swap disks 
3. Launch an instance 

Actual results:
Templates of the ephemeral and swap disks are saved in the 
/var/lib/nova/instances/_base directory.
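
A quick way to confirm where the disks actually ended up (the 'vms' pool name
comes from the Ceph guide; adjust to the local configuration):

# ls -lh /var/lib/nova/instances/_base/
# rbd -p vms ls
The first lists the template files left behind on the compute node, the second
lists the disks that did land in the Ceph pool.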

Expected results:
These disks should be saved in the Ceph storage

** Affects: nova
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1420274

Title:
  The ephemeral disk and the swap disk are saved locally in the nova-compute
  node when image_type=rbd

Status in OpenStack Compute (Nova):
  New

Bug description:
  Description of problem:
  The Nova saves templates of the ephemeral and swap disks locally in the 
compute nodes, in /var/lib/nova/instances/_base directory 

  Version-Release number of selected component (if applicable):
  openstack-nova-compute-2014.2.1-14.el7ost.noarch
  openstack-nova-api-2014.2.1-14.el7ost.noarch

  How reproducible:
  100%

  Steps to Reproduce:
  1. Set Nova to work with Ceph as the back end of the compute nodes
  2. Create a flavor that has ephemeral and swap disks 
  3. Launch an instance 

  Actual results:
  Templates of the ephemeral and swap disks are saved in the 
/var/lib/nova/instances/_base directory.

  Expected results:
  These disks should be saved in the Ceph storage

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1420274/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1404617] [NEW] glance with rbd store fails to delete an image

2014-12-21 Thread Yogev Rabl
Public bug reported:

Description of problem:
The deletion of an image fails when Glance is configured to work with the RBD 
store, using the configuration settings described in this manual: 
http://docs.ceph.com/docs/master/rbd/rbd-openstack/#juno

It seems like the glance client gets stuck.


The CLI debug output shows:

# glance --debug image-delete 1fc6297a-6c35-4c96-bcb1-9e9d8adee5e8
curl -i -X HEAD -H 'User-Agent: python-glanceclient' -H 'Content-Type: 
application/octet-stream' -H 'Accept-Encoding: gzip, deflate' -H 'Accept: */*' 
-H 'X-Auth-Token: {SHA1}63a8f4dccb8c42cc741bb638c08fc19dd9ccd360' 
http://10.35.160.133:9292/v1/images/1fc6297a-6c35-4c96-bcb1-9e9d8adee5e8

HTTP/1.1 200 OK
content-length: 0
x-image-meta-id: 1fc6297a-6c35-4c96-bcb1-9e9d8adee5e8
x-image-meta-deleted: False
x-image-meta-container_format: bare
x-image-meta-checksum: 78e6077fcda0c474d42e2811c51e791f
x-image-meta-protected: False
x-image-meta-min_disk: 0
x-image-meta-min_ram: 0
x-image-meta-created_at: 2014-12-21T07:48:22
x-image-meta-size: 41126400
x-image-meta-status: active
etag: 78e6077fcda0c474d42e2811c51e791f
x-image-meta-is_public: True
date: Sun, 21 Dec 2014 07:48:50 GMT
x-image-meta-owner: fb7cd4084c6d4262a94d406f8418d155
x-image-meta-updated_at: 2014-12-21T07:48:29
content-type: text/html; charset=UTF-8
x-openstack-request-id: req-c6975244-6e0d-4b69-8a95-d3703c226a37
x-image-meta-disk_format: raw
x-image-meta-name: cirros-to-delete

curl -i -X HEAD -H 'User-Agent: python-glanceclient' -H 'Content-Type:
application/octet-stream' -H 'Accept-Encoding: gzip, deflate' -H
'Accept: */*' -H 'X-Auth-Token:
{SHA1}63a8f4dccb8c42cc741bb638c08fc19dd9ccd360'
http://10.35.160.133:9292/v1/images/1fc6297a-6c35-4c96-bcb1-9e9d8adee5e8

HTTP/1.1 200 OK
content-length: 0
x-image-meta-id: 1fc6297a-6c35-4c96-bcb1-9e9d8adee5e8
x-image-meta-deleted: False
x-image-meta-container_format: bare
x-image-meta-checksum: 78e6077fcda0c474d42e2811c51e791f
x-image-meta-protected: False
x-image-meta-min_disk: 0
x-image-meta-min_ram: 0
x-image-meta-created_at: 2014-12-21T07:48:22
x-image-meta-size: 41126400
x-image-meta-status: active
etag: 78e6077fcda0c474d42e2811c51e791f
x-image-meta-is_public: True
date: Sun, 21 Dec 2014 07:48:50 GMT
x-image-meta-owner: fb7cd4084c6d4262a94d406f8418d155
x-image-meta-updated_at: 2014-12-21T07:48:29
content-type: text/html; charset=UTF-8
x-openstack-request-id: req-b4808dc5-1aa1-4df0-b70f-4604355b5fba
x-image-meta-disk_format: raw
x-image-meta-name: cirros-to-delete

curl -i -X DELETE -H 'User-Agent: python-glanceclient' -H 'Content-Type: 
application/octet-stream' -H 'Accept-Encoding: gzip, deflate' -H 'Accept: */*' 
-H 'X-Auth-Token: {SHA1}63a8f4dccb8c42cc741bb638c08fc19dd9ccd360' 
http://10.35.160.133:9292/v1/images/1fc6297a-6c35-4c96-bcb1-9e9d8adee5e8

Version-Release number of selected component (if applicable):


How reproducible:
100%

Steps to Reproduce:
1. Create a new image
2. Delete the image


Actual results:
The image deletion fails. The data is not deleted from the Ceph storage.
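
A couple of rbd commands help confirm that the image data is still in the Ceph
pool (the 'images' pool name comes from the referenced guide):

# rbd ls images | grep 1fc6297a-6c35-4c96-bcb1-9e9d8adee5e8
# rbd snap ls images/1fc6297a-6c35-4c96-bcb1-9e9d8adee5e8
Glance's rbd store normally keeps a protected snapshot named 'snap' on each
image, and a leftover protected snapshot or clone can block the removal.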

Expected results:
The image should be deleted

** Affects: glance
 Importance: Undecided
 Status: New


** Tags: rbd

** Attachment added: glance's log
   
https://bugs.launchpad.net/bugs/1404617/+attachment/4285072/+files/glance-image-delete-fail.log

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to Glance.
https://bugs.launchpad.net/bugs/1404617

Title:
  glance with rbd store fails to delete an image

Status in OpenStack Image Registry and Delivery Service (Glance):
  New

Bug description:
  Description of problem:
  The deletion of an image fails when Glance is configured to work with RBD 
store, with the configuration settings that are described in this manual: 
  http://docs.ceph.com/docs/master/rbd/rbd-openstack/#juno

  It seems like the glance client is stuck.

  
  The CLI debug show:

  # glance --debug image-delete 1fc6297a-6c35-4c96-bcb1-9e9d8adee5e8
  curl -i -X HEAD -H 'User-Agent: python-glanceclient' -H 'Content-Type: 
application/octet-stream' -H 'Accept-Encoding: gzip, deflate' -H 'Accept: */*' 
-H 'X-Auth-Token: {SHA1}63a8f4dccb8c42cc741bb638c08fc19dd9ccd360' 
http://10.35.160.133:9292/v1/images/1fc6297a-6c35-4c96-bcb1-9e9d8adee5e8

  HTTP/1.1 200 OK
  content-length: 0
  x-image-meta-id: 1fc6297a-6c35-4c96-bcb1-9e9d8adee5e8
  x-image-meta-deleted: False
  x-image-meta-container_format: bare
  x-image-meta-checksum: 78e6077fcda0c474d42e2811c51e791f
  x-image-meta-protected: False
  x-image-meta-min_disk: 0
  x-image-meta-min_ram: 0
  x-image-meta-created_at: 2014-12-21T07:48:22
  x-image-meta-size: 41126400
  x-image-meta-status: active
  etag: 78e6077fcda0c474d42e2811c51e791f
  x-image-meta-is_public: True
  date: Sun, 21 Dec 2014 07:48:50 GMT
  x-image-meta-owner: fb7cd4084c6d4262a94d406f8418d155
  x-image-meta-updated_at: 2014-12-21T07:48:29
  content-type: text/html; charset=UTF-8
  x-openstack-request-id: 

[Yahoo-eng-team] [Bug 1394526] [NEW] rbd libvirt driver fails

2014-11-20 Thread Yogev Rabl
Public bug reported:

The nova rbd libvirt driver fails with the following trace:

2014-11-20 11:43:57.327 24448 ERROR nova.openstack.common.threadgroup [-] 
global name 'self' is not defined
2014-11-20 11:43:57.327 24448 TRACE nova.openstack.common.threadgroup Traceback 
(most recent call last):
2014-11-20 11:43:57.327 24448 TRACE nova.openstack.common.threadgroup   File 
/usr/lib/python2.7/site-packages/nova/openstack/common/threadgroup.py, line 
117, in wait
2014-11-20 11:43:57.327 24448 TRACE nova.openstack.common.threadgroup 
x.wait()
2014-11-20 11:43:57.327 24448 TRACE nova.openstack.common.threadgroup   File 
/usr/lib/python2.7/site-packages/nova/openstack/common/threadgroup.py, line 
49, in wait
2014-11-20 11:43:57.327 24448 TRACE nova.openstack.common.threadgroup 
return self.thread.wait()
2014-11-20 11:43:57.327 24448 TRACE nova.openstack.common.threadgroup   File 
/usr/lib/python2.7/site-packages/eventlet/greenthread.py, line 168, in wait
2014-11-20 11:43:57.327 24448 TRACE nova.openstack.common.threadgroup 
return self._exit_event.wait()
2014-11-20 11:43:57.327 24448 TRACE nova.openstack.common.threadgroup   File 
/usr/lib/python2.7/site-packages/eventlet/event.py, line 116, in wait
2014-11-20 11:43:57.327 24448 TRACE nova.openstack.common.threadgroup 
return hubs.get_hub().switch()
2014-11-20 11:43:57.327 24448 TRACE nova.openstack.common.threadgroup   File 
/usr/lib/python2.7/site-packages/eventlet/hubs/hub.py, line 187, in switch
2014-11-20 11:43:57.327 24448 TRACE nova.openstack.common.threadgroup 
return self.greenlet.switch()
2014-11-20 11:43:57.327 24448 TRACE nova.openstack.common.threadgroup   File 
/usr/lib/python2.7/site-packages/eventlet/greenthread.py, line 194, in main
2014-11-20 11:43:57.327 24448 TRACE nova.openstack.common.threadgroup 
result = function(*args, **kwargs)
2014-11-20 11:43:57.327 24448 TRACE nova.openstack.common.threadgroup   File 
/usr/lib/python2.7/site-packages/nova/openstack/common/service.py, line 486, 
in run_service
2014-11-20 11:43:57.327 24448 TRACE nova.openstack.common.threadgroup 
service.start()
2014-11-20 11:43:57.327 24448 TRACE nova.openstack.common.threadgroup   File 
/usr/lib/python2.7/site-packages/nova/service.py, line 180, in start
2014-11-20 11:43:57.327 24448 TRACE nova.openstack.common.threadgroup 
self.manager.pre_start_hook()
2014-11-20 11:43:57.327 24448 TRACE nova.openstack.common.threadgroup   File 
/usr/lib/python2.7/site-packages/nova/compute/manager.py, line 1063, in 
pre_start_hook
2014-11-20 11:43:57.327 24448 TRACE nova.openstack.common.threadgroup 
self.update_available_resource(nova.context.get_admin_context())
2014-11-20 11:43:57.327 24448 TRACE nova.openstack.common.threadgroup   File 
/usr/lib/python2.7/site-packages/nova/compute/manager.py, line 5524, in 
update_available_resource
2014-11-20 11:43:57.327 24448 TRACE nova.openstack.common.threadgroup 
nodenames = set(self.driver.get_available_nodes())
2014-11-20 11:43:57.327 24448 TRACE nova.openstack.common.threadgroup   File 
/usr/lib/python2.7/site-packages/nova/virt/driver.py, line 1169, in 
get_available_nodes
2014-11-20 11:43:57.327 24448 TRACE nova.openstack.common.threadgroup stats 
= self.get_host_stats(refresh=refresh)
2014-11-20 11:43:57.327 24448 TRACE nova.openstack.common.threadgroup   File 
/usr/lib/python2.7/site-packages/nova/virt/libvirt/driver.py, line 4961, in 
get_host_stats
2014-11-20 11:43:57.327 24448 TRACE nova.openstack.common.threadgroup 
return self.host_state.get_host_stats(refresh=refresh)
2014-11-20 11:43:57.327 24448 TRACE nova.openstack.common.threadgroup   File 
/usr/lib/python2.7/site-packages/nova/virt/libvirt/driver.py, line 441, in 
host_state
2014-11-20 11:43:57.327 24448 TRACE nova.openstack.common.threadgroup 
self._host_state = HostState(self)
2014-11-20 11:43:57.327 24448 TRACE nova.openstack.common.threadgroup   File 
/usr/lib/python2.7/site-packages/nova/virt/libvirt/driver.py, line 5355, in 
__init__
2014-11-20 11:43:57.327 24448 TRACE nova.openstack.common.threadgroup 
self.update_status()
2014-11-20 11:43:57.327 24448 TRACE nova.openstack.common.threadgroup   File 
/usr/lib/python2.7/site-packages/nova/virt/libvirt/driver.py, line 5386, in 
update_status
2014-11-20 11:43:57.327 24448 TRACE nova.openstack.common.threadgroup 
disk_info_dict = self.driver.get_local_gb_info()
2014-11-20 11:43:57.327 24448 TRACE nova.openstack.common.threadgroup   File 
/usr/lib/python2.7/site-packages/nova/virt/libvirt/driver.py, line 3892, in 
get_local_gb_info
2014-11-20 11:43:57.327 24448 TRACE nova.openstack.common.threadgroup info 
= self._get_rbd_driver().get_pool_info()
2014-11-20 11:43:57.327 24448 TRACE nova.openstack.common.threadgroup 
NameError: global name 'self' is not defined
2014-11-20 11:43:57.327 24448 TRACE nova.openstack.common.threadgroup 
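
The last frame shows get_local_gb_info() failing with NameError: global name
'self' is not defined, which is what happens when code that references self is
no longer bound to an instance (for example, a method turned into a
staticmethod or a module-level function during a refactor). A minimal
illustration of that error class, not the actual nova code:

    class LibvirtDriverSketch(object):
        def _get_rbd_driver(self):
            ...

        @staticmethod
        def get_local_gb_info():
            # no 'self' is bound inside a staticmethod, so this line raises
            # NameError: global name 'self' is not defined (on Python 2)
            return self._get_rbd_driver().get_pool_info()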


The nova configuration file is set with: 
rbd_user = cinder
rbd_secret_uuid = generated uuid

And in the libvirt section: 

[Yahoo-eng-team] [Bug 1340169] [NEW] failed to attach volumes to instances after configuration change services restart

2014-07-10 Thread Yogev Rabl
Public bug reported:

Description of problem:
The attachment of volumes failed with the errors that are available in the 
attached log file. Prior to the error I was running 8 active instances, made a 
configuration change - increased the number of workers in the Cinder, Nova and 
Glance services - then restarted the services.

Ran the command:
# nova volume-attach 6aac6fb6-ef22-48b0-b6ac-99bc94787422 
57edbc5c-8a1f-49f2-b8bf-280ab857222d auto

+----------+--------------------------------------+
| Property | Value                                |
+----------+--------------------------------------+
| device   | /dev/vdc                             |
| id       | 57edbc5c-8a1f-49f2-b8bf-280ab857222d |
| serverId | 6aac6fb6-ef22-48b0-b6ac-99bc94787422 |
| volumeId | 57edbc5c-8a1f-49f2-b8bf-280ab857222d |
+----------+--------------------------------------+

cinder list output:
 
+--------------------------------------+-----------+---------------+------+-------------+----------+-------------+
|                  ID                  |   Status  |  Display Name | Size | Volume Type | Bootable | Attached to |
+--------------------------------------+-----------+---------------+------+-------------+----------+-------------+
| 57edbc5c-8a1f-49f2-b8bf-280ab857222d | available |   dust-bowl   | 100  |     None    |  false   |             |
| 731a118d-7bd6-4538-a3b2-60543179281e | available | bowl-the-dust | 100  |     None    |  false   |             |
+--------------------------------------+-----------+---------------+------+-------------+----------+-------------+


Version-Release number of selected component (if applicable):
python-cinder-2014.1-7.el7ost.noarch
openstack-nova-network-2014.1-7.el7ost.noarch
python-novaclient-2.17.0-2.el7ost.noarch
openstack-cinder-2014.1-7.el7ost.noarch
openstack-nova-common-2014.1-7.el7ost.noarch
python-cinderclient-1.0.9-1.el7ost.noarch
openstack-nova-compute-2014.1-7.el7ost.noarch
openstack-nova-conductor-2014.1-7.el7ost.noarch
openstack-nova-scheduler-2014.1-7.el7ost.noarch
openstack-nova-api-2014.1-7.el7ost.noarch
openstack-nova-cert-2014.1-7.el7ost.noarch
openstack-nova-novncproxy-2014.1-7.el7ost.noarch
python-nova-2014.1-7.el7ost.noarch
openstack-nova-console-2014.1-7.el7ost.noarch

How reproducible:
100%

Steps to Reproduce:
1. Launch instances
2. Increase the number of workers for the Cinder, Nova and Glance services (see the sketch after these steps)
3. Create a volume
4. Attach the volume to the instance.
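
The worker settings referred to in step 2 are the usual per-service options,
roughly as follows (Icehouse-era option names; the values are only an example):

In /etc/cinder/cinder.conf:
[DEFAULT]
osapi_volume_workers = 4

In /etc/nova/nova.conf:
[DEFAULT]
osapi_compute_workers = 4
metadata_workers = 4

In /etc/glance/glance-api.conf:
[DEFAULT]
workers = 4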

Actual results:
The attachment process fails.

Expected results:
The volume should be attached to the instance.

** Affects: nova
 Importance: Undecided
 Status: New

** Attachment added: volume-attach-fail.log
   
https://bugs.launchpad.net/bugs/1340169/+attachment/4149555/+files/volume-attach-fail.log

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1340169

Title:
  failed to attach volumes to instances after configuration change 
  services restart

Status in OpenStack Compute (Nova):
  New

Bug description:
  Description of problem:
  The attachment of volumes failed with the errors that are available in the 
attached log file. Prior to the error I was running 8 active instances, made a 
configuration change - increased the number of workers in the Cinder, Nova and 
Glance services - then restarted the services.

  Ran the command:
  # nova volume-attach 6aac6fb6-ef22-48b0-b6ac-99bc94787422 
57edbc5c-8a1f-49f2-b8bf-280ab857222d auto

  +--+--+
  | Property | Value|
  +--+--+
  | device   | /dev/vdc |
  | id   | 57edbc5c-8a1f-49f2-b8bf-280ab857222d |
  | serverId | 6aac6fb6-ef22-48b0-b6ac-99bc94787422 |
  | volumeId | 57edbc5c-8a1f-49f2-b8bf-280ab857222d |
  +--+--+

  cinder list output:
   
+--+---+---+--+-+--+-+
  |  ID  |   Status  |  Display Name | Size | 
Volume Type | Bootable | Attached to |
  
+--+---+---+--+-+--+-+
  | 57edbc5c-8a1f-49f2-b8bf-280ab857222d | available |   dust-bowl   | 100  |   
  None|  false   | |
  | 731a118d-7bd6-4538-a3b2-60543179281e | available | bowl-the-dust | 100  |   
  None|  false   | |
  
+--+---+---+--+-+--+-+

  
  Version-Release number of selected component (if applicable):
  python-cinder-2014.1-7.el7ost.noarch
  openstack-nova-network-2014.1-7.el7ost.noarch
  python-novaclient-2.17.0-2.el7ost.noarch
  openstack-cinder-2014.1-7.el7ost.noarch
  openstack-nova-common-2014.1-7.el7ost.noarch
  

[Yahoo-eng-team] [Bug 1340197] [NEW] Horizon doesn't notify when fail to attach a volume

2014-07-10 Thread Yogev Rabl
Public bug reported:

Description of problem:
Horizon doesn't notify the user when the volume attachment process fails with errors.
The nova-compute log shows errors during the volume attachment, 
but Horizon doesn't present the failure or the error. 

Version-Release number of selected component (if applicable):
python-django-horizon-2014.1-7.el7ost.noarch
openstack-nova-network-2014.1-7.el7ost.noarch
python-novaclient-2.17.0-2.el7ost.noarch
openstack-nova-common-2014.1-7.el7ost.noarch
openstack-nova-compute-2014.1-7.el7ost.noarch
openstack-nova-conductor-2014.1-7.el7ost.noarch
openstack-nova-scheduler-2014.1-7.el7ost.noarch
openstack-nova-api-2014.1-7.el7ost.noarch
openstack-nova-cert-2014.1-7.el7ost.noarch
openstack-nova-novncproxy-2014.1-7.el7ost.noarch
python-nova-2014.1-7.el7ost.noarch
openstack-nova-console-2014.1-7.el7ost.noarch


How reproducible:
100%

Steps to Reproduce:
1. Follow the step of the bug: https://bugs.launchpad.net/nova/+bug/1340169
2. In the Horizon try to attach a volume

Actual results:
The Horizon shows an info message: 
Info: Attaching volume bowl-the-dust to instance 
cougar-01-fe5510a5-c50c-46ee-9d71-6f8e41a58ecc on /dev/vdc.

The volume status changes to 'attaching', then changes back to 'available'.

Expected results:
An error should appear, saying "Error: the volume attachment failed".

Additional info:
The Horizon log is attached.
The nova-compute log with the volume attachment error is available in the bug 
https://bugs.launchpad.net/nova/+bug/1340169

** Affects: horizon
 Importance: Undecided
 Status: New

** Attachment added: Horizon log
   
https://bugs.launchpad.net/bugs/1340197/+attachment/4149607/+files/horizon-volume-attach-fail.log

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1340197

Title:
  Horizon doesn't notify when fail to attach a volume

Status in OpenStack Dashboard (Horizon):
  New

Bug description:
  Description of problem:
  Horizon doesn't notify the user when the volume attachment process fails with errors.
  The nova-compute log shows errors during the volume attachment, 
but Horizon doesn't present the failure or the error. 

  Version-Release number of selected component (if applicable):
  python-django-horizon-2014.1-7.el7ost.noarch
  openstack-nova-network-2014.1-7.el7ost.noarch
  python-novaclient-2.17.0-2.el7ost.noarch
  openstack-nova-common-2014.1-7.el7ost.noarch
  openstack-nova-compute-2014.1-7.el7ost.noarch
  openstack-nova-conductor-2014.1-7.el7ost.noarch
  openstack-nova-scheduler-2014.1-7.el7ost.noarch
  openstack-nova-api-2014.1-7.el7ost.noarch
  openstack-nova-cert-2014.1-7.el7ost.noarch
  openstack-nova-novncproxy-2014.1-7.el7ost.noarch
  python-nova-2014.1-7.el7ost.noarch
  openstack-nova-console-2014.1-7.el7ost.noarch

  
  How reproducible:
  100%

  Steps to Reproduce:
  1. Follow the step of the bug: https://bugs.launchpad.net/nova/+bug/1340169
  2. In the Horizon try to attach a volume

  Actual results:
  The Horizon shows an info message: 
  Info: Attaching volume bowl-the-dust to instance 
cougar-01-fe5510a5-c50c-46ee-9d71-6f8e41a58ecc on /dev/vdc.

  The volume status changes to 'attaching' then change back to
  available.

  Expected results:
  An error should appear saying Error: the volume attachment failed

  Additional info:
  The Horizon log is attached.
  The nova-compute log with the volume attachment error is available in the bug 
https://bugs.launchpad.net/nova/+bug/1340169

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1340197/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1339564] [NEW] glance image-delete on an image with the status saving doesn't delete the image's file from store

2014-07-09 Thread Yogev Rabl
Public bug reported:

Description of problem:
After running the scenario described in 
bugs.launchpad.net/cinder/+bug/1339545, I've deleted two images that were 
stuck in 'saving' status with 
# glance image-delete <image-id> <image-id>

Both of the images' files were still in the store: 
#ls -l /var/lib/glance/images
-rw-r-. 1 glance glance  2158362624 Jul  9 10:18 
d4da7dea-c94d-4c9e-a987-955a905a7fed
-rw-r-. 1 glance glance  1630994432 Jul  9 10:09 
8532ef07-3dfa-4d63-8537-033c31b16814

Version-Release number of selected component (if applicable):
python-glanceclient-0.12.0-1.el7ost.noarch
python-glance-2014.1-4.el7ost.noarch
openstack-glance-2014.1-4.el7ost.noarch


How reproducible:


Steps to Reproduce:
1. Run the scenario from bugs.launchpad.net/cinder/+bug/1339545
2. Delete the image:
# glance image-delete <image-id>


Actual results:
The file is still in the store.
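
Confirming and cleaning up the leftovers by hand looks roughly like this for
the filesystem store (the UUIDs are the ones listed above; the images are
already gone from the registry, so the orphaned files can be removed directly):

# ls -l /var/lib/glance/images/
# rm /var/lib/glance/images/d4da7dea-c94d-4c9e-a987-955a905a7fed
# rm /var/lib/glance/images/8532ef07-3dfa-4d63-8537-033c31b16814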

Expected results:
The file should be deleted from the store.

Additional info:
The logs are attached -
images uid's: 
d4da7dea-c94d-4c9e-a987-955a905a7fed
8532ef07-3dfa-4d63-8537-033c31b16814

** Affects: glance
 Importance: Undecided
 Status: New

** Attachment added: delete_images.log
   
https://bugs.launchpad.net/bugs/1339564/+attachment/4148612/+files/delete_images.log

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to Glance.
https://bugs.launchpad.net/bugs/1339564

Title:
  glance image-delete on an image with the status saving doesn't
  delete the image's file from store

Status in OpenStack Image Registry and Delivery Service (Glance):
  New

Bug description:
  Description of problem:
  After running the scenario described in 
bugs.launchpad.net/cinder/+bug/1339545, I've deleted two images that were 
stuck in 'saving' status with 
  # glance image-delete <image-id> <image-id>

  both of the image's files were still in the store: 
  #ls -l /var/lib/glance/images
  -rw-r-. 1 glance glance  2158362624 Jul  9 10:18 
d4da7dea-c94d-4c9e-a987-955a905a7fed
  -rw-r-. 1 glance glance  1630994432 Jul  9 10:09 
8532ef07-3dfa-4d63-8537-033c31b16814

  Version-Release number of selected component (if applicable):
  python-glanceclient-0.12.0-1.el7ost.noarch
  python-glance-2014.1-4.el7ost.noarch
  openstack-glance-2014.1-4.el7ost.noarch

  
  How reproducible:

  
  Steps to Reproduce:
  1. Run the scenario from bugs.launchpad.net/cinder/+bug/1339545
  2. Delete the image:
  # glance image-delete image-id

  
  Actual results:
  The file is still in the store.

  Expected results:
  The file has been deleted from the store.

  Additional info:
  The logs are attached -
  images uid's: 
  d4da7dea-c94d-4c9e-a987-955a905a7fed
  8532ef07-3dfa-4d63-8537-033c31b16814

To manage notifications about this bug go to:
https://bugs.launchpad.net/glance/+bug/1339564/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1338485] [NEW] Glance fail to alert when rados packages are not installed

2014-07-07 Thread Yogev Rabl
Public bug reported:

Description of problem:
When Glance is configured to work with the rbd backend (Ceph) and the Rados 
packages (python-ceph) are not installed, the error that Glance's logs show 
is: 

2014-07-07 11:28:27.982  TRACE glance.api.v1.upload_utils Traceback (most 
recent call last):
2014-07-07 11:28:27.982  TRACE glance.api.v1.upload_utils   File 
/usr/lib/python2.7/site-packages/glance/api/v1/upload_utils.py, line 99, in 
upload_data_to_store
2014-07-07 11:28:27.982  TRACE glance.api.v1.upload_utils store)
2014-07-07 11:28:27.982  TRACE glance.api.v1.upload_utils   File 
/usr/lib/python2.7/site-packages/glance/store/__init__.py, line 380, in 
store_add_to_backend
2014-07-07 11:28:27.982  TRACE glance.api.v1.upload_utils (location, 
size, checksum, metadata) = store.add(image_id, data, size)
2014-07-07 11:28:27.982  TRACE glance.api.v1.upload_utils   File 
/usr/lib/python2.7/site-packages/glance/store/rbd.py, line 319, in add
2014-07-07 11:28:27.982  TRACE glance.api.v1.upload_utils with 
rados.Rados(conffile=self.conf_file, rados_id=self.user) as conn:
2014-07-07 11:28:27.982  TRACE glance.api.v1.upload_utils AttributeError: 
'NoneType' object has no attribute 'Rados'
2014-07-07 11:28:27.982  TRACE glance.api.v1.upload_utils 

Instead of only catching the import error, Glance should fail explicitly
because the Rados packages are missing.
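
A minimal sketch of the kind of guard being asked for here (illustrative only,
not the actual glance store code):

    try:
        import rados
    except ImportError:
        rados = None

    def check_rados_available():
        # fail loudly at startup/configuration time instead of raising an
        # AttributeError on 'NoneType' when the store is first used
        if rados is None:
            raise RuntimeError("The rbd store is configured but the 'rados' "
                               "Python bindings (python-ceph) are not installed")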


Version-Release number of selected component (if applicable):
python-glance-2014.1-4.el7ost.noarch
python-glanceclient-0.12.0-1.el7ost.noarch
openstack-glance-2014.1-4.el7ost.noarch


How reproducible:
100%

Steps to Reproduce:
1. Configure Glance to work with the rbd backend (see 
http://ceph.com/docs/master/rbd/rbd-openstack/?highlight=openstack), without 
installing the python-ceph packages.
2. Try to create a new image.


Actual results:
Glance catches an import error.

Expected results:
The Glance should alert that the Rados packages are missing.

** Affects: glance
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to Glance.
https://bugs.launchpad.net/bugs/1338485

Title:
  Glance fail to alert when rados packages are not installed

Status in OpenStack Image Registry and Delivery Service (Glance):
  New

Bug description:
  Description of problem:
  When Glance is configured to work with the rbd backend (Ceph) and the Rados 
packages (python-ceph) are not installed, the error that Glance's logs show 
is: 

  2014-07-07 11:28:27.982  TRACE glance.api.v1.upload_utils Traceback (most 
recent call last):
  2014-07-07 11:28:27.982  TRACE glance.api.v1.upload_utils   File 
/usr/lib/python2.7/site-packages/glance/api/v1/upload_utils.py, line 99, in 
upload_data_to_store
  2014-07-07 11:28:27.982  TRACE glance.api.v1.upload_utils store)
  2014-07-07 11:28:27.982  TRACE glance.api.v1.upload_utils   File 
/usr/lib/python2.7/site-packages/glance/store/__init__.py, line 380, in 
store_add_to_backend
  2014-07-07 11:28:27.982  TRACE glance.api.v1.upload_utils (location, 
size, checksum, metadata) = store.add(image_id, data, size)
  2014-07-07 11:28:27.982  TRACE glance.api.v1.upload_utils   File 
/usr/lib/python2.7/site-packages/glance/store/rbd.py, line 319, in add
  2014-07-07 11:28:27.982  TRACE glance.api.v1.upload_utils with 
rados.Rados(conffile=self.conf_file, rados_id=self.user) as conn:
  2014-07-07 11:28:27.982  TRACE glance.api.v1.upload_utils AttributeError: 
'NoneType' object has no attribute 'Rados'
  2014-07-07 11:28:27.982  TRACE glance.api.v1.upload_utils 

  Instead of only catching the import error, Glance should fail
  explicitly because the Rados packages are missing.

  
  Version-Release number of selected component (if applicable):
  python-glance-2014.1-4.el7ost.noarch
  python-glanceclient-0.12.0-1.el7ost.noarch
  openstack-glance-2014.1-4.el7ost.noarch

  
  How reproducible:
  100%

  Steps to Reproduce:
  1. Configure the glance to work with rbd backend (see 
http://ceph.com/docs/master/rbd/rbd-openstack/?highlight=openstack). without 
installing the python-ceph packages.
  2. try to create a new image.

  
  Actual results:
  The Glance catch an import error.

  Expected results:
  The Glance should alert that the Rados packages are missing.

To manage notifications about this bug go to:
https://bugs.launchpad.net/glance/+bug/1338485/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1323527] [NEW] Capitalization of words is not consistent in the flavor creation window

2014-05-27 Thread Yogev Rabl
Public bug reported:

Description of problem:
In the flavor creation window, in the flavor access tab, the capitalization of 
the headers "All Projects" and "Selected projects" is not consistent. 


Version-Release number of selected component (if applicable):
python-django-horizon-2014.1-6.el7ost.noarch

How reproducible:
100%

Steps to Reproduce:
1. Go to the flavor tab.
2. create a new flavor 
3. go to the flavor access tab

** Affects: horizon
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1323527

Title:
  Capitalization of words is not consistent in the flavor creation
  window

Status in OpenStack Dashboard (Horizon):
  New

Bug description:
  Description of problem:
  In the flavor creation window, in the flavor access tab, the capitalization 
of the headers "All Projects" and "Selected projects" is not consistent. 

  
  Version-Release number of selected component (if applicable):
  python-django-horizon-2014.1-6.el7ost.noarch

  How reproducible:
  100%

  Steps to Reproduce:
  1. Go to the flavor tab.
  2. create a new flavor 
  3. go to the flavor access tab

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1323527/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1323541] [NEW] The swap measurement unit is not specified in the CLI table

2014-05-27 Thread Yogev Rabl
Public bug reported:

Description of problem:
The measurement unit of the swap memory in a flavor is MB, unlike all the 
other disk units, which are GB. 
This might cause confusion in the CLI when the unit is not specified:
 
+--------------------------------------+-----------+-----------+------+-----------+------+-------+-------------+-----------+
| ID                                   | Name      | Memory_MB | Disk | Ephemeral | Swap | VCPUs | RXTX_Factor | Is_Public |
+--------------------------------------+-----------+-----------+------+-----------+------+-------+-------------+-----------+
| 13eec680-fa84-4c8a-98ed-51ad564bb0c6 | m1.tiny   | 512       | 1    | 0         | 512  | 1     | 1.0         | True      |
| 3                                    | m1.medium | 4096      | 40   | 0         |      | 2     | 1.0         | True      |
| 4                                    | m1.large  | 8192      | 80   | 0         |      | 4     | 1.0         | True      |
| 41f44ff1-b09c-4d14-948d-ead7cf2177a9 | m1.small  | 2048      | 20   | 40        |      | 1     | 1.0         | True      |
| 5                                    | m1.xlarge | 16384     | 160  | 0         |      | 8     | 1.0         | True      |
+--------------------------------------+-----------+-----------+------+-----------+------+-------+-------------+-----------+

Version-Release number of selected component (if applicable):
openstack-nova-compute-2014.1-2.el7ost.noarch
openstack-nova-cert-2014.1-2.el7ost.noarch
openstack-nova-novncproxy-2014.1-2.el7ost.noarch
python-novaclient-2.17.0-1.el7ost.noarch
python-nova-2014.1-2.el7ost.noarch
openstack-nova-api-2014.1-2.el7ost.noarch
openstack-nova-network-2014.1-2.el7ost.noarch
openstack-nova-console-2014.1-2.el7ost.noarch
openstack-nova-scheduler-2014.1-2.el7ost.noarch
openstack-nova-conductor-2014.1-2.el7ost.noarch
openstack-nova-common-2014.1-2.el7ost.noarch


How reproducible:
100%

Steps to Reproduce:
1. add swap to a flavor (see the sketch after these steps)
2. run the CLI command:
# nova flavor-list
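
For step 1, a flavor with swap can be created along these lines (the name and
values are only an example; note that the swap value is given in MB, which is
exactly the ambiguity this report is about):

# nova flavor-create m1.tiny.swap auto 512 1 1 --swap 512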

** Affects: nova
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1323541

Title:
  The swap measurement unit is not specified in the CLI table

Status in OpenStack Compute (Nova):
  New

Bug description:
  Description of problem:
  The measurement unit of the swap memory in a flavor is MB, unlike all the 
other disk units, which are GB. 
  This might cause confusion in the CLI when the unit is not specified:
   
+--+---+---+--+---+--+---+-+---+
  | ID   | Name  | Memory_MB | Disk | 
Ephemeral | Swap | VCPUs | RXTX_Factor | Is_Public |
  
+--+---+---+--+---+--+---+-+---+
  | 13eec680-fa84-4c8a-98ed-51ad564bb0c6 | m1.tiny   | 512   | 1| 0 
| 512  | 1 | 1.0 | True  |
  | 3| m1.medium | 4096  | 40   | 0 
|  | 2 | 1.0 | True  |
  | 4| m1.large  | 8192  | 80   | 0 
|  | 4 | 1.0 | True  |
  | 41f44ff1-b09c-4d14-948d-ead7cf2177a9 | m1.small  | 2048  | 20   | 40
|  | 1 | 1.0 | True  |
  | 5| m1.xlarge | 16384 | 160  | 0 
|  | 8 | 1.0 | True  |
  
+--+---+---+--+---+--+---+-+---+

  Version-Release number of selected component (if applicable):
  openstack-nova-compute-2014.1-2.el7ost.noarch
  openstack-nova-cert-2014.1-2.el7ost.noarch
  openstack-nova-novncproxy-2014.1-2.el7ost.noarch
  python-novaclient-2.17.0-1.el7ost.noarch
  python-nova-2014.1-2.el7ost.noarch
  openstack-nova-api-2014.1-2.el7ost.noarch
  openstack-nova-network-2014.1-2.el7ost.noarch
  openstack-nova-console-2014.1-2.el7ost.noarch
  openstack-nova-scheduler-2014.1-2.el7ost.noarch
  openstack-nova-conductor-2014.1-2.el7ost.noarch
  openstack-nova-common-2014.1-2.el7ost.noarch

  
  How reproducible:
  100%

  Steps to Reproduce:
  1. add swap to a flavor
  2. run the CLI command:
  # nova flavor-list

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1323541/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1314102] [NEW] Horizon fail to send rebuild instance command

2014-04-29 Thread Yogev Rabl
Public bug reported:

Description of problem:
The rebuild command fails when sent from the Horizon. The error in the horizon 
log is:

2014-04-29 08:51:57,586 27987 ERROR django.request Internal Server Error: 
/dashboard/project/instances/df65444f-7af6-4a52-a49e-98116c94e76e/rebuild
Traceback (most recent call last):
  File /usr/lib/python2.6/site-packages/django/core/handlers/base.py, line 
136, in get_response
response = response.render()
  File /usr/lib/python2.6/site-packages/django/template/response.py, line 
104, in render
self._set_content(self.rendered_content)
  File /usr/lib/python2.6/site-packages/django/template/response.py, line 81, 
in rendered_content
content = template.render(context)
  File /usr/lib/python2.6/site-packages/django/template/loader_tags.py, line 
123, in render
return compiled_parent._render(context)
  File /usr/lib/python2.6/site-packages/django/template/base.py, line 134, in 
_render
return self.nodelist.render(context)
  File /usr/lib/python2.6/site-packages/django/template/base.py, line 823, in 
render
bit = self.render_node(node, context)
  File /usr/lib/python2.6/site-packages/django/template/debug.py, line 74, in 
render_node
return node.render(context)
  File /usr/lib/python2.6/site-packages/django/template/loader_tags.py, line 
62, in render
result = block.nodelist.render(context)
  File /usr/lib/python2.6/site-packages/django/template/base.py, line 823, in 
render
bit = self.render_node(node, context)
  File /usr/lib/python2.6/site-packages/django/template/debug.py, line 74, in 
render_node
return node.render(context)
  File /usr/lib/python2.6/site-packages/django/template/loader_tags.py, line 
155, in render
return self.render_template(self.template, context)
  File /usr/lib/python2.6/site-packages/django/template/loader_tags.py, line 
137, in render_template
output = template.render(context)
  File /usr/lib/python2.6/site-packages/django/template/base.py, line 140, in 
render
return self._render(context)
  File /usr/lib/python2.6/site-packages/django/template/base.py, line 134, in 
_render
return self.nodelist.render(context)
  File /usr/lib/python2.6/site-packages/django/template/base.py, line 823, in 
render
bit = self.render_node(node, context)
  File /usr/lib/python2.6/site-packages/django/template/debug.py, line 74, in 
render_node
return node.render(context)
  File /usr/lib/python2.6/site-packages/django/template/defaulttags.py, line 
186, in render
nodelist.append(node.render(context))
  File /usr/lib/python2.6/site-packages/django/template/debug.py, line 87, in 
render
output = force_unicode(output)
  File /usr/lib/python2.6/site-packages/django/utils/encoding.py, line 71, in 
force_unicode
s = unicode(s)
  File /usr/lib/python2.6/site-packages/django/forms/forms.py, line 411, in 
__unicode__
return self.as_widget()
  File /usr/lib/python2.6/site-packages/django/forms/forms.py, line 458, in 
as_widget
return widget.render(name, self.value(), attrs=attrs)
  File /usr/lib/python2.6/site-packages/django/forms/widgets.py, line 547, in 
render
options = self.render_options(choices, [value])
  File /usr/lib/python2.6/site-packages/django/forms/widgets.py, line 577, in 
render_options
output.append(self.render_option(selected_choices, option_value, 
option_label))
  File /usr/lib/python2.6/site-packages/horizon/utils/fields.py, line 127, in 
render_option
option_label = self.transform(option_label)
  File 
/usr/share/openstack-dashboard/openstack_dashboard/wsgi/../../openstack_dashboard/dashboards/project/instances/forms.py,
 line 34, in _image_choice_title
gb = filesizeformat(img.bytes)
AttributeError: 'NoneType' object has no attribute 'bytes'
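
The last frame shows _image_choice_title() assuming every entry in the image
choices list has a bytes attribute, while here the image object is None. A
defensive variant of such a helper, as an illustration of the failure rather
than the actual Horizon fix:

    from django.template.defaultfilters import filesizeformat

    def image_choice_title(img):
        # img can be None when an image in the choices list could not be
        # retrieved, so guard before touching img.bytes
        if img is None or not getattr(img, 'bytes', None):
            return getattr(img, 'name', None) or getattr(img, 'id', '')
        return '%s (%s)' % (img.name or img.id, filesizeformat(img.bytes))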

Version-Release number of selected component (if applicable):
python-django-horizon-2013.2.3-1.el6ost.noarch

How reproducible:
100%

Steps to Reproduce:
1. launch an instance
2. send the rebuild command from the horizon
3.

Actual results:
the action fails 

Expected results:
The command should reach the nova (at the very least).

** Affects: horizon
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1314102

Title:
  Horizon fail to send rebuild instance command

Status in OpenStack Dashboard (Horizon):
  New

Bug description:
  Description of problem:
  The rebuild command fails when sent from the Horizon. The error in the 
horizon log is:

  2014-04-29 08:51:57,586 27987 ERROR django.request Internal Server Error: 
/dashboard/project/instances/df65444f-7af6-4a52-a49e-98116c94e76e/rebuild
  Traceback (most recent call last):
File /usr/lib/python2.6/site-packages/django/core/handlers/base.py, line 
136, in get_response
  response = response.render()
File /usr/lib/python2.6/site-packages/django/template/response.py, line 
104, in render

[Yahoo-eng-team] [Bug 1313573] [NEW] nova backup fails to backup an instance with attached volume

2014-04-28 Thread Yogev Rabl
Public bug reported:

Description of problem:
An instance has an attached volume. After running the command:
# nova backup <instance id> <backup name> snapshot <rotation (an integer)>
an image has been created (type 'backup') but its status is stuck in 'queued'. 

Version-Release number of selected component (if applicable):
openstack-nova-compute-2013.2.3-6.el6ost.noarch
openstack-nova-conductor-2013.2.3-6.el6ost.noarch
openstack-nova-novncproxy-2013.2.3-6.el6ost.noarch
openstack-nova-scheduler-2013.2.3-6.el6ost.noarch
openstack-nova-api-2013.2.3-6.el6ost.noarch
openstack-nova-cert-2013.2.3-6.el6ost.noarch

python-glance-2013.2.3-2.el6ost.noarch
python-glanceclient-0.12.0-2.el6ost.noarch
openstack-glance-2013.2.3-2.el6ost.noarch

How reproducible:
100%

Steps to Reproduce:
1. launch an instance from a volume.
2. backup the instance.


Actual results:
The backup is stuck in queued state.

Expected results:
the backup should be available as an image in Glance.

Additional info:
The nova-compute error and the glance logs are attached.

** Affects: nova
 Importance: Undecided
 Status: New

** Attachment added: glance-api.log
   
https://bugs.launchpad.net/bugs/1313573/+attachment/4099226/+files/glance-api.log

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1313573

Title:
  nova backup fails to backup an instance with attached volume

Status in OpenStack Compute (Nova):
  New

Bug description:
  Description of problem:
  An instance has an attached volume. After running the command:
  # nova backup <instance id> <backup name> snapshot <rotation (an integer)>
  an image has been created (type 'backup') but its status is stuck in 'queued'. 

  Version-Release number of selected component (if applicable):
  openstack-nova-compute-2013.2.3-6.el6ost.noarch
  openstack-nova-conductor-2013.2.3-6.el6ost.noarch
  openstack-nova-novncproxy-2013.2.3-6.el6ost.noarch
  openstack-nova-scheduler-2013.2.3-6.el6ost.noarch
  openstack-nova-api-2013.2.3-6.el6ost.noarch
  openstack-nova-cert-2013.2.3-6.el6ost.noarch

  python-glance-2013.2.3-2.el6ost.noarch
  python-glanceclient-0.12.0-2.el6ost.noarch
  openstack-glance-2013.2.3-2.el6ost.noarch

  How reproducible:
  100%

  Steps to Reproduce:
  1. launch an instance from a volume.
  2. backup the instance.

  
  Actual results:
  The backup is stuck in queued state.

  Expected results:
  the backup should be available as an image in Glance.

  Additional info:
  The nova-compute error and the glance logs are attached.

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1313573/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1313707] [NEW] instance status turn to ERROR when running instance suspend

2014-04-28 Thread Yogev Rabl
Public bug reported:

Description of problem:
When trying to suspend an instance, the instance's status turns to ERROR. 
The instance's flavor details are:

+----------------------------+--------------------------------------+
| Property                   | Value                                |
+----------------------------+--------------------------------------+
| name                       | m1.small                             |
| ram                        | 2048                                 |
| OS-FLV-DISABLED:disabled   | False                                |
| vcpus                      | 1                                    |
| extra_specs                | {}                                   |
| swap                       |                                      |
| os-flavor-access:is_public | True                                 |
| rxtx_factor                | 1.0                                  |
| OS-FLV-EXT-DATA:ephemeral  | 40                                   |
| disk                       | 20                                   |
| id                         | 7427e83a-5f96-43af-936b-a054191482ab |
+----------------------------+--------------------------------------+

Version-Release number of selected component (if applicable):

openstack-nova-common-2013.2.3-6.el6ost.noarch
openstack-nova-console-2013.2.3-6.el6ost.noarch
openstack-nova-network-2013.2.3-6.el6ost.noarch
python-novaclient-2.15.0-4.el6ost.noarch
python-nova-2013.2.3-6.el6ost.noarch
openstack-nova-compute-2013.2.3-6.el6ost.noarch
openstack-nova-conductor-2013.2.3-6.el6ost.noarch
openstack-nova-novncproxy-2013.2.3-6.el6ost.noarch
openstack-nova-scheduler-2013.2.3-6.el6ost.noarch
openstack-nova-api-2013.2.3-6.el6ost.noarch
openstack-nova-cert-2013.2.3-6.el6ost.noarch

How reproducible:
100%

Steps to Reproduce:
1. launch an instance from an iso image with the flavor as it is detailed above.
2. suspend the instance.


Actual results:
The instance status turns to ERROR.

Expected results:
The instance should be suspended


Additional info:
The error from the compute log is attached.

** Affects: nova
 Importance: Undecided
 Status: New

** Attachment added: nova compute log
   
https://bugs.launchpad.net/bugs/1313707/+attachment/4099437/+files/nova-compute.txt

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1313707

Title:
  instance status turn to ERROR when running instance suspend

Status in OpenStack Compute (Nova):
  New

Bug description:
  Description of problem:
  When trying to suspend an instance, the instance's status turns to ERROR. 
  The instance's flavor details are:

  ++--+
  | Property   | Value|
  ++--+
  | name   | m1.small |
  | ram| 2048 |
  | OS-FLV-DISABLED:disabled   | False|
  | vcpus  | 1|
  | extra_specs| {}   |
  | swap   |  |
  | os-flavor-access:is_public | True |
  | rxtx_factor| 1.0  |
  | OS-FLV-EXT-DATA:ephemeral  | 40   |
  | disk   | 20   |
  | id | 7427e83a-5f96-43af-936b-a054191482ab |
  ++--+

  Version-Release number of selected component (if applicable):

  openstack-nova-common-2013.2.3-6.el6ost.noarch
  openstack-nova-console-2013.2.3-6.el6ost.noarch
  openstack-nova-network-2013.2.3-6.el6ost.noarch
  python-novaclient-2.15.0-4.el6ost.noarch
  python-nova-2013.2.3-6.el6ost.noarch
  openstack-nova-compute-2013.2.3-6.el6ost.noarch
  openstack-nova-conductor-2013.2.3-6.el6ost.noarch
  openstack-nova-novncproxy-2013.2.3-6.el6ost.noarch
  openstack-nova-scheduler-2013.2.3-6.el6ost.noarch
  openstack-nova-api-2013.2.3-6.el6ost.noarch
  openstack-nova-cert-2013.2.3-6.el6ost.noarch

  How reproducible:
  100%

  Steps to Reproduce:
  1. launch an instance from an iso image with the flavor as it is detailed 
above.
  2. suspend the instance.

  
  Actual results:
  The instance status turns to ERROR.

  Expected results:
  The instance should be suspended

  
  Additional info:
  The error from the compute log is attached.

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1313707/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : 

[Yahoo-eng-team] [Bug 1307088] [NEW] can't attach a read only volume to an instance

2014-04-13 Thread Yogev Rabl
Public bug reported:

Description of problem:

Attaching a read-only volume to an instance failed. OpenStack was installed as
AIO, and Cinder was configured with a NetApp back end.
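
Presumably the volume was flagged read-only with the cinder client before the
attach, along these lines (the IDs are placeholders):

# cinder readonly-mode-update <volume id> True
# nova volume-attach <instance id> <volume id> auto
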
The following error is from the nova-compute log:

2014-04-13 11:28:17.838 25176 ERROR oslo.messaging.rpc.dispatcher [-] Exception 
during message handling: Invalid input received: Invalid attaching mode 'rw' 
for volume 3f5828e1-77b2-4302-9cdf-486f70834c31.
2014-04-13 11:28:17.838 25176 TRACE oslo.messaging.rpc.dispatcher Traceback 
(most recent call last):
2014-04-13 11:28:17.838 25176 TRACE oslo.messaging.rpc.dispatcher   File 
/usr/lib/python2.7/site-packages/oslo/messaging/rpc/dispatcher.py, line 133, 
in _dispatch_and_reply
2014-04-13 11:28:17.838 25176 TRACE oslo.messaging.rpc.dispatcher 
incoming.message))
2014-04-13 11:28:17.838 25176 TRACE oslo.messaging.rpc.dispatcher   File 
/usr/lib/python2.7/site-packages/oslo/messaging/rpc/dispatcher.py, line 176, 
in _dispatch
2014-04-13 11:28:17.838 25176 TRACE oslo.messaging.rpc.dispatcher return 
self._do_dispatch(endpoint, method, ctxt, args)
2014-04-13 11:28:17.838 25176 TRACE oslo.messaging.rpc.dispatcher   File 
/usr/lib/python2.7/site-packages/oslo/messaging/rpc/dispatcher.py, line 122, 
in _do_dispatch
2014-04-13 11:28:17.838 25176 TRACE oslo.messaging.rpc.dispatcher result = 
getattr(endpoint, method)(ctxt, **new_args)
2014-04-13 11:28:17.838 25176 TRACE oslo.messaging.rpc.dispatcher   File 
/usr/lib/python2.7/site-packages/nova/compute/manager.py, line 360, in 
decorated_function
2014-04-13 11:28:17.838 25176 TRACE oslo.messaging.rpc.dispatcher return 
function(self, context, *args, **kwargs)
2014-04-13 11:28:17.838 25176 TRACE oslo.messaging.rpc.dispatcher   File 
/usr/lib/python2.7/site-packages/nova/exception.py, line 88, in wrapped
2014-04-13 11:28:17.838 25176 TRACE oslo.messaging.rpc.dispatcher payload)
2014-04-13 11:28:17.838 25176 TRACE oslo.messaging.rpc.dispatcher   File 
/usr/lib/python2.7/site-packages/nova/openstack/common/excutils.py, line 68, 
in __exit__
2014-04-13 11:28:17.838 25176 TRACE oslo.messaging.rpc.dispatcher 
six.reraise(self.type_, self.value, self.tb)
2014-04-13 11:28:17.838 25176 TRACE oslo.messaging.rpc.dispatcher   File 
/usr/lib/python2.7/site-packages/nova/exception.py, line 71, in wrapped
2014-04-13 11:28:17.838 25176 TRACE oslo.messaging.rpc.dispatcher return 
f(self, context, *args, **kw)
2014-04-13 11:28:17.838 25176 TRACE oslo.messaging.rpc.dispatcher   File 
/usr/lib/python2.7/site-packages/nova/compute/manager.py, line 244, in 
decorated_function
2014-04-13 11:28:17.838 25176 TRACE oslo.messaging.rpc.dispatcher pass
2014-04-13 11:28:17.838 25176 TRACE oslo.messaging.rpc.dispatcher   File 
/usr/lib/python2.7/site-packages/nova/openstack/common/excutils.py, line 68, 
in __exit__
2014-04-13 11:28:17.838 25176 TRACE oslo.messaging.rpc.dispatcher 
six.reraise(self.type_, self.value, self.tb)
2014-04-13 11:28:17.838 25176 TRACE oslo.messaging.rpc.dispatcher   File 
/usr/lib/python2.7/site-packages/nova/compute/manager.py, line 230, in 
decorated_function
2014-04-13 11:28:17.838 25176 TRACE oslo.messaging.rpc.dispatcher return 
function(self, context, *args, **kwargs)
2014-04-13 11:28:17.838 25176 TRACE oslo.messaging.rpc.dispatcher   File 
/usr/lib/python2.7/site-packages/nova/compute/manager.py, line 272, in 
decorated_function
2014-04-13 11:28:17.838 25176 TRACE oslo.messaging.rpc.dispatcher e, 
sys.exc_info())
2014-04-13 11:28:17.838 25176 TRACE oslo.messaging.rpc.dispatcher   File 
/usr/lib/python2.7/site-packages/nova/openstack/common/excutils.py, line 68, 
in __exit__
2014-04-13 11:28:17.838 25176 TRACE oslo.messaging.rpc.dispatcher 
six.reraise(self.type_, self.value, self.tb)
2014-04-13 11:28:17.838 25176 TRACE oslo.messaging.rpc.dispatcher   File 
/usr/lib/python2.7/site-packages/nova/compute/manager.py, line 259, in 
decorated_function
2014-04-13 11:28:17.838 25176 TRACE oslo.messaging.rpc.dispatcher return 
function(self, context, *args, **kwargs)
2014-04-13 11:28:17.838 25176 TRACE oslo.messaging.rpc.dispatcher   File 
/usr/lib/python2.7/site-packages/nova/compute/manager.py, line 3876, in 
attach_volume
2014-04-13 11:28:17.838 25176 TRACE oslo.messaging.rpc.dispatcher 
bdm.destroy(context)
2014-04-13 11:28:17.838 25176 TRACE oslo.messaging.rpc.dispatcher   File 
/usr/lib/python2.7/site-packages/nova/openstack/common/excutils.py, line 68, 
in __exit__
2014-04-13 11:28:17.838 25176 TRACE oslo.messaging.rpc.dispatcher 
six.reraise(self.type_, self.value, self.tb)
2014-04-13 11:28:17.838 25176 TRACE oslo.messaging.rpc.dispatcher   File 
/usr/lib/python2.7/site-packages/nova/compute/manager.py, line 3873, in 
attach_volume
2014-04-13 11:28:17.838 25176 TRACE oslo.messaging.rpc.dispatcher return 
self._attach_volume(context, instance, driver_bdm)
2014-04-13 11:28:17.838 25176 TRACE oslo.messaging.rpc.dispatcher   File 
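
For reference, a minimal reproduction sketch, assuming python-cinderclient and
python-novaclient (the credentials, endpoint and server UUID below are
placeholders, not values from this report):

from cinderclient.v1 import client as cinder_client
from novaclient.v1_1 import client as nova_client

cinder = cinder_client.Client('admin', 'secret', 'admin', 'http://controller:5000/v2.0')
nova = nova_client.Client('admin', 'secret', 'admin', 'http://controller:5000/v2.0')

# Create a volume and mark it read-only (wait for it to become 'available' first).
vol = cinder.volumes.create(size=1, display_name='ro-test')
cinder.volumes.update_readonly_flag(vol, True)

# The attach below is the step that fails: nova-compute requests the 'rw'
# attaching mode even though the volume only allows 'ro'.
nova.volumes.create_server_volume(server_id='<server-uuid>', volume_id=vol.id,
                                  device='/dev/vdb')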

[Yahoo-eng-team] [Bug 1297853] [NEW] failed to launch an instance from ISO image: TRACE nova.compute.manager MessagingTimeout: Timed out waiting for a reply to message ID

2014-03-26 Thread Yogev Rabl
Public bug reported:

Description of problem:
OpenStack is installed as an all-in-one (AIO) deployment (with nova-network) on
Fedora 20. The instance was launched with a flavor that has the following
parameters:
+--------------------------------------+-----------+-----------+------+-----------+------+-------+-------------+-----------+
| ID                                   | Name      | Memory_MB | Disk | Ephemeral | Swap | VCPUs | RXTX_Factor | Is_Public |
+--------------------------------------+-----------+-----------+------+-----------+------+-------+-------------+-----------+
| 8ddee6ea-c7b3-4482-97b3-f4c9ca6a2c19 | m1.medium | 4096      | 40   | 40        |      | 2     | 1.0         | True      |
+--------------------------------------+-----------+-----------+------+-----------+------+-------+-------------+-----------+
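
For illustration only, a rough sketch of how an equivalent flavor could be
defined with python-novaclient (credentials and endpoint are placeholders):

from novaclient.v1_1 import client as nova_client

nova = nova_client.Client('admin', 'secret', 'admin', 'http://controller:5000/v2.0')

# Same values as the flavor table above: 4096 MB RAM, 2 VCPUs, 40 GB root disk,
# 40 GB ephemeral disk, no swap.
flavor = nova.flavors.create(name='m1.medium', ram=4096, vcpus=2, disk=40,
                             ephemeral=40, swap=0, rxtx_factor=1.0, is_public=True)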

On the first try the instance status was stuck on 'spawning', even after a
timeout stopped the process.
On the second try the instance status changed from 'spawning' to 'Error'.

The nova compute log:
2014-03-26 15:33:58.548 15699 DEBUG nova.compute.manager [-] An error occurred 
_heal_instance_info_cache 
/usr/lib/python2.7/site-packages/nova/compute/manager.py:4569
2014-03-26 15:33:58.548 15699 TRACE nova.compute.manager Traceback (most recent 
call last):
2014-03-26 15:33:58.548 15699 TRACE nova.compute.manager   File 
/usr/lib/python2.7/site-packages/nova/compute/manager.py, line 4565, in 
_heal_instance_info_cache
2014-03-26 15:33:58.548 15699 TRACE nova.compute.manager 
self._get_instance_nw_info(context, instance, use_slave=True)
2014-03-26 15:33:58.548 15699 TRACE nova.compute.manager   File 
/usr/lib/python2.7/site-packages/nova/compute/manager.py, line 908, in 
_get_instance_nw_info
2014-03-26 15:33:58.548 15699 TRACE nova.compute.manager instance)
2014-03-26 15:33:58.548 15699 TRACE nova.compute.manager   File 
/usr/lib/python2.7/site-packages/nova/network/api.py, line 94, in wrapped
2014-03-26 15:33:58.548 15699 TRACE nova.compute.manager return func(self, 
context, *args, **kwargs)
2014-03-26 15:33:58.548 15699 TRACE nova.compute.manager   File 
/usr/lib/python2.7/site-packages/nova/network/api.py, line 389, in 
get_instance_nw_info
2014-03-26 15:33:58.548 15699 TRACE nova.compute.manager result = 
self._get_instance_nw_info(context, instance)
2014-03-26 15:33:58.548 15699 TRACE nova.compute.manager   File 
/usr/lib/python2.7/site-packages/nova/network/api.py, line 405, in 
_get_instance_nw_info
2014-03-26 15:33:58.548 15699 TRACE nova.compute.manager nw_info = 
self.network_rpcapi.get_instance_nw_info(context, **args)
2014-03-26 15:33:58.548 15699 TRACE nova.compute.manager   File 
/usr/lib/python2.7/site-packages/nova/network/rpcapi.py, line 222, in 
get_instance_nw_info
2014-03-26 15:33:58.548 15699 TRACE nova.compute.manager host=host, 
project_id=project_id)
2014-03-26 15:33:58.548 15699 TRACE nova.compute.manager   File 
/usr/lib/python2.7/site-packages/oslo/messaging/rpc/client.py, line 150, in 
call
2014-03-26 15:33:58.548 15699 TRACE nova.compute.manager 
wait_for_reply=True, timeout=timeout)
2014-03-26 15:33:58.548 15699 TRACE nova.compute.manager   File 
/usr/lib/python2.7/site-packages/oslo/messaging/transport.py, line 90, in 
_send
2014-03-26 15:33:58.548 15699 TRACE nova.compute.manager timeout=timeout)
2014-03-26 15:33:58.548 15699 TRACE nova.compute.manager   File 
/usr/lib/python2.7/site-packages/oslo/messaging/_drivers/amqpdriver.py, line 
409, in send
2014-03-26 15:33:58.548 15699 TRACE nova.compute.manager return 
self._send(target, ctxt, message, wait_for_reply, timeout)
2014-03-26 15:33:58.548 15699 TRACE nova.compute.manager   File 
/usr/lib/python2.7/site-packages/oslo/messaging/_drivers/amqpdriver.py, line 
400, in _send
2014-03-26 15:33:58.548 15699 TRACE nova.compute.manager result = 
self._waiter.wait(msg_id, timeout)
2014-03-26 15:33:58.548 15699 TRACE nova.compute.manager   File 
/usr/lib/python2.7/site-packages/oslo/messaging/_drivers/amqpdriver.py, line 
280, in wait
2014-03-26 15:33:58.548 15699 TRACE nova.compute.manager reply, ending, 
trylock = self._poll_queue(msg_id, timeout)
2014-03-26 15:33:58.548 15699 TRACE nova.compute.manager   File 
/usr/lib/python2.7/site-packages/oslo/messaging/_drivers/amqpdriver.py, line 
220, in _poll_queue
2014-03-26 15:33:58.548 15699 TRACE nova.compute.manager message = 
self.waiters.get(msg_id, timeout)
2014-03-26 15:33:58.548 15699 TRACE nova.compute.manager   File 
/usr/lib/python2.7/site-packages/oslo/messaging/_drivers/amqpdriver.py, line 
126, in get
2014-03-26 15:33:58.548 15699 TRACE nova.compute.manager 'to message ID %s' 
% msg_id)
2014-03-26 15:33:58.548 15699 TRACE nova.compute.manager MessagingTimeout: 
Timed out waiting for a reply to message ID 2b184725ad034655bf5e55c59a643758
2014-03-26 15:33:58.548 15699 TRACE nova.compute.manager 

Version-Release number of selected component (if applicable):
python-novaclient-2.16.0-2.fc21.noarch

[Yahoo-eng-team] [Bug 1294013] [NEW] failed to create a new instance from a volume snapshot when volume usage reached quota limit

2014-03-18 Thread Yogev Rabl
Public bug reported:

2014-03-18 11:16:35.228 2018 TRACE nova.openstack.common.rpc.amqp raise 
exception.InvalidBDM()
2014-03-18 11:16:35.228 2018 TRACE nova.openstack.common.rpc.amqp InvalidBDM: 
Block Device Mapping is Invalid.
2014-03-18 11:16:35.228 2018 TRACE nova.openstack.common.rpc.amqp

Cinder's volume log:

2014-03-18 11:16:31.702 11891 ERROR cinder.api.middleware.fault 
[req-615ab455-ba56-4f0a-b51d-3dfc869c6e87 824aa0d15577454494ef482560b231e2 
be523c490cd2410a931e4700838ffcd4] Caught error: Maximum number of volumes 
allowed (10) exceeded
2014-03-18 11:16:31.702 11891 TRACE cinder.api.middleware.fault Traceback (most 
recent call last):
2014-03-18 11:16:31.702 11891 TRACE cinder.api.middleware.fault   File 
/usr/lib/python2.6/site-packages/cinder/api/middleware/fault.py, line 77, in 
__call__
2014-03-18 11:16:31.702 11891 TRACE cinder.api.middleware.fault return 
req.get_response(self.application)
2014-03-18 11:16:31.702 11891 TRACE cinder.api.middleware.fault   File 
/usr/lib/python2.6/site-packages/WebOb-1.2.3-py2.6.egg/webob/request.py, line 
1296, in send
2014-03-18 11:16:31.702 11891 TRACE cinder.api.middleware.fault 
application, catch_exc_info=False)
2014-03-18 11:16:31.702 11891 TRACE cinder.api.middleware.fault   File 
/usr/lib/python2.6/site-packages/WebOb-1.2.3-py2.6.egg/webob/request.py, line 
1260, in call_application
2014-03-18 11:16:31.702 11891 TRACE cinder.api.middleware.fault app_iter = 
application(self.environ, start_response)
2014-03-18 11:16:31.702 11891 TRACE cinder.api.middleware.fault   File 
/usr/lib/python2.6/site-packages/WebOb-1.2.3-py2.6.egg/webob/dec.py, line 
144, in __call__
2014-03-18 11:16:31.702 11891 TRACE cinder.api.middleware.fault return 
resp(environ, start_response)
2014-03-18 11:16:31.702 11891 TRACE cinder.api.middleware.fault   File 
/usr/lib/python2.6/site-packages/keystoneclient/middleware/auth_token.py, 
line 598, in __call__
2014-03-18 11:16:31.702 11891 TRACE cinder.api.middleware.fault return 
self.app(env, start_response)
2014-03-18 11:16:31.702 11891 TRACE cinder.api.middleware.fault   File 
/usr/lib/python2.6/site-packages/WebOb-1.2.3-py2.6.egg/webob/dec.py, line 
144, in __call__
2014-03-18 11:16:31.702 11891 TRACE cinder.api.middleware.fault return 
resp(environ, start_response)
2014-03-18 11:16:31.702 11891 TRACE cinder.api.middleware.fault   File 
/usr/lib/python2.6/site-packages/WebOb-1.2.3-py2.6.egg/webob/dec.py, line 
144, in __call__
2014-03-18 11:16:31.702 11891 TRACE cinder.api.middleware.fault return 
resp(environ, start_response)
2014-03-18 11:16:31.702 11891 TRACE cinder.api.middleware.fault   File 
/usr/lib/python2.6/site-packages/Routes-1.12.3-py2.6.egg/routes/middleware.py,
 line 131, in __call__
2014-03-18 11:16:31.702 11891 TRACE cinder.api.middleware.fault response = 
self.app(environ, start_response)
2014-03-18 11:16:31.702 11891 TRACE cinder.api.middleware.fault   File 
/usr/lib/python2.6/site-packages/WebOb-1.2.3-py2.6.egg/webob/dec.py, line 
144, in __call__
2014-03-18 11:16:31.702 11891 TRACE cinder.api.middleware.fault return 
resp(environ, start_response)
2014-03-18 11:16:31.702 11891 TRACE cinder.api.middleware.fault   File 
/usr/lib/python2.6/site-packages/WebOb-1.2.3-py2.6.egg/webob/dec.py, line 
130, in __call__
2014-03-18 11:16:31.702 11891 TRACE cinder.api.middleware.fault resp = 
self.call_func(req, *args, **self.kwargs)
2014-03-18 11:16:31.702 11891 TRACE cinder.api.middleware.fault   File 
/usr/lib/python2.6/site-packages/WebOb-1.2.3-py2.6.egg/webob/dec.py, line 
195, in call_func
2014-03-18 11:16:31.702 11891 TRACE cinder.api.middleware.fault return 
self.func(req, *args, **kwargs)
2014-03-18 11:16:31.702 11891 TRACE cinder.api.middleware.fault   File 
/usr/lib/python2.6/site-packages/cinder/api/openstack/wsgi.py, line 898, in 
__call__
2014-03-18 11:16:31.702 11891 TRACE cinder.api.middleware.fault 
content_type, body, accept)
2014-03-18 11:16:31.702 11891 TRACE cinder.api.middleware.fault   File 
/usr/lib/python2.6/site-packages/cinder/api/openstack/wsgi.py, line 946, in 
_process_stack
2014-03-18 11:16:31.702 11891 TRACE cinder.api.middleware.fault 
action_result = self.dispatch(meth, request, action_args)
2014-03-18 11:16:31.702 11891 TRACE cinder.api.middleware.fault   File 
/usr/lib/python2.6/site-packages/cinder/api/openstack/wsgi.py, line 1022, in 
dispatch
2014-03-18 11:16:31.702 11891 TRACE cinder.api.middleware.fault return 
method(req=request, **action_args)
2014-03-18 11:16:31.702 11891 TRACE cinder.api.middleware.fault   File 
/usr/lib/python2.6/site-packages/cinder/api/v1/volumes.py, line 419, in create
2014-03-18 11:16:31.702 11891 TRACE cinder.api.middleware.fault **kwargs)
2014-03-18 11:16:31.702 11891 TRACE cinder.api.middleware.fault   File 
/usr/lib/python2.6/site-packages/cinder/volume/api.py, line 171, in create
2014-03-18 11:16:31.702 11891 TRACE cinder.api.middleware.fault 
flow.run(context)
2014-03-18 
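
For reference, a rough workaround sketch, assuming python-cinderclient (tenant
ID, credentials and endpoint are placeholders), showing how the volume quota
that the log above reports as exceeded could be inspected and raised:

from cinderclient.v1 import client as cinder_client

cinder = cinder_client.Client('admin', 'secret', 'admin', 'http://controller:5000/v2.0')
tenant_id = '<tenant-uuid>'

# The log above reports "Maximum number of volumes allowed (10) exceeded".
print(cinder.quotas.get(tenant_id).volumes)   # current limit (10 in this report)
cinder.quotas.update(tenant_id, volumes=20)   # raise the limit for the tenant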

[Yahoo-eng-team] [Bug 1291471] [NEW] can't boot a volume from a volume that has been created from a snapshot

2014-03-12 Thread Yogev Rabl
Public bug reported:

Description of problem:
A volume created from a snapshot of another volume fails to boot an instance,
with the following error:

2014-03-12 18:03:39.790 9573 ERROR nova.compute.manager 
[req-f67dabd7-f013-483a-a386-d5a511b86be7 1654b1a85ba647df87fc9258962949fb 
87761b8cc7d34be29063ad24073b2172] [instance: 
9f1a00b6-4b88-431e-a163-feaf06e0bfe3] Instance failed block d
evice setup
2014-03-12 18:03:39.790 9573 TRACE nova.compute.manager [instance: 
9f1a00b6-4b88-431e-a163-feaf06e0bfe3] Traceback (most recent call last):
2014-03-12 18:03:39.790 9573 TRACE nova.compute.manager [instance: 
9f1a00b6-4b88-431e-a163-feaf06e0bfe3]   File 
/usr/lib/python2.6/site-packages/nova/compute/manager.py, line 1387, in 
_prep_block_device
2014-03-12 18:03:39.790 9573 TRACE nova.compute.manager [instance: 
9f1a00b6-4b88-431e-a163-feaf06e0bfe3] self._await_block_device_map_created) 
+
2014-03-12 18:03:39.790 9573 TRACE nova.compute.manager [instance: 
9f1a00b6-4b88-431e-a163-feaf06e0bfe3]   File 
/usr/lib/python2.6/site-packages/nova/virt/block_device.py, line 283, in 
attach_block_devices
2014-03-12 18:03:39.790 9573 TRACE nova.compute.manager [instance: 
9f1a00b6-4b88-431e-a163-feaf06e0bfe3] block_device_mapping)
2014-03-12 18:03:39.790 9573 TRACE nova.compute.manager [instance: 
9f1a00b6-4b88-431e-a163-feaf06e0bfe3]   File 
/usr/lib/python2.6/site-packages/nova/virt/block_device.py, line 170, in 
attach
2014-03-12 18:03:39.790 9573 TRACE nova.compute.manager [instance: 
9f1a00b6-4b88-431e-a163-feaf06e0bfe3] connector)
2014-03-12 18:03:39.790 9573 TRACE nova.compute.manager [instance: 
9f1a00b6-4b88-431e-a163-feaf06e0bfe3]   File 
/usr/lib/python2.6/site-packages/nova/volume/cinder.py, line 176, in wrapper
2014-03-12 18:03:39.790 9573 TRACE nova.compute.manager [instance: 
9f1a00b6-4b88-431e-a163-feaf06e0bfe3] res = method(self, ctx, volume_id, 
*args, **kwargs)
2014-03-12 18:03:39.790 9573 TRACE nova.compute.manager [instance: 
9f1a00b6-4b88-431e-a163-feaf06e0bfe3]   File 
/usr/lib/python2.6/site-packages/nova/volume/cinder.py, line 274, in 
initialize_connection
2014-03-12 18:03:39.790 9573 TRACE nova.compute.manager [instance: 
9f1a00b6-4b88-431e-a163-feaf06e0bfe3] connector)
2014-03-12 18:03:39.790 9573 TRACE nova.compute.manager [instance: 
9f1a00b6-4b88-431e-a163-feaf06e0bfe3]   File 
/usr/lib/python2.6/site-packages/cinderclient/v1/volumes.py, line 321, in 
initialize_connection
2014-03-12 18:03:39.790 9573 TRACE nova.compute.manager [instance: 
9f1a00b6-4b88-431e-a163-feaf06e0bfe3] {'connector': 
connector})[1]['connection_info']
2014-03-12 18:03:39.790 9573 TRACE nova.compute.manager [instance: 
9f1a00b6-4b88-431e-a163-feaf06e0bfe3]   File 
/usr/lib/python2.6/site-packages/cinderclient/v1/volumes.py, line 250, in 
_action
2014-03-12 18:03:39.790 9573 TRACE nova.compute.manager [instance: 
9f1a00b6-4b88-431e-a163-feaf06e0bfe3] return self.api.client.post(url, 
body=body)
2014-03-12 18:03:39.790 9573 TRACE nova.compute.manager [instance: 
9f1a00b6-4b88-431e-a163-feaf06e0bfe3]   File 
/usr/lib/python2.6/site-packages/cinderclient/client.py, line 210, in post
2014-03-12 18:03:39.790 9573 TRACE nova.compute.manager [instance: 
9f1a00b6-4b88-431e-a163-feaf06e0bfe3] return self._cs_request(url, 'POST', 
**kwargs)
2014-03-12 18:03:39.790 9573 TRACE nova.compute.manager [instance: 
9f1a00b6-4b88-431e-a163-feaf06e0bfe3]   File 
/usr/lib/python2.6/site-packages/cinderclient/client.py, line 174, in 
_cs_request
2014-03-12 18:03:39.790 9573 TRACE nova.compute.manager [instance: 
9f1a00b6-4b88-431e-a163-feaf06e0bfe3] **kwargs)
2014-03-12 18:03:39.790 9573 TRACE nova.compute.manager [instance: 
9f1a00b6-4b88-431e-a163-feaf06e0bfe3]   File 
/usr/lib/python2.6/site-packages/cinderclient/client.py, line 157, in request
2014-03-12 18:03:39.790 9573 TRACE nova.compute.manager [instance: 
9f1a00b6-4b88-431e-a163-feaf06e0bfe3] raise exceptions.from_response(resp, 
body)
2014-03-12 18:03:39.790 9573 TRACE nova.compute.manager [instance: 
9f1a00b6-4b88-431e-a163-feaf06e0bfe3] ClientException: The server has either 
erred or is incapable of performing the requested operation. (HTTP 500) 
(Request-ID: req-e990ac94-97d9-41f3-b1e1-ca63e7d1d2bc)

2014-03-12 18:03:40.289 9573 ERROR nova.openstack.common.rpc.amqp 
[req-f67dabd7-f013-483a-a386-d5a511b86be7 1654b1a85ba647df87fc9258962949fb 
87761b8cc7d34be29063ad24073b2172] Exception during message handling
2014-03-12 18:03:40.289 9573 TRACE nova.openstack.common.rpc.amqp Traceback 
(most recent call last):
2014-03-12 18:03:40.289 9573 TRACE nova.openstack.common.rpc.amqp   File 
/usr/lib/python2.6/site-packages/nova/openstack/common/rpc/amqp.py, line 461, 
in _process_data
2014-03-12 18:03:40.289 9573 TRACE nova.openstack.common.rpc.amqp **args)
2014-03-12 18:03:40.289 9573 TRACE nova.openstack.common.rpc.amqp   File 
/usr/lib/python2.6/site-packages/nova/openstack/common/rpc/dispatcher.py, 
line 172, in dispatch
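
For reference, a rough sketch of the reproduction path, assuming
python-cinderclient and python-novaclient (UUIDs, flavor and credentials are
placeholders):

from cinderclient.v1 import client as cinder_client
from novaclient.v1_1 import client as nova_client

cinder = cinder_client.Client('admin', 'secret', 'admin', 'http://controller:5000/v2.0')
nova = nova_client.Client('admin', 'secret', 'admin', 'http://controller:5000/v2.0')

# Snapshot an existing volume, then create a new volume from that snapshot.
snap = cinder.volume_snapshots.create('<source-volume-uuid>')
vol = cinder.volumes.create(size=10, snapshot_id=snap.id)

# Booting from the new volume is the step that fails with the HTTP 500 above
# ('vda': '<volume-id>:::0' maps the volume as the boot disk).
nova.servers.create(name='from-snapshot-volume', image=None, flavor='<flavor-id>',
                    block_device_mapping={'vda': '%s:::0' % vol.id})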

[Yahoo-eng-team] [Bug 1288230] [NEW] A project shouldn't be deleted when there are instances running

2014-03-05 Thread Yogev Rabl
Public bug reported:

Description of problem:
Currently, a project that has one or more instances can be deleted in Horizon,
without even a warning message, by a user with administrative permissions. An
active project (meaning a project that has running instances) should be
protected from deletion. If the administrator wants to delete it, the instances
should be deleted first.

Version-Release number of selected component (if applicable):
openstack-nova-cert-2013.2.2-2.el6ost.noarch
python-novaclient-2.15.0-2.el6ost.noarch
openstack-nova-common-2013.2.2-2.el6ost.noarch
openstack-nova-api-2013.2.2-2.el6ost.noarch
openstack-nova-compute-2013.2.2-2.el6ost.noarch
openstack-nova-conductor-2013.2.2-2.el6ost.noarch
openstack-nova-novncproxy-2013.2.2-2.el6ost.noarch
openstack-nova-scheduler-2013.2.2-2.el6ost.noarch
python-nova-2013.2.2-2.el6ost.noarch
openstack-nova-console-2013.2.2-2.el6ost.noarch
openstack-nova-network-2013.2.2-2.el6ost.noarch

How reproducible:
100%

Steps to Reproduce:
1. Create a new project.
2. Launch one (or more) instances. 
3. Try to delete the project with the admin.

Actual results:
The instances are still running, but they are accessible only through the
Admin - Instances tab.

Expected results:
The administrator shouldn't be able to delete the project as long as there are 
instances running.
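
For illustration, a rough sketch of the kind of guard that could be applied
before deleting a project, assuming python-novaclient and python-keystoneclient
(tenant ID and credentials are placeholders; this is not existing behaviour):

from novaclient.v1_1 import client as nova_client
from keystoneclient.v2_0 import client as keystone_client

nova = nova_client.Client('admin', 'secret', 'admin', 'http://controller:5000/v2.0')
keystone = keystone_client.Client(username='admin', password='secret',
                                  tenant_name='admin',
                                  auth_url='http://controller:5000/v2.0')

tenant_id = '<tenant-uuid>'

# Refuse to delete the project while it still owns instances.
servers = nova.servers.list(search_opts={'all_tenants': 1, 'tenant_id': tenant_id})
if servers:
    raise RuntimeError('project still has %d instance(s); refusing to delete'
                       % len(servers))
keystone.tenants.delete(tenant_id)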

** Affects: nova
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1288230

Title:
  A project shouldn't be deleted when there are instances running

Status in OpenStack Compute (Nova):
  New

Bug description:
  Description of problem:
  Currently, a project that has one or more instances can be deleted in Horizon,
  without even a warning message, by a user with administrative permissions. An
  active project (meaning a project that has running instances) should be
  protected from deletion. If the administrator wants to delete it, the instances
  should be deleted first.

  Version-Release number of selected component (if applicable):
  openstack-nova-cert-2013.2.2-2.el6ost.noarch
  python-novaclient-2.15.0-2.el6ost.noarch
  openstack-nova-common-2013.2.2-2.el6ost.noarch
  openstack-nova-api-2013.2.2-2.el6ost.noarch
  openstack-nova-compute-2013.2.2-2.el6ost.noarch
  openstack-nova-conductor-2013.2.2-2.el6ost.noarch
  openstack-nova-novncproxy-2013.2.2-2.el6ost.noarch
  openstack-nova-scheduler-2013.2.2-2.el6ost.noarch
  python-nova-2013.2.2-2.el6ost.noarch
  openstack-nova-console-2013.2.2-2.el6ost.noarch
  openstack-nova-network-2013.2.2-2.el6ost.noarch

  How reproducible:
  100%

  Steps to Reproduce:
  1. Create a new project.
  2. Launch one (or more) instances. 
  3. Try to delete the project with the admin.

  Actual results:
  The instances are still running, but they are accessible only through the
  Admin - Instances tab.

  Expected results:
  The administrator shouldn't be able to delete the project as long as there 
are instances running.

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1288230/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1287622] [NEW] fail to launch an instance from a volume snapshot with Gluster libgfapi

2014-03-04 Thread Yogev Rabl
Public bug reported:

Description of problem:
While testing around Bug 1020979, I installed RHEL 6.5 on a volume (size: 50 GB),
took a snapshot of the volume, and tried to launch an instance from it.

The topology of RHOS is:
- Cloud controller + compute node
- Stand-alone compute node
- Stand-alone Cinder with a GlusterFS back end
- Stand-alone Glance

The compute logs:

2014-03-03 17:54:49.322 9544 INFO nova.virt.libvirt.firewall 
[req-c986c6d9-3c36-4bfe-943e-a80d62d15ae1 None None] [instance: 
2ec5753e-70bc-428f-8a2c-15cd56a56400] Ensuring static filters
2014-03-03 17:54:50.554 9544 ERROR nova.openstack.common.threadgroup [-] End of 
file while reading data: Input/output error
2014-03-03 17:54:50.554 9544 TRACE nova.openstack.common.threadgroup Traceback 
(most recent call last):
2014-03-03 17:54:50.554 9544 TRACE nova.openstack.common.threadgroup   File 
/usr/lib/python2.6/site-packages/nova/openstack/common/threadgroup.py, line 
117, in wait
2014-03-03 17:54:50.554 9544 TRACE nova.openstack.common.threadgroup 
x.wait()
2014-03-03 17:54:50.554 9544 TRACE nova.openstack.common.threadgroup   File 
/usr/lib/python2.6/site-packages/nova/openstack/common/threadgroup.py, line 
49, in wait
2014-03-03 17:54:50.554 9544 TRACE nova.openstack.common.threadgroup return 
self.thread.wait()
2014-03-03 17:54:50.554 9544 TRACE nova.openstack.common.threadgroup   File 
/usr/lib/python2.6/site-packages/eventlet/greenthread.py, line 168, in wait
2014-03-03 17:54:50.554 9544 TRACE nova.openstack.common.threadgroup return 
self._exit_event.wait()
2014-03-03 17:54:50.554 9544 TRACE nova.openstack.common.threadgroup   File 
/usr/lib/python2.6/site-packages/eventlet/event.py, line 116, in wait
2014-03-03 17:54:50.554 9544 TRACE nova.openstack.common.threadgroup return 
hubs.get_hub().switch()
2014-03-03 17:54:50.554 9544 TRACE nova.openstack.common.threadgroup   File 
/usr/lib/python2.6/site-packages/eventlet/hubs/hub.py, line 187, in switch
2014-03-03 17:54:50.554 9544 TRACE nova.openstack.common.threadgroup return 
self.greenlet.switch()
2014-03-03 17:54:50.554 9544 TRACE nova.openstack.common.threadgroup   File 
/usr/lib/python2.6/site-packages/eventlet/greenthread.py, line 194, in main
2014-03-03 17:54:50.554 9544 TRACE nova.openstack.common.threadgroup result 
= function(*args, **kwargs)
2014-03-03 17:54:50.554 9544 TRACE nova.openstack.common.threadgroup   File 
/usr/lib/python2.6/site-packages/nova/openstack/common/service.py, line 65, 
in run_service
2014-03-03 17:54:50.554 9544 TRACE nova.openstack.common.threadgroup 
service.start()
2014-03-03 17:54:50.554 9544 TRACE nova.openstack.common.threadgroup   File 
/usr/lib/python2.6/site-packages/nova/service.py, line 164, in start
2014-03-03 17:54:50.554 9544 TRACE nova.openstack.common.threadgroup 
self.manager.pre_start_hook()
2014-03-03 17:54:50.554 9544 TRACE nova.openstack.common.threadgroup   File 
/usr/lib/python2.6/site-packages/nova/compute/manager.py, line 802, in 
pre_start_hook
2014-03-03 17:54:50.554 9544 TRACE nova.openstack.common.threadgroup 
self.update_available_resource(nova.context.get_admin_context())
2014-03-03 17:54:50.554 9544 TRACE nova.openstack.common.threadgroup   File 
/usr/lib/python2.6/site-packages/nova/compute/manager.py, line 4886, in 
update_available_resource
2014-03-03 17:54:50.554 9544 TRACE nova.openstack.common.threadgroup 
nodenames = set(self.driver.get_available_nodes())
2014-03-03 17:54:50.554 9544 TRACE nova.openstack.common.threadgroup   File 
/usr/lib/python2.6/site-packages/nova/virt/driver.py, line 963, in 
get_available_nodes
2014-03-03 17:54:50.554 9544 TRACE nova.openstack.common.threadgroup stats 
= self.get_host_stats(refresh=refresh)
2014-03-03 17:54:50.554 9544 TRACE nova.openstack.common.threadgroup   File 
/usr/lib/python2.6/site-packages/nova/virt/libvirt/driver.py, line 4432, in 
get_host_stats
2014-03-03 17:54:50.554 9544 TRACE nova.openstack.common.threadgroup return 
self.host_state.get_host_stats(refresh=refresh)
2014-03-03 17:54:50.554 9544 TRACE nova.openstack.common.threadgroup   File 
/usr/lib/python2.6/site-packages/nova/virt/libvirt/driver.py, line 386, in 
host_state
2014-03-03 17:54:50.554 9544 TRACE nova.openstack.common.threadgroup 
self._host_state = HostState(self)
2014-03-03 17:54:50.554 9544 TRACE nova.openstack.common.threadgroup   File 
/usr/lib/python2.6/site-packages/nova/virt/libvirt/driver.py, line 4832, in 
__init__
2014-03-03 17:54:50.554 9544 TRACE nova.openstack.common.threadgroup 
self.update_status()
2014-03-03 17:54:50.554 9544 TRACE nova.openstack.common.threadgroup   File 
/usr/lib/python2.6/site-packages/nova/virt/libvirt/driver.py, line 4886, in 
update_status
2014-03-03 17:54:50.554 9544 TRACE nova.openstack.common.threadgroup 
self.driver.get_pci_passthrough_devices()
2014-03-03 17:54:50.554 9544 TRACE nova.openstack.common.threadgroup   File 
/usr/lib/python2.6/site-packages/nova/virt/libvirt/driver.py, line 
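
For reference, a rough sketch of the launch-from-snapshot attempt described
above, assuming python-novaclient (snapshot UUID, flavor and credentials are
placeholders):

from novaclient.v1_1 import client as nova_client

nova = nova_client.Client('admin', 'secret', 'admin', 'http://controller:5000/v2.0')

# '<snapshot-uuid>:snap::0' asks Nova to build a new volume from the snapshot,
# attach it as vda, and keep it when the instance terminates.
nova.servers.create(name='from-gluster-snapshot', image=None, flavor='<flavor-id>',
                    block_device_mapping={'vda': '<snapshot-uuid>:snap::0'})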

[Yahoo-eng-team] [Bug 1287047] [NEW] instance snapshot creation failed: libvirtError: block copy still active: domain has active block copy job

2014-03-03 Thread Yogev Rabl
Public bug reported:

Description of problem:
A snapshot of an instance is created with the status 'deleted'.
The instance was launched from a RHEL 6.5 ISO image with the following flavor
configuration:
Flavor Name: m1.small
VCPUs: 1
RAM: 2048MB
Root Disk: 20 GB
Ephemeral Disk: 40 GB
Swap Disk: 0MB

The system topology is:
1. Cloud controller with the Nova services installed (with nova-network).
2. Glance stand alone server.
3. Cinder & Swift installed on the same server.
4. Nova compute stand alone.

Version-Release number of selected component (if applicable):
openstack-nova-conductor-2013.2.2-2.el6ost.noarch
openstack-nova-scheduler-2013.2.2-2.el6ost.noarch
python-django-openstack-auth-1.1.2-2.el6ost.noarch
openstack-dashboard-2013.2.2-1.el6ost.noarch
openstack-selinux-0.1.3-2.el6ost.noarch
openstack-packstack-2013.2.1-0.25.dev987.el6ost.noarch
openstack-keystone-2013.2.2-1.el6ost.noarch
openstack-nova-common-2013.2.2-2.el6ost.noarch
openstack-nova-api-2013.2.2-2.el6ost.noarch
openstack-nova-console-2013.2.2-2.el6ost.noarch
openstack-nova-network-2013.2.2-2.el6ost.noarch
openstack-nova-cert-2013.2.2-2.el6ost.noarch
openstack-dashboard-theme-2013.2.2-1.el6ost.noarch
redhat-access-plugin-openstack-4.0.0-0.el6ost.noarch
openstack-nova-compute-2013.2.2-2.el6ost.noarch
openstack-nova-novncproxy-2013.2.2-2.el6ost.noarch

How reproducible:
100%

Steps to Reproduce:
1. Launch an instance with an ISO image (with the same flavor configuration as 
above) 
2. Install the OS of the ISO on the ephemeral disk.
3. After the installation is done and the OS is up, take a snapshot of the
instance.

Actual results:
The snapshot is created in 'deleted' status.

Expected results:
The snapshot should be available.

Logs are attached.
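
For reference, a minimal sketch of the snapshot step (step 3 above), assuming
python-novaclient (server UUID, image name and credentials are placeholders):

from novaclient.v1_1 import client as nova_client

nova = nova_client.Client('admin', 'secret', 'admin', 'http://controller:5000/v2.0')

# Snapshot the running instance; the resulting image should end up 'ACTIVE',
# but in this report it shows up as deleted.
image_id = nova.servers.create_image('<server-uuid>', 'rhel65-snapshot')
print(nova.images.get(image_id).status)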

** Affects: nova
 Importance: Undecided
 Status: New

** Attachment added: failed_instance_snapshot.log
   
https://bugs.launchpad.net/bugs/1287047/+attachment/4004901/+files/failed_instance_snapshot.log

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1287047

Title:
  instance snapshot creation failed: libvirtError: block copy still
  active: domain has active block copy job

Status in OpenStack Compute (Nova):
  New

Bug description:
  Description of problem:
  A snapshot of an instance is created with the status 'deleted'.
  The instance was launched from a RHEL 6.5 ISO image with the following flavor
  configuration:
  Flavor Name: m1.small
  VCPUs: 1
  RAM: 2048MB
  Root Disk: 20 GB
  Ephemeral Disk: 40 GB
  Swap Disk: 0MB

  The system topology is:
  1. Cloud controller with the Nova services installed (with nova-network).
  2. Glance stand alone server.
  3. Cinder & Swift installed on the same server.
  4. Nova compute stand alone.

  Version-Release number of selected component (if applicable):
  openstack-nova-conductor-2013.2.2-2.el6ost.noarch
  openstack-nova-scheduler-2013.2.2-2.el6ost.noarch
  python-django-openstack-auth-1.1.2-2.el6ost.noarch
  openstack-dashboard-2013.2.2-1.el6ost.noarch
  openstack-selinux-0.1.3-2.el6ost.noarch
  openstack-packstack-2013.2.1-0.25.dev987.el6ost.noarch
  openstack-keystone-2013.2.2-1.el6ost.noarch
  openstack-nova-common-2013.2.2-2.el6ost.noarch
  openstack-nova-api-2013.2.2-2.el6ost.noarch
  openstack-nova-console-2013.2.2-2.el6ost.noarch
  openstack-nova-network-2013.2.2-2.el6ost.noarch
  openstack-nova-cert-2013.2.2-2.el6ost.noarch
  openstack-dashboard-theme-2013.2.2-1.el6ost.noarch
  redhat-access-plugin-openstack-4.0.0-0.el6ost.noarch
  openstack-nova-compute-2013.2.2-2.el6ost.noarch
  openstack-nova-novncproxy-2013.2.2-2.el6ost.noarch

  How reproducible:
  100%

  Steps to Reproduce:
  1. Launch an instance with an ISO image (with the same flavor configuration 
as above) 
  2. Install the OS of the ISO on the ephemeral disk.
  3. After the installation is done and the OS is up, take a snapshot of the
  instance.

  Actual results:
  The snapshot is created in 'deleted' status.

  Expected results:
  The snapshot should be available.

  Logs are attached.

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1287047/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1278353] [NEW] a failed modification of project quota shows an error & success at the same time

2014-02-10 Thread Yogev Rabl
Public bug reported:

The admin user tries to edit the quota settings of a project, changing the
number of instances from 10 to 4 while there are 5 instances active. The
dashboard shows an error balloon saying that it was unable to change the
quotas, and a success balloon beneath it saying the project was successfully
modified.

If there was an error, the success balloon should not appear.
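
For illustration, a rough sketch of the kind of quota update Horizon issues
underneath, assuming python-novaclient (tenant ID and credentials are
placeholders); lowering the limit below the current usage is what produces the
error reported above:

from novaclient.v1_1 import client as nova_client

nova = nova_client.Client('admin', 'secret', 'admin', 'http://controller:5000/v2.0')

# 5 instances are already running, so asking for a limit of 4 fails on the API
# side; the dashboard should then show only the error balloon.
nova.quotas.update('<tenant-uuid>', instances=4)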

** Affects: horizon
 Importance: Undecided
 Status: New


** Tags: horizon

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1278353

Title:
  a failed modification of project quota shows an error & success at the
  same time

Status in OpenStack Dashboard (Horizon):
  New

Bug description:
  The admin user tries to edit the quota settings of a project, changing the
  number of instances from 10 to 4 while there are 5 instances active. The
  dashboard shows an error balloon saying that it was unable to change the
  quotas, and a success balloon beneath it saying the project was successfully
  modified.

  If there was an error, the success balloon should not appear.

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1278353/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp