[Yahoo-eng-team] [Bug 1329333] Re: BadRequest: Invalid volume: Volume status must be available or error

2016-09-24 Thread Sean McGinnis
This appears to have since been fixed indirectly.

** Changed in: cinder
   Status: New => Invalid

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1329333

Title:
  BadRequest: Invalid volume: Volume status must be available or error

Status in Cinder:
  Invalid
Status in OpenStack Compute (nova):
  Invalid
Status in tempest:
  Invalid

Bug description:
  traceback from:
  
http://logs.openstack.org/40/99540/2/check/check-grenade-dsvm/85c496c/console.html

  
  2014-06-12 13:28:15.833 | tearDownClass (tempest.scenario.test_volume_boot_pattern.TestVolumeBootPattern)
  2014-06-12 13:28:15.833 | ---
  2014-06-12 13:28:15.833 | 
  2014-06-12 13:28:15.833 | Captured traceback:
  2014-06-12 13:28:15.833 | ~~~
  2014-06-12 13:28:15.833 | Traceback (most recent call last):
  2014-06-12 13:28:15.833 |   File "tempest/scenario/manager.py", line 157, in tearDownClass
  2014-06-12 13:28:15.833 |     cls.cleanup_resource(thing, cls.__name__)
  2014-06-12 13:28:15.834 |   File "tempest/scenario/manager.py", line 119, in cleanup_resource
  2014-06-12 13:28:15.834 |     resource.delete()
  2014-06-12 13:28:15.834 |   File "/opt/stack/new/python-cinderclient/cinderclient/v1/volumes.py", line 35, in delete
  2014-06-12 13:28:15.834 |     self.manager.delete(self)
  2014-06-12 13:28:15.834 |   File "/opt/stack/new/python-cinderclient/cinderclient/v1/volumes.py", line 228, in delete
  2014-06-12 13:28:15.834 |     self._delete("/volumes/%s" % base.getid(volume))
  2014-06-12 13:28:15.834 |   File "/opt/stack/new/python-cinderclient/cinderclient/base.py", line 162, in _delete
  2014-06-12 13:28:15.834 |     resp, body = self.api.client.delete(url)
  2014-06-12 13:28:15.834 |   File "/opt/stack/new/python-cinderclient/cinderclient/client.py", line 229, in delete
  2014-06-12 13:28:15.834 |     return self._cs_request(url, 'DELETE', **kwargs)
  2014-06-12 13:28:15.834 |   File "/opt/stack/new/python-cinderclient/cinderclient/client.py", line 187, in _cs_request
  2014-06-12 13:28:15.835 |     **kwargs)
  2014-06-12 13:28:15.835 |   File "/opt/stack/new/python-cinderclient/cinderclient/client.py", line 170, in request
  2014-06-12 13:28:15.835 |     raise exceptions.from_response(resp, body)
  2014-06-12 13:28:15.835 | BadRequest: Invalid volume: Volume status must be available or error, but current status is: in-use (HTTP 400) (Request-ID: req-9337623a-e2b7-48a3-97ab-f7a4845f2cd8)
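
  The failure is an ordering problem in the test's cleanup: resource.delete() is issued while the boot volume is still attached, and Cinder only deletes volumes whose status is 'available' or 'error'. Below is a minimal sketch of the ordering the teardown needs, using the same python-cinderclient volumes manager that appears in the traceback; the helper names are illustrative, not tempest's actual code.

  import time

  def wait_for_volume_status(cinder, volume_id, wanted='available',
                             timeout=196, interval=2):
      # Poll until the volume reaches the wanted status or the timeout expires.
      deadline = time.time() + timeout
      while time.time() < deadline:
          volume = cinder.volumes.get(volume_id)
          if volume.status == wanted:
              return volume
          time.sleep(interval)
      raise RuntimeError('volume %s stuck in status %s' % (volume_id, volume.status))

  def delete_volume_safely(cinder, volume_id):
      # Deleting a volume that is still 'in-use' or 'detaching' is exactly what
      # raises the BadRequest above; wait for it to become 'available' first.
      wait_for_volume_status(cinder, volume_id, wanted='available')
      cinder.volumes.delete(volume_id)

  Later comments in this thread trace the lingering 'in-use' status back to a detach that never completed on the Nova side, so waiting alone only papers over the race; the delete still has to happen after the server using the volume is gone.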

To manage notifications about this bug go to:
https://bugs.launchpad.net/cinder/+bug/1329333/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1329333] Re: BadRequest: Invalid volume: Volume status must be available or error

2014-11-07 Thread Joe Gordon
This looks like a cinder bug: http://logs.openstack.org/73/127573/7/gate/gate-grenade-dsvm/201512f/logs/old/screen-c-vol.txt.gz#_2014-10-30_01_12_29_220

** Also affects: cinder
   Importance: Undecided
   Status: New

** Changed in: nova
   Status: Triaged => Invalid


Title:
  BadRequest: Invalid volume: Volume status must be available or error

Status in Cinder:
  New
Status in OpenStack Compute (Nova):
  Invalid
Status in Tempest:
  Invalid



[Yahoo-eng-team] [Bug 1329333] Re: BadRequest: Invalid volume: Volume status must be available or error

2014-10-07 Thread Ghanshyam Mann
Invalidating for Tempest.

** Changed in: tempest
   Status: New => Invalid


Title:
  BadRequest: Invalid volume: Volume status must be available or error

Status in OpenStack Compute (Nova):
  New
Status in Tempest:
  Invalid



[Yahoo-eng-team] [Bug 1329333] Re: BadRequest: Invalid volume: Volume status must be available or error

2014-10-07 Thread Ghanshyam Mann
Nova libvirt driver is failing to detach the volume.
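
As a rough sketch only (not Nova's actual code; the driver object and function names here are placeholders), the detach path the logs below walk through looks like this, and an exception from the driver step leaves the volume stuck in 'detaching'/'in-use' unless it is explicitly rolled back:

def detach_volume(cinder, driver, instance, volume_id):
    # c-api: os-begin_detaching moves the volume to status 'detaching'.
    cinder.volumes.begin_detaching(volume_id)
    try:
        # n-cpu: the libvirt driver detaches the device from the guest.
        # This is the step that raises in the traceback further down.
        driver.detach_volume(instance, volume_id)
    except Exception:
        # Without a rollback the volume never leaves 'detaching', and any
        # later delete fails with "status must be available or error".
        cinder.volumes.roll_detaching(volume_id)
        raise
    # Only after the guest-side detach succeeds is the volume marked
    # detached and returned to 'available'.
    cinder.volumes.detach(volume_id)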

n-api logs-

2014-10-06 21:13:37.560 AUDIT nova.api.openstack.compute.contrib.volumes [req-f87a213f-6677-4288-b91e-25769f55a2f3 TestEncryptedCinderVolumes-1235148374 TestEncryptedCinderVolumes-731143481] Detach volume ec116004-afd7-4131-9ee8-02ab666ec7bd

---
c-api logs-

Begin detaching -

2014-10-06 21:13:37.864 18627 INFO cinder.api.openstack.wsgi [req-57cf26e1-8cdd-4e20-943c-393aba8286fd 980965010fee4b7f800ef366726b5927 ba5e42d2f06340058633ad1a5a84b1b1 - - -] POST http://127.0.0.1:8776/v1/ba5e42d2f06340058633ad1a5a84b1b1/volumes/ec116004-afd7-4131-9ee8-02ab666ec7bd/action
2014-10-06 21:13:37.865 18627 DEBUG cinder.api.openstack.wsgi [req-57cf26e1-8cdd-4e20-943c-393aba8286fd 980965010fee4b7f800ef366726b5927 ba5e42d2f06340058633ad1a5a84b1b1 - - -] Action body: {"os-begin_detaching": null} get_method /opt/stack/new/cinder/cinder/api/openstack/wsgi.py:1008
-
Status changed to Detaching -

2014-10-06 21:13:38.078 18627 AUDIT cinder.api.v1.volumes [req-9b3ab70e-897b-4d27-80d1-89d5678a481f 980965010fee4b7f800ef366726b5927 ba5e42d2f06340058633ad1a5a84b1b1 - - -] vol={'migration_status': None, 'availability_zone': u'nova', 'terminated_at': None, 'updated_at': datetime.datetime(2014, 10, 6, 21, 13, 37), 'provider_geometry': None, 'snapshot_id': None, 'ec2_id': None, 'mountpoint': u'/dev/vdb', 'deleted_at': None, 'id': u'ec116004-afd7-4131-9ee8-02ab666ec7bd', 'size': 1L, 'user_id': u'980965010fee4b7f800ef366726b5927', 'attach_time': u'2014-10-06T21:13:35.855790', 'attached_host': None, 'display_description': None, 'volume_admin_metadata': [, ], 'encryption_key_id': u'----', 'project_id': u'ba5e42d2f06340058633ad1a5a84b1b1', 'launched_at': datetime.datetime(2014, 10, 6, 21, 13, 29), 'scheduled_at': datetime.datetime(2014, 10, 6, 21, 13, 29), 'status': u'detaching', 'volume_type_id': u'03ce3467-70d3-442f-a93b-4bcba3ac662a', 'deleted': False, 'provider_location': 
--
n-cpu log - error from driver while detaching volume

2014-10-06 21:13:39.391 18441 TRACE oslo.messaging.rpc.dispatcher Traceback (most recent call last):
2014-10-06 21:13:39.391 18441 TRACE oslo.messaging.rpc.dispatcher   File "/usr/local/lib/python2.7/dist-packages/oslo/messaging/rpc/dispatcher.py", line 133, in _dispatch_and_reply
2014-10-06 21:13:39.391 18441 TRACE oslo.messaging.rpc.dispatcher     incoming.message))
2014-10-06 21:13:39.391 18441 TRACE oslo.messaging.rpc.dispatcher   File "/usr/local/lib/python2.7/dist-packages/oslo/messaging/rpc/dispatcher.py", line 176, in _dispatch
2014-10-06 21:13:39.391 18441 TRACE oslo.messaging.rpc.dispatcher     return self._do_dispatch(endpoint, method, ctxt, args)
2014-10-06 21:13:39.391 18441 TRACE oslo.messaging.rpc.dispatcher   File "/usr/local/lib/python2.7/dist-packages/oslo/messaging/rpc/dispatcher.py", line 122, in _do_dispatch
2014-10-06 21:13:39.391 18441 TRACE oslo.messaging.rpc.dispatcher     result = getattr(endpoint, method)(ctxt, **new_args)
2014-10-06 21:13:39.391 18441 TRACE oslo.messaging.rpc.dispatcher   File "/opt/stack/new/nova/nova/exception.py", line 88, in wrapped
2014-10-06 21:13:39.391 18441 TRACE oslo.messaging.rpc.dispatcher     payload)
2014-10-06 21:13:39.391 18441 TRACE oslo.messaging.rpc.dispatcher   File "/opt/stack/new/nova/nova/openstack/common/excutils.py", line 68, in __exit__
2014-10-06 21:13:39.391 18441 TRACE oslo.messaging.rpc.dispatcher     six.reraise(self.type_, self.value, self.tb)
2014-10-06 21:13:39.391 18441 TRACE oslo.messaging.rpc.dispatcher   File "/opt/stack/new/nova/nova/exception.py", line 71, in wrapped
2014-10-06 21:13:39.391 18441 TRACE oslo.messaging.rpc.dispatcher     return f(self, context, *args, **kw)
2014-10-06 21:13:39.391 18441 TRACE oslo.messaging.rpc.dispatcher   File "/opt/stack/new/nova/nova/compute/manager.py", line 274, in decorated_function
2014-10-06 21:13:39.391 18441 TRACE oslo.messaging.rpc.dispatcher     pass
2014-10-06 21:13:39.391 18441 TRACE oslo.messaging.rpc.dispatcher   File "/opt/stack/new/nova/nova/openstack/common/excutils.py", line 68, in __exit__
2014-10-06 21:13:39.391 18441 TRACE oslo.messaging.rpc.dispatcher     six.reraise(self.type_, self.value, self.tb)
2014-10-06 21:13:39.391 18441 TRACE oslo.messaging.rpc.dispatcher   File "/opt/stack/new/nova/nova/compute/manager.py", line 260, in decorated_function
2014-10-06 21:13:39.391 18441 TRACE oslo.messaging.rpc.dispatcher     return function(self, context, *args, **kwargs)
2014-10-06 21:13:39.391 18441 TRACE oslo.messaging.rpc.dispatcher   File "/opt/stack/new/nova/nova/compute/manager.py", line 303, in decor