This bug is happening again in the Havana release.
We had a power outage, and I issued a create command while the storage
was unavailable, with several other volume-related commands running at
the same time.

2013-10-07 18:41:06.136 8288 WARNING cinder.scheduler.host_manager [req-462656bc-4bb2-478a-8fa4-90ac89e1c39e c02995f25ba44cfab1a3cbd419f045a1 c77235c29fd0431a8e6628ef6d18e07f] volume service is down or disabled. (host: cougar06.scl.lab.tlv.redhat.com)
2013-10-07 18:41:06.152 8288 ERROR cinder.volume.flows.create_volume [req-462656bc-4bb2-478a-8fa4-90ac89e1c39e c02995f25ba44cfab1a3cbd419f045a1 c77235c29fd0431a8e6628ef6d18e07f] Failed to schedule_create_volume: No valid host was found.
2013-10-07 18:44:31.280 8288 WARNING cinder.scheduler.host_manager [req-65c2f4e1-71da-4340-b9f0-afdd05ccdaa9 c02995f25ba44cfab1a3cbd419f045a1 c77235c29fd0431a8e6628ef6d18e07f] volume service is down or disabled. (host: cougar06.scl.lab.tlv.redhat.com)
2013-10-07 18:44:31.281 8288 ERROR cinder.volume.flows.create_volume [req-65c2f4e1-71da-4340-b9f0-afdd05ccdaa9 c02995f25ba44cfab1a3cbd419f045a1 c77235c29fd0431a8e6628ef6d18e07f] Failed to schedule_create_volume: No valid host was found.
2013-10-07 18:44:50.730 8288 WARNING cinder.scheduler.host_manager [req-1c132eb5-ca74-4ab5-91dc-73c25b305165 c02995f25ba44cfab1a3cbd419f045a1 c77235c29fd0431a8e6628ef6d18e07f] volume service is down or disabled. (host: cougar06.scl.lab.tlv.redhat.com)
2013-10-07 18:44:50.731 8288 ERROR cinder.volume.flows.create_volume [req-1c132eb5-ca74-4ab5-91dc-73c25b305165 c02995f25ba44cfab1a3cbd419f045a1 c77235c29fd0431a8e6628ef6d18e07f] Failed to schedule_create_volume: No valid host was found.
2013-10-07 18:47:01.577 8288 WARNING cinder.scheduler.host_manager [req-538ad552-0e19-4307-bea8-10e0a35a8a36 c02995f25ba44cfab1a3cbd419f045a1 c77235c29fd0431a8e6628ef6d18e07f] volume service is down or disabled. (host: cougar06.scl.lab.tlv.redhat.com)
2013-10-07 18:47:01.578 8288 ERROR cinder.volume.flows.create_volume [req-538ad552-0e19-4307-bea8-10e0a35a8a36 c02995f25ba44cfab1a3cbd419f045a1 c77235c29fd0431a8e6628ef6d18e07f] Failed to schedule_create_volume: No valid host was found.
2013-10-07 18:47:18.421 8288 WARNING cinder.scheduler.host_manager [req-3a788eb8-56f5-45f6-b4fd-ade01a05cf9d c02995f25ba44cfab1a3cbd419f045a1 c77235c29fd0431a8e6628ef6d18e07f] volume service is down or disabled. (host: cougar06.scl.lab.tlv.redhat.com)
2013-10-07 18:47:18.422 8288 ERROR cinder.volume.flows.create_volume [req-3a788eb8-56f5-45f6-b4fd-ade01a05cf9d c02995f25ba44cfab1a3cbd419f045a1 c77235c29fd0431a8e6628ef6d18e07f] Failed to schedule_create_volume: No valid host was found.
2013-10-07 18:48:27.732 8288 WARNING cinder.scheduler.host_manager [req-1ef1b47a-27b8-4667-9823-2f91dcc0f29e c02995f25ba44cfab1a3cbd419f045a1 c77235c29fd0431a8e6628ef6d18e07f] volume service is down or disabled. (host: cougar06.scl.lab.tlv.redhat.com)
2013-10-07 18:48:27.733 8288 ERROR cinder.volume.flows.create_volume [req-1ef1b47a-27b8-4667-9823-2f91dcc0f29e c02995f25ba44cfab1a3cbd419f045a1 c77235c29fd0431a8e6628ef6d18e07f] Failed to schedule_create_volume: No valid host was found.
2013-10-07 18:48:51.125 8288 WARNING cinder.scheduler.host_manager [req-7a45f5ed-c6b2-4b9a-9e5f-c90d60b1bba8 c02995f25ba44cfab1a3cbd419f045a1 c77235c29fd0431a8e6628ef6d18e07f] volume service is down or disabled. (host: cougar06.scl.lab.tlv.redhat.com)
2013-10-07 18:48:51.126 8288 ERROR cinder.volume.flows.create_volume [req-7a45f5ed-c6b2-4b9a-9e5f-c90d60b1bba8 c02995f25ba44cfab1a3cbd419f045a1 c77235c29fd0431a8e6628ef6d18e07f] Failed to schedule_create_volume: No valid host was found.
2013-10-07 18:49:54.705 8288 WARNING cinder.scheduler.host_manager [req-5fadfa9b-6d82-4ea4-ac36-15de15aa236a c02995f25ba44cfab1a3cbd419f045a1 c77235c29fd0431a8e6628ef6d18e07f] volume service is down or disabled. (host: cougar06.scl.lab.tlv.redhat.com)
2013-10-07 18:49:54.706 8288 ERROR cinder.volume.flows.create_volume [req-5fadfa9b-6d82-4ea4-ac36-15de15aa236a c02995f25ba44cfab1a3cbd419f045a1 c77235c29fd0431a8e6628ef6d18e07f] Failed to schedule_create_volume: No valid host was found.
2013-10-07 19:31:10.716 8288 CRITICAL cinder [-] need more than 0 values to unpack
2013-10-07 19:37:27.334 2542 WARNING cinder.scheduler.host_manager [req-53603c25-424e-4c05-9eee-de5ae15fb300 c02995f25ba44cfab1a3cbd419f045a1 c77235c29fd0431a8e6628ef6d18e07f] volume service is down or disabled. (host: cougar06.scl.lab.tlv.redhat.com)
2013-10-07 19:37:27.350 2542 ERROR cinder.volume.flows.create_volume [req-53603c25-424e-4c05-9eee-de5ae15fb300 c02995f25ba44cfab1a3cbd419f045a1 c77235c29fd0431a8e6628ef6d18e07f] Failed to schedule_create_volume: No valid host was found.
2013-10-07 20:02:18.403 2542 CRITICAL cinder [-] need more than 0 values to unpack
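
The "need more than 0 values to unpack" CRITICAL line is Python 2's
ValueError message for unpacking an empty sequence, so some code path
apparently unpacked an empty result after the scheduling failures. A
minimal sketch that reproduces the exact message (the hosts variable
and the scenario are hypothetical, not taken from the Cinder source):

    # Python 2.7: tuple-unpacking an empty sequence produces exactly
    # the message seen in the CRITICAL log lines above.
    hosts = []                # hypothetical: no volume service is up
    try:
        best_host, = hosts    # unpack one value from an empty list
    except ValueError as e:
        print e               # need more than 0 values to unpack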

[root@cougar06 ~(keystone_admin)]# cinder list 
+--------------------------------------+-----------+--------------+------+-------------+----------+-------------+
|                  ID                  |   Status  | Display Name | Size | Volume Type | Bootable | Attached to |
+--------------------------------------+-----------+--------------+------+-------------+----------+-------------+
| 1560fa00-752b-4d7b-a747-3ef9bf483692 | available |     new      |  1   |     None    |   True   |             |
| 22c3e84c-1d9b-4a45-9244-06b3ab6c401a |  creating |     bla      |  10  |     None    |  False   |             |
| aadc9c04-17ab-42c4-8bce-c2f63cd287fa | available |  image_new   |  1   |     None    |   True   |             |
+--------------------------------------+-----------+--------------+------+-------------+----------+-------------+


** Changed in: cinder
       Status: Invalid => New

https://bugs.launchpad.net/bugs/1053931

Title:
  Volume hangs in "creating" status even though scheduler raises "No
  valid host" exception

Status in Cinder:
  New
Status in OpenStack Compute (Nova):
  Fix Released

Bug description:
  When the volume creation process fails during scheduling (i.e. there
  is no appropriate host), the status in the DB (and, as a result, in
  the nova volume-list output) hangs at "creating".

  In that case, the only way to discover that volume creation failed is
  to inspect /var/log/nova/nova-scheduler.log, which is hardly an
  obvious step. Moreover, a volume stuck in "creating" status cannot be
  deleted with nova volume-delete; to delete it, one has to change its
  status to "error" directly in the DB.
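
  For reference, a minimal sketch of that manual workaround, assuming a
  MySQL backend, the Folsom-era layout where volumes live in Nova's
  database, and placeholder credentials (the connection details and the
  volume id are illustrative, not part of this report):

    # Hypothetical workaround sketch -- host, credentials, and DB name
    # are placeholders; id 15 matches the stuck row dumped below.
    import MySQLdb

    conn = MySQLdb.connect(host="localhost", user="nova",
                           passwd="secret", db="nova")
    cur = conn.cursor()
    # Flip the stuck volume from "creating" to "error" so that
    # "nova volume-delete" will accept it.
    cur.execute("UPDATE volumes SET status = %s WHERE id = %s",
                ("error", 15))
    conn.commit()
    conn.close()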

  
  The simple scheduler is being used (nova.conf):

  --scheduler_driver=nova.scheduler.simple.SimpleScheduler

  
  Here is sample output from the DB:

  *************************** 3. row ***************************
           created_at: 2012-09-21 09:55:42
           updated_at: NULL
           deleted_at: NULL
              deleted: 0
                   id: 15
               ec2_id: NULL
              user_id: b0aadfc80b094d94b78d68dcdc7e8757
           project_id: 3b892f660ea2458aa9aa9c9a21352632
                 host: NULL
                 size: 1
    availability_zone: nova
          instance_id: NULL
           mountpoint: NULL
          attach_time: NULL
               status: creating
        attach_status: detached
         scheduled_at: NULL
          launched_at: NULL
        terminated_at: NULL
         display_name: NULL
  display_description: NULL
    provider_location: NULL
        provider_auth: NULL
          snapshot_id: NULL
       volume_type_id: NULL

  
  Here is the relevant part of nova-scheduler.log:

  pic': u'volume', u'filter_properties': {u'scheduler_hints': {}}, u'snapshot_id': None, u'volume_id': 16}, u'_context_auth_token': '<SANITIZED>', u'_context_is_admin': True, u'_context_project_id': u'3b892f660ea2458aa9aa9c9a21352632', u'_context_timestamp': u'2012-09-21T10:15:47.091307', u'_context_user_id': u'b0aadfc80b094d94b78d68dcdc7e8757', u'method': u'create_volume', u'_context_remote_address': u'172.18.67.146'} from (pid=11609) _safe_log /usr/lib/python2.7/dist-packages/nova/rpc/common.py:160
  2012-09-21 10:15:47 DEBUG nova.rpc.amqp [req-01f7dd30-0421-4ef3-a675-16b0cf1362eb b0aadfc80b094d94b78d68dcdc7e8757 3b892f660ea2458aa9aa9c9a21352632] unpacked context: {'user_id': u'b0aadfc80b094d94b78d68dcdc7e8757', 'roles': [u'admin'], 'timestamp': '2012-09-21T10:15:47.091307', 'auth_token': '<SANITIZED>', 'remote_address': u'172.18.67.146', 'is_admin': True, 'request_id': u'req-01f7dd30-0421-4ef3-a675-16b0cf1362eb', 'project_id': u'3b892f660ea2458aa9aa9c9a21352632', 'read_deleted': u'no'} from (pid=11609) _safe_log /usr/lib/python2.7/dist-packages/nova/rpc/common.py:160
  2012-09-21 10:15:47 WARNING nova.scheduler.manager [req-01f7dd30-0421-4ef3-a675-16b0cf1362eb b0aadfc80b094d94b78d68dcdc7e8757 3b892f660ea2458aa9aa9c9a21352632] Failed to schedule_create_volume: No valid host was found. Is the appropriate service running?
  2012-09-21 10:15:47 ERROR nova.rpc.amqp [req-01f7dd30-0421-4ef3-a675-16b0cf1362eb b0aadfc80b094d94b78d68dcdc7e8757 3b892f660ea2458aa9aa9c9a21352632] Exception during message handling
  2012-09-21 10:15:47 TRACE nova.rpc.amqp Traceback (most recent call last):
  2012-09-21 10:15:47 TRACE nova.rpc.amqp   File "/usr/lib/python2.7/dist-packages/nova/rpc/amqp.py", line 253, in _process_data
  2012-09-21 10:15:47 TRACE nova.rpc.amqp     rval = node_func(context=ctxt, **node_args)
  2012-09-21 10:15:47 TRACE nova.rpc.amqp   File "/usr/lib/python2.7/dist-packages/nova/scheduler/manager.py", line 97, in _schedule
  2012-09-21 10:15:47 TRACE nova.rpc.amqp     context, ex, *args, **kwargs)
  2012-09-21 10:15:47 TRACE nova.rpc.amqp   File "/usr/lib/python2.7/contextlib.py", line 24, in __exit__
  2012-09-21 10:15:47 TRACE nova.rpc.amqp     self.gen.next()
  2012-09-21 10:15:47 TRACE nova.rpc.amqp   File "/usr/lib/python2.7/dist-packages/nova/scheduler/manager.py", line 92, in _schedule
  2012-09-21 10:15:47 TRACE nova.rpc.amqp     return driver_method(*args, **kwargs)
  2012-09-21 10:15:47 TRACE nova.rpc.amqp   File "/usr/lib/python2.7/dist-packages/nova/scheduler/simple.py", line 227, in schedule_create_volume
  2012-09-21 10:15:47 TRACE nova.rpc.amqp     raise exception.NoValidHost(reason=msg)
  2012-09-21 10:15:47 TRACE nova.rpc.amqp NoValidHost: No valid host was found. Is the appropriate service running?
  2012-09-21 10:15:47 TRACE nova.rpc.amqp
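
  The traceback shows NoValidHost propagating out of
  schedule_create_volume without the volume row ever being updated,
  which is why the status stays at "creating". The Nova fix marked
  "Fix Released" above presumably takes roughly this shape (a hedged
  sketch, not the actual committed patch; db.volume_update follows the
  Folsom-era nova.db API):

    # Sketch only: persist the scheduling failure so the volume shows
    # "error" instead of hanging in "creating" forever.
    from nova import db
    from nova import exception

    def schedule_create_volume(self, context, volume_id, *args, **kwargs):
        try:
            return self.driver.schedule_create_volume(
                context, volume_id, *args, **kwargs)
        except exception.NoValidHost:
            # Mark the volume as errored so it can be deleted.
            db.volume_update(context, volume_id, {'status': 'error'})
            raise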
