Reviewed:  https://review.opendev.org/755799
Committed: https://git.openstack.org/cgit/openstack/nova/commit/?id=dd1e6d4b0cee465fd89744e306fcd25228b3f7cc
Submitter: Zuul
Branch:    master

commit dd1e6d4b0cee465fd89744e306fcd25228b3f7cc
Author: Lee Yarwood <lyarw...@redhat.com>
Date:   Fri Oct 2 15:11:25 2020 +0100

    libvirt: Increase incremental and max sleep time during device detach
    
    Bug #1894804 outlines how DEVICE_DELETED events were often missing from
    QEMU on Focal-based OpenStack CI hosts, as originally seen in bug
    #1882521. This was eventually tracked down to undefined QEMU behaviour
    when a new device_del QMP command is received while another is still
    being processed, causing the original attempt to be aborted.
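
    For reference, the detach handshake with QEMU happens over QMP: a
    device_del request is issued and the detach is only really complete once
    QEMU emits the corresponding DEVICE_DELETED event. Below is a minimal
    sketch of that exchange, assuming a hypothetical monitor socket path and
    device id (in practice libvirt owns the QMP session; Nova never drives
    QEMU directly):

        import json
        import socket

        def qmp_device_del(sock_path="/tmp/qmp.sock", device_id="virtio-disk1"):
            s = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM)
            s.connect(sock_path)
            f = s.makefile("rw")
            f.readline()                                   # QMP greeting banner
            f.write(json.dumps({"execute": "qmp_capabilities"}) + "\n")
            f.flush()
            f.readline()                                   # {"return": {}}
            # Ask QEMU to unplug the device; the guest has to acknowledge it.
            f.write(json.dumps({"execute": "device_del",
                                "arguments": {"id": device_id}}) + "\n")
            f.flush()
            # The immediate {"return": {}} only means the request was accepted;
            # only the DEVICE_DELETED event confirms the device is really gone.
            for line in f:
                msg = json.loads(line)
                if msg.get("event") == "DEVICE_DELETED":
                    return msg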
    
    We hit this race in slower OpenStack CI envs because n-cpu rather crudely
    retries attempts to detach devices using the RetryDecorator from
    oslo.service. The default incremental sleep time is currently short
    enough that QEMU is often still processing the first device_del request
    on these slower CI hosts when n-cpu asks libvirt to retry the detach,
    sending another device_del to QEMU and hitting the above behaviour.
    
    Additionally, we have seen the following check being hit when testing
    with QEMU >= v5.0.0. This check now rejects overlapping device_del
    requests in QEMU rather than aborting the original:
    
    https://github.com/qemu/qemu/commit/cce8944cc9efab47d4bf29cfffb3470371c3541b
    
    This change aims to avoid this situation entirely by raising the default
    incremental sleep time between detach requests from 2 seconds to 10,
    leaving enough time for the first attempt to complete. The overall
    maximum sleep time is also increased from 30 to 60 seconds.
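
    For illustration, the retry shape involved looks roughly like the sketch
    below. The values reflect the new defaults described above and the retry
    count seen in the CI logs; the exception class is hypothetical and this
    is not a copy of the Nova code. RetryDecorator re-invokes the decorated
    function, sleeping an increasing amount of time between attempts up to
    the configured maximum:

        from oslo_service import loopingcall

        class DeviceBusy(Exception):
            """Hypothetical retryable error used only in this sketch."""

        attempts = {"count": 0}

        @loopingcall.RetryDecorator(
            max_retry_count=7,    # matches the retry count seen in the CI logs
            inc_sleep_time=10,    # raised from 2 seconds by this change
            max_sleep_time=60,    # raised from 30 seconds by this change
            exceptions=(DeviceBusy,))
        def _detach_attempt():
            attempts["count"] += 1
            if attempts["count"] < 3:
                # Device still present in the live config: raising a listed
                # exception makes the decorator sleep and call us again.
                raise DeviceBusy()
            return "detached"

        print(_detach_attempt())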
    
    Future work will aim to remove this retry logic entirely with a libvirt
    event driven approach, waiting for the
    VIR_DOMAIN_EVENT_ID_DEVICE_REMOVED and
    VIR_DOMAIN_EVENT_ID_DEVICE_REMOVAL_FAILED events before retrying.
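
    A rough sketch of what that event driven approach could look like with
    libvirt-python (not the actual follow-up change; connection, domain and
    event loop handling are simplified, and a real caller must run the
    default libvirt event loop for the callbacks to fire):

        import libvirt

        def _device_removed(conn, dom, dev, state):
            # QEMU confirmed the unplug (DEVICE_DELETED reached libvirt).
            state["removed"].add(dev)

        def _removal_failed(conn, dom, dev, state):
            # The guest refused or the unplug could not be completed.
            state["failed"].add(dev)

        def watch_device_detach(conn, dom, state):
            # Assumes libvirt.virEventRegisterDefaultImpl() was called and a
            # thread is running libvirt.virEventRunDefaultImpl() elsewhere.
            conn.domainEventRegisterAny(
                dom, libvirt.VIR_DOMAIN_EVENT_ID_DEVICE_REMOVED,
                _device_removed, state)
            conn.domainEventRegisterAny(
                dom, libvirt.VIR_DOMAIN_EVENT_ID_DEVICE_REMOVAL_FAILED,
                _removal_failed, state)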
    
    Finally, the cleanup of unused arguments in detach_device_with_retry is
    left to a follow-up change in order to keep this initial change small
    enough to backport quickly.
    
    Closes-Bug: #1882521
    Related-Bug: #1894804
    Change-Id: Ib9ed7069cef5b73033351f7a78a3fb566753970d


** Changed in: nova
       Status: In Progress => Fix Released

https://bugs.launchpad.net/bugs/1882521

Title:
  Failing device detachments on Focal

Status in Cinder:
  New
Status in OpenStack Compute (nova):
  Fix Released

Bug description:
  The following tests are failing consistently when deploying devstack
  on Focal in the CI; see https://review.opendev.org/734029 for detailed
  logs:

  tempest.api.compute.servers.test_server_rescue_negative.ServerRescueNegativeTestJSON.test_rescued_vm_detach_volume
  tempest.api.compute.volumes.test_attach_volume.AttachVolumeMultiAttachTest.test_resize_server_with_multiattached_volume
  tempest.api.compute.servers.test_server_rescue.ServerStableDeviceRescueTest.test_stable_device_rescue_disk_virtio_with_volume_attached
  tearDownClass (tempest.api.compute.servers.test_server_rescue.ServerStableDeviceRescueTest)

  Sample extract from nova-compute log:

  Jun 08 08:48:24.384559 ubuntu-focal-rax-dfw-0017012548 nova-compute[82495]: DEBUG oslo.service.loopingcall [-] Exception which is in the suggested list of exceptions occurred while invoking function: nova.virt.libvirt.guest.Guest.detach_device_with_retry.<locals>._do_wait_and_retry_detach. {{(pid=82495) _func /usr/local/lib/python3.8/dist-packages/oslo_service/loopingcall.py:410}}
  Jun 08 08:48:24.384862 ubuntu-focal-rax-dfw-0017012548 nova-compute[82495]: DEBUG oslo.service.loopingcall [-] Cannot retry nova.virt.libvirt.guest.Guest.detach_device_with_retry.<locals>._do_wait_and_retry_detach upon suggested exception since retry count (7) reached max retry count (7). {{(pid=82495) _func /usr/local/lib/python3.8/dist-packages/oslo_service/loopingcall.py:416}}
  Jun 08 08:48:24.388855 ubuntu-focal-rax-dfw-0017012548 nova-compute[82495]: ERROR oslo.service.loopingcall [-] Dynamic interval looping call 'oslo_service.loopingcall.RetryDecorator.__call__.<locals>._func' failed: nova.exception.DeviceDetachFailed: Device detach failed for vdb: Unable to detach the device from the live config.
  Jun 08 08:48:24.388855 ubuntu-focal-rax-dfw-0017012548 nova-compute[82495]: ERROR oslo.service.loopingcall Traceback (most recent call last):
  Jun 08 08:48:24.388855 ubuntu-focal-rax-dfw-0017012548 nova-compute[82495]: ERROR oslo.service.loopingcall   File "/usr/local/lib/python3.8/dist-packages/oslo_service/loopingcall.py", line 150, in _run_loop
  Jun 08 08:48:24.388855 ubuntu-focal-rax-dfw-0017012548 nova-compute[82495]: ERROR oslo.service.loopingcall     result = func(*self.args, **self.kw)
  Jun 08 08:48:24.388855 ubuntu-focal-rax-dfw-0017012548 nova-compute[82495]: ERROR oslo.service.loopingcall   File "/usr/local/lib/python3.8/dist-packages/oslo_service/loopingcall.py", line 428, in _func
  Jun 08 08:48:24.388855 ubuntu-focal-rax-dfw-0017012548 nova-compute[82495]: ERROR oslo.service.loopingcall     return self._sleep_time
  Jun 08 08:48:24.388855 ubuntu-focal-rax-dfw-0017012548 nova-compute[82495]: ERROR oslo.service.loopingcall   File "/usr/local/lib/python3.8/dist-packages/oslo_utils/excutils.py", line 220, in __exit__
  Jun 08 08:48:24.388855 ubuntu-focal-rax-dfw-0017012548 nova-compute[82495]: ERROR oslo.service.loopingcall     self.force_reraise()
  Jun 08 08:48:24.388855 ubuntu-focal-rax-dfw-0017012548 nova-compute[82495]: ERROR oslo.service.loopingcall   File "/usr/local/lib/python3.8/dist-packages/oslo_utils/excutils.py", line 196, in force_reraise
  Jun 08 08:48:24.388855 ubuntu-focal-rax-dfw-0017012548 nova-compute[82495]: ERROR oslo.service.loopingcall     six.reraise(self.type_, self.value, self.tb)
  Jun 08 08:48:24.388855 ubuntu-focal-rax-dfw-0017012548 nova-compute[82495]: ERROR oslo.service.loopingcall   File "/usr/local/lib/python3.8/dist-packages/six.py", line 703, in reraise
  Jun 08 08:48:24.388855 ubuntu-focal-rax-dfw-0017012548 nova-compute[82495]: ERROR oslo.service.loopingcall     raise value
  Jun 08 08:48:24.388855 ubuntu-focal-rax-dfw-0017012548 nova-compute[82495]: ERROR oslo.service.loopingcall   File "/usr/local/lib/python3.8/dist-packages/oslo_service/loopingcall.py", line 407, in _func
  Jun 08 08:48:24.388855 ubuntu-focal-rax-dfw-0017012548 nova-compute[82495]: ERROR oslo.service.loopingcall     result = f(*args, **kwargs)
  Jun 08 08:48:24.388855 ubuntu-focal-rax-dfw-0017012548 nova-compute[82495]: ERROR oslo.service.loopingcall   File "/opt/stack/nova/nova/virt/libvirt/guest.py", line 453, in _do_wait_and_retry_detach
  Jun 08 08:48:24.388855 ubuntu-focal-rax-dfw-0017012548 nova-compute[82495]: ERROR oslo.service.loopingcall     raise exception.DeviceDetachFailed(
  Jun 08 08:48:24.388855 ubuntu-focal-rax-dfw-0017012548 nova-compute[82495]: ERROR oslo.service.loopingcall nova.exception.DeviceDetachFailed: Device detach failed for vdb: Unable to detach the device from the live config.
  Jun 08 08:48:24.388855 ubuntu-focal-rax-dfw-0017012548 nova-compute[82495]: ERROR oslo.service.loopingcall
  Jun 08 08:48:24.390684 ubuntu-focal-rax-dfw-0017012548 nova-compute[82495]: WARNING nova.virt.block_device [None req-8af75b5f-2587-4ce7-9523-d2902eb45a38 tempest-ServerRescueNegativeTestJSON-1578800383 tempest-ServerRescueNegativeTestJSON-1578800383] [instance: 76f86b1f-8b11-44e6-b718-eda3e7e18937] Guest refused to detach volume 6b0cac03-d6d4-48ae-bf56-06de389c0869: nova.exception.DeviceDetachFailed: Device detach failed for vdb: Unable to detach the device from the live config.
