I'm notifying the nova team - maybe they are aware of an issue that could cause
this. To reiterate: this host works as long as no cinder volume is
used? I.e., running an instance from local storage works fine for it?
(That was the intent of my previous question.)

FWIW, it does not look like a db problem so far. ;-)

** Also affects: nova
   Importance: Undecided
       Status: New

** Changed in: kolla-ansible
       Status: New => Incomplete

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1880509

Title:
  Failed to attach volume - Connection to the hypervisor is broken on
  host

Status in kolla-ansible:
  Incomplete
Status in OpenStack Compute (nova):
  New

Bug description:
  Using Rocky centos/source containers
  Docker version - 18.06.1-ce
  Hosts are Ubuntu 18.04.4 LTS

  The following error occurs in nova_compute, and only when 
attempting to attach an existing, unattached volume to a VM.
  -  Instances are created on the same host without issue, and the 
hypervisor is reachable via Virtual Machine Manager (VMM).  For testing purposes, 
I was able to attach the same volume via VMM.
  -  All compute, block, and network services are enabled and up.
  - 'nova hypervisor-list' shows the hypervisor up
  - The same volume and other volumes attach to VMs on a different hypervisor 
without issue.
  - The only difference I see between the failing hypervisor's hostname and the 
working one's is that the failing one reports an FQDN.  Note that the log below 
specifies only the short hostname, not the FQDN.
  - From nova_compute, both the hostname and the FQDN resolve correctly to the 
same IP, and I can ping either.
  - virt_type is kvm
  - compute_driver is libvirt.LibvirtDriver
  - the kernel on both hosts (failing and working) is 
4.15.0-101-generic

  Is this a settings issue, a permissions issue, a driver issue, or
  something else?
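
Since the only visible difference between the two hypervisors is the FQDN
vs. short-hostname reporting, one quick check is whether the short name and
the FQDN agree on the compute host. A minimal stdlib-only sketch (a
hypothetical diagnostic, not part of nova):

```python
import socket

def hostname_mismatch():
    """Return True if the host's short name differs from the first
    label of its FQDN - the kind of naming skew suspected between
    the failing and working hypervisors."""
    short = socket.gethostname().split('.')[0]
    fqdn = socket.getfqdn()
    return short != fqdn.split('.')[0]

if __name__ == "__main__":
    print("short name :", socket.gethostname().split('.')[0])
    print("fqdn       :", socket.getfqdn())
    print("mismatch   :", hostname_mismatch())
```

Running this inside the nova_compute container and on the bare host would
show whether the two views of the hostname diverge.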

  From nova-compute.log:

  2020-05-24 23:52:54.692 7 DEBUG oslo_concurrency.lockutils 
[req-94fdd34c-7799-4e1b-835e-78ee27236d71 240c5b223f324b5b84471039304fc530 
96d55c41d2094f679c37078b034f552e - default default] Lock 
"d51fddde-1b0c-4365-b1ec-ad508d2d0bed" released by 
"nova.compute.manager.do_reserve" :: held 0.154s inner 
/var/lib/kolla/venv/lib/python2.7/site-packages/oslo_concurrency/lockutils.py:285
  2020-05-24 23:52:55.469 7 DEBUG oslo_concurrency.lockutils 
[req-94fdd34c-7799-4e1b-835e-78ee27236d71 240c5b223f324b5b84471039304fc530 
96d55c41d2094f679c37078b034f552e - default default] Lock 
"d51fddde-1b0c-4365-b1ec-ad508d2d0bed" acquired by 
"nova.compute.manager.do_attach_volume" :: waited 0.000s inner 
/var/lib/kolla/venv/lib/python2.7/site-packages/oslo_concurrency/lockutils.py:273
  2020-05-24 23:52:55.470 7 INFO nova.compute.manager 
[req-94fdd34c-7799-4e1b-835e-78ee27236d71 240c5b223f324b5b84471039304fc530 
96d55c41d2094f679c37078b034f552e - default default] [instance: 
d51fddde-1b0c-4365-b1ec-ad508d2d0bed] Attaching volume 
cadca51f-ff52-4b12-9777-4f86514e3eef to /dev/vdb
  2020-05-24 23:52:55.471 7 DEBUG nova.objects.instance 
[req-94fdd34c-7799-4e1b-835e-78ee27236d71 240c5b223f324b5b84471039304fc530 
96d55c41d2094f679c37078b034f552e - default default] Lazy-loading 'flavor' on 
Instance uuid d51fddde-1b0c-4365-b1ec-ad508d2d0bed obj_load_attr 
/var/lib/kolla/venv/lib/python2.7/site-packages/nova/objects/instance.py:1111
  2020-05-24 23:52:55.738 7 DEBUG os_brick.utils 
[req-94fdd34c-7799-4e1b-835e-78ee27236d71 240c5b223f324b5b84471039304fc530 
96d55c41d2094f679c37078b034f552e - default default] ==> 
get_connector_properties: call u"{'execute': None, 'my_ip': '192.168.3.252', 
'enforce_multipath': True, 'host': 'st1', 'root_helper': 'sudo nova-rootwrap 
/etc/nova/rootwrap.conf', 'multipath': False}" trace_logging_wrapper 
/var/lib/kolla/venv/lib/python2.7/site-packages/os_brick/utils.py:146
  2020-05-24 23:52:56.428 7 DEBUG os_brick.initiator.linuxfc 
[req-94fdd34c-7799-4e1b-835e-78ee27236d71 240c5b223f324b5b84471039304fc530 
96d55c41d2094f679c37078b034f552e - default default] No Fibre Channel support 
detected on system. get_fc_hbas 
/var/lib/kolla/venv/lib/python2.7/site-packages/os_brick/initiator/linuxfc.py:157
  2020-05-24 23:52:56.429 7 DEBUG os_brick.initiator.linuxfc 
[req-94fdd34c-7799-4e1b-835e-78ee27236d71 240c5b223f324b5b84471039304fc530 
96d55c41d2094f679c37078b034f552e - default default] No Fibre Channel support 
detected on system. get_fc_hbas 
/var/lib/kolla/venv/lib/python2.7/site-packages/os_brick/initiator/linuxfc.py:157
  2020-05-24 23:52:56.430 7 DEBUG os_brick.utils 
[req-94fdd34c-7799-4e1b-835e-78ee27236d71 240c5b223f324b5b84471039304fc530 
96d55c41d2094f679c37078b034f552e - default default] <== 
get_connector_properties: return (691ms) {'initiator': 
u'iqn.1994-05.com.redhat:cddc7297faec', 'ip': u'192.168.3.252', 'platform': 
u'x86_64', 'host': u'st1', 'do_local_attach': False, 'os_type': u'linux2', 
'multipath': False} trace_logging_wrapper 
/var/lib/kolla/venv/lib/python2.7/site-packages/os_brick/utils.py:170
  2020-05-24 23:52:56.432 7 DEBUG nova.virt.block_device 
[req-94fdd34c-7799-4e1b-835e-78ee27236d71 240c5b223f324b5b84471039304fc530 
96d55c41d2094f679c37078b034f552e - default default] [instance: 
d51fddde-1b0c-4365-b1ec-ad508d2d0bed] Updating existing volume attachment 
record: 9a69f304-8468-47f6-8b59-5edd1e3dd75f _volume_attach 
/var/lib/kolla/venv/lib/python2.7/site-packages/nova/virt/block_device.py:535
  2020-05-24 23:53:09.078 7 ERROR nova.virt.block_device 
[req-94fdd34c-7799-4e1b-835e-78ee27236d71 240c5b223f324b5b84471039304fc530 
96d55c41d2094f679c37078b034f552e - default default] [instance: 
d51fddde-1b0c-4365-b1ec-ad508d2d0bed] Driver failed to attach volume 
cadca51f-ff52-4b12-9777-4f86514e3eef at /dev/vdb: HypervisorUnavailable: 
Connection to the hypervisor is broken on host: st1
  2020-05-24 23:53:09.078 7 ERROR nova.virt.block_device [instance: 
d51fddde-1b0c-4365-b1ec-ad508d2d0bed] Traceback (most recent call last):
  2020-05-24 23:53:09.078 7 ERROR nova.virt.block_device [instance: 
d51fddde-1b0c-4365-b1ec-ad508d2d0bed]   File 
"/var/lib/kolla/venv/lib/python2.7/site-packages/nova/virt/block_device.py", 
line 563, in _volume_attach
  2020-05-24 23:53:09.078 7 ERROR nova.virt.block_device [instance: 
d51fddde-1b0c-4365-b1ec-ad508d2d0bed]     device_type=self['device_type'], 
encryption=encryption)
  2020-05-24 23:53:09.078 7 ERROR nova.virt.block_device [instance: 
d51fddde-1b0c-4365-b1ec-ad508d2d0bed]   File 
"/var/lib/kolla/venv/lib/python2.7/site-packages/nova/virt/libvirt/driver.py", 
line 1481, in attach_volume
  2020-05-24 23:53:09.078 7 ERROR nova.virt.block_device [instance: 
d51fddde-1b0c-4365-b1ec-ad508d2d0bed]     encryption=encryption)
  2020-05-24 23:53:09.078 7 ERROR nova.virt.block_device [instance: 
d51fddde-1b0c-4365-b1ec-ad508d2d0bed]   File 
"/var/lib/kolla/venv/lib/python2.7/site-packages/nova/virt/libvirt/driver.py", 
line 1264, in _connect_volume
  2020-05-24 23:53:09.078 7 ERROR nova.virt.block_device [instance: 
d51fddde-1b0c-4365-b1ec-ad508d2d0bed]     
vol_driver.connect_volume(connection_info, instance)
  2020-05-24 23:53:09.078 7 ERROR nova.virt.block_device [instance: 
d51fddde-1b0c-4365-b1ec-ad508d2d0bed]   File 
"/var/lib/kolla/venv/lib/python2.7/site-packages/nova/virt/libvirt/volume/fs.py",
 line 117, in connect_volume
  2020-05-24 23:53:09.078 7 ERROR nova.virt.block_device [instance: 
d51fddde-1b0c-4365-b1ec-ad508d2d0bed]     self._mount_options(connection_info))
  2020-05-24 23:53:09.078 7 ERROR nova.virt.block_device [instance: 
d51fddde-1b0c-4365-b1ec-ad508d2d0bed]   File 
"/var/lib/kolla/venv/lib/python2.7/site-packages/nova/virt/libvirt/volume/mount.py",
 line 409, in mount
  2020-05-24 23:53:09.078 7 ERROR nova.virt.block_device [instance: 
d51fddde-1b0c-4365-b1ec-ad508d2d0bed]     with __manager__.get_state() as 
mount_state:
  2020-05-24 23:53:09.078 7 ERROR nova.virt.block_device [instance: 
d51fddde-1b0c-4365-b1ec-ad508d2d0bed]   File 
"/usr/lib64/python2.7/contextlib.py", line 17, in __enter__
  2020-05-24 23:53:09.078 7 ERROR nova.virt.block_device [instance: 
d51fddde-1b0c-4365-b1ec-ad508d2d0bed]     return self.gen.next()
  2020-05-24 23:53:09.078 7 ERROR nova.virt.block_device [instance: 
d51fddde-1b0c-4365-b1ec-ad508d2d0bed]   File 
"/var/lib/kolla/venv/lib/python2.7/site-packages/nova/virt/libvirt/volume/mount.py",
 line 89, in get_state
  2020-05-24 23:53:09.078 7 ERROR nova.virt.block_device [instance: 
d51fddde-1b0c-4365-b1ec-ad508d2d0bed]     raise 
exception.HypervisorUnavailable(host=CONF.host)
  2020-05-24 23:53:09.078 7 ERROR nova.virt.block_device [instance: 
d51fddde-1b0c-4365-b1ec-ad508d2d0bed] HypervisorUnavailable: Connection to the 
hypervisor is broken on host: st1
  2020-05-24 23:53:09.078 7 ERROR nova.virt.block_device [instance: 
d51fddde-1b0c-4365-b1ec-ad508d2d0bed]
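
FWIW, the traceback ends in mount.py's get_state(), where nova's
volume-mount manager refuses to proceed once it has marked the host
connection down - so the error can persist even while libvirt answers
virsh directly. A simplified sketch of that guard pattern (hypothetical
names, not nova's actual code):

```python
from contextlib import contextmanager

class HypervisorUnavailable(Exception):
    """Stand-in for nova.exception.HypervisorUnavailable."""

class MountStateManager:
    """Toy model of the guard in nova/virt/libvirt/volume/mount.py:
    once the libvirt connection is considered broken, every volume
    mount attempt fails fast with HypervisorUnavailable."""

    def __init__(self, host):
        self.host = host
        self.state = object()  # populated while the connection is up

    def host_down(self):
        # Called when nova loses its libvirt connection.
        self.state = None

    @contextmanager
    def get_state(self):
        if self.state is None:
            raise HypervisorUnavailable(
                "Connection to the hypervisor is broken on host: %s"
                % self.host)
        yield self.state

mgr = MountStateManager("st1")
mgr.host_down()
try:
    with mgr.get_state():
        pass
except HypervisorUnavailable as e:
    print(e)  # Connection to the hypervisor is broken on host: st1
```

If this is what is happening, restarting the nova_compute container (so the
manager re-establishes its libvirt connection) may clear the state; whether
the connection drops in the first place is the question for the nova team.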

To manage notifications about this bug go to:
https://bugs.launchpad.net/kolla-ansible/+bug/1880509/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to     : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp
