Patch proposed against os-brick here [0].

[0] https://review.openstack.org/#/c/638639/
** Also affects: os-brick
   Importance: Undecided
       Status: New

** Changed in: os-brick
     Assignee: (unassigned) => Sahid Orentino (sahid-ferdjaoui)

** Changed in: nova
       Status: New => Invalid

https://bugs.launchpad.net/bugs/1815844

Title:
  iscsi multipath dm-N device only used on first volume attachment

Status in OpenStack nova-compute charm:
  Invalid
Status in OpenStack Compute (nova):
  Invalid
Status in os-brick:
  New

Bug description:
  With nova-compute from cloud:xenial-queens and use-multipath=true,
  iSCSI multipath is configured and the dm-N device is used on the
  first attachment, but subsequent attachments only use a single path.
  The back-end storage is a Pure Storage array. The multipath.conf is
  attached.

  The issue is easily reproduced as shown below:

  jog@pnjostkinfr01:~⟫ openstack volume create pure2 --size 10 --type pure
  +---------------------+--------------------------------------+
  | Field               | Value                                |
  +---------------------+--------------------------------------+
  | attachments         | []                                   |
  | availability_zone   | nova                                 |
  | bootable            | false                                |
  | consistencygroup_id | None                                 |
  | created_at          | 2019-02-13T23:07:40.000000           |
  | description         | None                                 |
  | encrypted           | False                                |
  | id                  | e286161b-e8e8-47b0-abe3-4df411993265 |
  | migration_status    | None                                 |
  | multiattach         | False                                |
  | name                | pure2                                |
  | properties          |                                      |
  | replication_status  | None                                 |
  | size                | 10                                   |
  | snapshot_id         | None                                 |
  | source_volid        | None                                 |
  | status              | creating                             |
  | type                | pure                                 |
  | updated_at          | None                                 |
  | user_id             | c1fa4ae9a0b446f2ba64eebf92705d53     |
  +---------------------+--------------------------------------+

  jog@pnjostkinfr01:~⟫ openstack volume show pure2
  +--------------------------------+--------------------------------------+
  | Field                          | Value                                |
  +--------------------------------+--------------------------------------+
  | attachments                    | []                                   |
  | availability_zone              | nova                                 |
  | bootable                       | false                                |
  | consistencygroup_id            | None                                 |
  | created_at                     | 2019-02-13T23:07:40.000000           |
  | description                    | None                                 |
  | encrypted                      | False                                |
  | id                             | e286161b-e8e8-47b0-abe3-4df411993265 |
  | migration_status               | None                                 |
  | multiattach                    | False                                |
  | name                           | pure2                                |
  | os-vol-host-attr:host          | cinder@cinder-pure#cinder-pure       |
  | os-vol-mig-status-attr:migstat | None                                 |
  | os-vol-mig-status-attr:name_id | None                                 |
  | os-vol-tenant-attr:tenant_id   | 9be499fd1eee48dfb4dc6faf3cc0a1d7     |
  | properties                     |                                      |
  | replication_status             | None                                 |
  | size                           | 10                                   |
  | snapshot_id                    | None                                 |
  | source_volid                   | None                                 |
  | status                         | available                            |
  | type                           | pure                                 |
  | updated_at                     | 2019-02-13T23:07:41.000000           |
  | user_id                        | c1fa4ae9a0b446f2ba64eebf92705d53     |
  +--------------------------------+--------------------------------------+
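  Before attaching, it is worth confirming that the compute node really
  is configured for multipath. A minimal sketch, assuming the charm's
  use-multipath=true renders the Queens-era volume_use_multipath option
  (which replaced iscsi_use_multipath) into the [libvirt] section of
  nova.conf; check_multipath_cfg.py is a hypothetical helper, not part
  of nova:

    # check_multipath_cfg.py -- report whether nova-compute has
    # multipath enabled for volume attachments (assumption: the option
    # lands in [libvirt] as volume_use_multipath on this release)
    import configparser

    cfg = configparser.ConfigParser(strict=False)
    cfg.read('/etc/nova/nova.conf')
    print(cfg.get('libvirt', 'volume_use_multipath', fallback='<unset>'))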
  Add the volume to an instance:

  jog@pnjostkinfr01:~⟫ openstack server add volume T1 pure2
  jog@pnjostkinfr01:~⟫ openstack server show T1
  +-------------------------------------+----------------------------------------------------------+
  | Field                               | Value                                                    |
  +-------------------------------------+----------------------------------------------------------+
  | OS-DCF:diskConfig                   | MANUAL                                                   |
  | OS-EXT-AZ:availability_zone         | nova                                                     |
  | OS-EXT-SRV-ATTR:host                | pnjostkcompps1                                           |
  | OS-EXT-SRV-ATTR:hypervisor_hostname | pnjostkcompps1.maas                                      |
  | OS-EXT-SRV-ATTR:instance_name       | instance-00000001                                        |
  | OS-EXT-STS:power_state              | Running                                                  |
  | OS-EXT-STS:task_state               | None                                                     |
  | OS-EXT-STS:vm_state                 | active                                                   |
  | OS-SRV-USG:launched_at              | 2019-02-08T22:08:49.000000                               |
  | OS-SRV-USG:terminated_at            | None                                                     |
  | accessIPv4                          |                                                          |
  | accessIPv6                          |                                                          |
  | addresses                           | test-net=192.168.0.3                                     |
  | config_drive                        |                                                          |
  | created                             | 2019-02-08T22:08:29Z                                     |
  | flavor                              | test (986ce042-27e5-4a45-8edf-3df704c7db6f)              |
  | hostId                              | 50e26a44ba01548369a53578c817e7e1d99aed184261d203353840d3 |
  | id                                  | dfe2704c-8419-41e8-97c4-53f3e8ad00a3                     |
  | image                               | 0db099d0-9d72-4d15-878c-b86b439d6a99                     |
  | key_name                            | None                                                     |
  | name                                | T1                                                       |
  | progress                            | 0                                                        |
  | project_id                          | 9be499fd1eee48dfb4dc6faf3cc0a1d7                         |
  | properties                          |                                                          |
  | security_groups                     | name='default'                                           |
  | status                              | ACTIVE                                                   |
  | updated                             | 2019-02-08T23:14:15Z                                     |
  | user_id                             | c1fa4ae9a0b446f2ba64eebf92705d53                         |
  | volumes_attached                    | id='e286161b-e8e8-47b0-abe3-4df411993265'                |
  +-------------------------------------+----------------------------------------------------------+

  Check the device name used in the libvirt domain xml:

  <disk type='block' device='disk'>
    <driver name='qemu' type='raw' cache='none' io='native' discard='unmap'/>
    <source dev='/dev/dm-0'/>    ## NOTE multipath device
    <backingStore/>
    <target dev='vdb' bus='virtio'/>
    <serial>e286161b-e8e8-47b0-abe3-4df411993265</serial>
    <alias name='virtio-disk1'/>
    <address type='pci' domain='0x0000' bus='0x00' slot='0x06' function='0x0'/>
  </disk>

  Show the dm device and its paths:

  ubuntu@pnjostkcompps1:/var/log/nova$ sudo dmsetup info /dev/dm-0
  Name:              3624a9370150c5d6aef724e2d00012029
  State:             ACTIVE
  Read Ahead:        256
  Tables present:    LIVE
  Open count:        2
  Event number:      0
  Major, minor:      252, 0
  Number of targets: 1
  UUID: mpath-3624a9370150c5d6aef724e2d00012029

  ubuntu@pnjostkcompps1:/var/log/nova$ sudo dmsetup ls --tree
  3624a9370150c5d6aef724e2d00012029 (252:0)
   ├─ (8:64)
   ├─ (8:48)
   ├─ (8:32)
   └─ (8:16)

  ubuntu@pnjostkcompps1:/var/log/nova$ sudo multipath -ll
  3624a9370150c5d6aef724e2d00012029 dm-0 PURE,FlashArray
  size=10G features='0' hwhandler='1 alua' wp=rw
  `-+- policy='queue-length 0' prio=50 status=active
    |- 19:0:0:1 sdb 8:16 active ready running
    |- 20:0:0:1 sdc 8:32 active ready running
    |- 21:0:0:1 sdd 8:48 active ready running
    `- 22:0:0:1 sde 8:64 active ready running
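  Checking the domain XML by hand gets tedious when repeating the
  attach/detach cycle, so here is a small sketch that automates it.
  It assumes virsh access on the compute node and the instance name
  shown above; check_disks.py is a hypothetical helper, not an
  existing tool:

    # check_disks.py -- print each disk's source device from the
    # domain XML and whether it looks like a multipath device
    import subprocess
    import xml.etree.ElementTree as ET

    xml_text = subprocess.check_output(
        ['virsh', 'dumpxml', 'instance-00000001'], text=True)
    for disk in ET.fromstring(xml_text).findall('.//devices/disk'):
        source = disk.find('source')
        if source is None or 'dev' not in source.attrib:
            continue  # file-backed disks have no <source dev=...>
        dev = source.attrib['dev']
        # multipath maps show up as /dev/dm-N or /dev/mapper/<wwid>
        kind = ('multipath' if ('/dm-' in dev or '/mapper/' in dev)
                else 'single path')
        print(f'{dev}: {kind}')

  On the first attachment this prints "/dev/dm-0: multipath"; after the
  re-attach below it would report a single path device instead.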
  Remove the volume:

  jog@pnjostkinfr01:~⟫ openstack server remove volume T1 pure2
  jog@pnjostkinfr01:~⟫ openstack server show T1
  +-------------------------------------+----------------------------------------------------------+
  | Field                               | Value                                                    |
  +-------------------------------------+----------------------------------------------------------+
  | OS-DCF:diskConfig                   | MANUAL                                                   |
  | OS-EXT-AZ:availability_zone         | nova                                                     |
  | OS-EXT-SRV-ATTR:host                | pnjostkcompps1                                           |
  | OS-EXT-SRV-ATTR:hypervisor_hostname | pnjostkcompps1.maas                                      |
  | OS-EXT-SRV-ATTR:instance_name       | instance-00000001                                        |
  | OS-EXT-STS:power_state              | Running                                                  |
  | OS-EXT-STS:task_state               | None                                                     |
  | OS-EXT-STS:vm_state                 | active                                                   |
  | OS-SRV-USG:launched_at              | 2019-02-08T22:08:49.000000                               |
  | OS-SRV-USG:terminated_at            | None                                                     |
  | accessIPv4                          |                                                          |
  | accessIPv6                          |                                                          |
  | addresses                           | test-net=192.168.0.3                                     |
  | config_drive                        |                                                          |
  | created                             | 2019-02-08T22:08:29Z                                     |
  | flavor                              | test (986ce042-27e5-4a45-8edf-3df704c7db6f)              |
  | hostId                              | 50e26a44ba01548369a53578c817e7e1d99aed184261d203353840d3 |
  | id                                  | dfe2704c-8419-41e8-97c4-53f3e8ad00a3                     |
  | image                               | 0db099d0-9d72-4d15-878c-b86b439d6a99                     |
  | key_name                            | None                                                     |
  | name                                | T1                                                       |
  | progress                            | 0                                                        |
  | project_id                          | 9be499fd1eee48dfb4dc6faf3cc0a1d7                         |
  | properties                          |                                                          |
  | security_groups                     | name='default'                                           |
  | status                              | ACTIVE                                                   |
  | updated                             | 2019-02-08T23:14:15Z                                     |
  | user_id                             | c1fa4ae9a0b446f2ba64eebf92705d53                         |
  | volumes_attached                    |                                                          |
  +-------------------------------------+----------------------------------------------------------+

  Add the volume back, then check the device name used in the libvirt
  domain xml:

  <disk type='block' device='disk'>
    <driver name='qemu' type='raw' cache='none' io='native' discard='unmap'/>
    <source dev='/dev/sdb'/>    ## NOTE single path device
    <backingStore/>
    <target dev='vdb' bus='virtio'/>
    <serial>e286161b-e8e8-47b0-abe3-4df411993265</serial>
    <alias name='virtio-disk1'/>
    <address type='pci' domain='0x0000' bus='0x00' slot='0x06' function='0x0'/>
  </disk>

  Nova log:

  2019-02-13 23:19:09.472 45238 INFO nova.compute.manager [req-cfbbc316-b456-4a03-8742-937559cd1de1 c1fa4ae9a0b446f2ba64eebf92705d53 9be499fd1eee48dfb4dc6faf3cc0a1d7 - e69140fe01214a39bcc6560b7b2e70e0 e69140fe01214a39bcc6560b7b2e70e0] [instance: dfe2704c-8419-41e8-97c4-53f3e8ad00a3] Attaching volume e286161b-e8e8-47b0-abe3-4df411993265 to /dev/vdb
  2019-02-13 23:19:10.896 45238 INFO os_brick.initiator.connectors.iscsi [req-cfbbc316-b456-4a03-8742-937559cd1de1 c1fa4ae9a0b446f2ba64eebf92705d53 9be499fd1eee48dfb4dc6faf3cc0a1d7 - e69140fe01214a39bcc6560b7b2e70e0 e69140fe01214a39bcc6560b7b2e70e0] Trying to connect to iSCSI portal 192.168.19.20:3260
  2019-02-13 23:19:10.897 45238 INFO os_brick.initiator.connectors.iscsi [req-cfbbc316-b456-4a03-8742-937559cd1de1 c1fa4ae9a0b446f2ba64eebf92705d53 9be499fd1eee48dfb4dc6faf3cc0a1d7 - e69140fe01214a39bcc6560b7b2e70e0 e69140fe01214a39bcc6560b7b2e70e0] Trying to connect to iSCSI portal 192.168.19.21:3260
  2019-02-13 23:19:10.899 45238 INFO os_brick.initiator.connectors.iscsi [req-cfbbc316-b456-4a03-8742-937559cd1de1 c1fa4ae9a0b446f2ba64eebf92705d53 9be499fd1eee48dfb4dc6faf3cc0a1d7 - e69140fe01214a39bcc6560b7b2e70e0 e69140fe01214a39bcc6560b7b2e70e0] Trying to connect to iSCSI portal 192.168.19.22:3260
  2019-02-13 23:19:10.900 45238 INFO os_brick.initiator.connectors.iscsi [req-cfbbc316-b456-4a03-8742-937559cd1de1 c1fa4ae9a0b446f2ba64eebf92705d53 9be499fd1eee48dfb4dc6faf3cc0a1d7 - e69140fe01214a39bcc6560b7b2e70e0 e69140fe01214a39bcc6560b7b2e70e0] Trying to connect to iSCSI portal 192.168.19.23:3260
  2019-02-13 23:19:11.409 45238 WARNING os_brick.initiator.connectors.iscsi [req-cfbbc316-b456-4a03-8742-937559cd1de1 c1fa4ae9a0b446f2ba64eebf92705d53 9be499fd1eee48dfb4dc6faf3cc0a1d7 - e69140fe01214a39bcc6560b7b2e70e0 e69140fe01214a39bcc6560b7b2e70e0] Couldn't find iscsi sessions because iscsiadm err: iscsiadm: No active sessions.
  2019-02-13 23:19:11.446 45238 WARNING os_brick.initiator.connectors.iscsi [req-cfbbc316-b456-4a03-8742-937559cd1de1 c1fa4ae9a0b446f2ba64eebf92705d53 9be499fd1eee48dfb4dc6faf3cc0a1d7 - e69140fe01214a39bcc6560b7b2e70e0 e69140fe01214a39bcc6560b7b2e70e0] Couldn't find iscsi sessions because iscsiadm err: iscsiadm: No active sessions.
  2019-02-13 23:19:11.488 45238 WARNING os_brick.initiator.connectors.iscsi [req-cfbbc316-b456-4a03-8742-937559cd1de1 c1fa4ae9a0b446f2ba64eebf92705d53 9be499fd1eee48dfb4dc6faf3cc0a1d7 - e69140fe01214a39bcc6560b7b2e70e0 e69140fe01214a39bcc6560b7b2e70e0] Couldn't find iscsi sessions because iscsiadm err: iscsiadm: No active sessions.
  2019-02-13 23:19:11.526 45238 WARNING os_brick.initiator.connectors.iscsi [req-cfbbc316-b456-4a03-8742-937559cd1de1 c1fa4ae9a0b446f2ba64eebf92705d53 9be499fd1eee48dfb4dc6faf3cc0a1d7 - e69140fe01214a39bcc6560b7b2e70e0 e69140fe01214a39bcc6560b7b2e70e0] Couldn't find iscsi sessions because iscsiadm err: iscsiadm: No active sessions.
  2019-02-13 23:19:16.483 45238 INFO nova.compute.resource_tracker [req-4e84cf0b-619b-44a0-8ea8-389ae9725297 - - - - -] Final resource view: name=pnjostkcompps1.maas phys_ram=64388MB used_ram=16872MB phys_disk=274GB used_disk=20GB total_vcpus=24 used_vcpus=4 pci_stats=[]
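  The four WARNING lines are os-brick probing for existing iSCSI
  sessions before logging in. A sketch of the equivalent manual check
  follows; session_check.py is a hypothetical helper, iscsiadm needs
  root, and a non-zero exit with "No active sessions" on stderr matches
  the warning in the log:

    # session_check.py -- list active iSCSI sessions via iscsiadm
    import subprocess

    result = subprocess.run(['iscsiadm', '-m', 'session'],
                            capture_output=True, text=True)
    if result.returncode != 0:
        # e.g. "iscsiadm: No active sessions."
        print('no sessions:', result.stderr.strip())
    else:
        print(result.stdout, end='')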
  Multipath device is still configured but not used by nova:

  ubuntu@pnjostkcompps1:/var/log/nova$ sudo iscsiadm -m node
  192.168.19.20:3260,-1 iqn.2010-06.com.purestorage:flasharray.401a4a5a9b723cc8
  192.168.19.23:3260,-1 iqn.2010-06.com.purestorage:flasharray.401a4a5a9b723cc8
  192.168.19.22:3260,-1 iqn.2010-06.com.purestorage:flasharray.401a4a5a9b723cc8
  192.168.19.21:3260,-1 iqn.2010-06.com.purestorage:flasharray.401a4a5a9b723cc8

  ubuntu@pnjostkcompps1:/var/log/nova$ sudo dmsetup info /dev/dm-0
  Name:              3624a9370150c5d6aef724e2d00012029
  State:             ACTIVE
  Read Ahead:        256
  Tables present:    LIVE
  Open count:        0
  Event number:      0
  Major, minor:      252, 0
  Number of targets: 1
  UUID: mpath-3624a9370150c5d6aef724e2d00012029

  ubuntu@pnjostkcompps1:/var/log/nova$ sudo dmsetup ls --tree
  3624a9370150c5d6aef724e2d00012029 (252:0)
   ├─ (8:64)
   ├─ (8:48)
   ├─ (8:32)
   └─ (8:16)

  ubuntu@pnjostkcompps1:/var/log/nova$ sudo multipath -ll
  3624a9370150c5d6aef724e2d00012029 dm-0 PURE,FlashArray
  size=10G features='0' hwhandler='1 alua' wp=rw
  `-+- policy='queue-length 0' prio=50 status=active
    |- 23:0:0:1 sdb 8:16 active ready running
    |- 24:0:0:1 sdc 8:32 active ready running
    |- 25:0:0:1 sdd 8:48 active ready running
    `- 26:0:0:1 sde 8:64 active ready running
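  Note "Open count: 0" above: the multipath map is still assembled but
  nothing holds it open. A sketch for spotting such leftover maps with
  dmsetup's column output (requires root; it lists every device-mapper
  map, not only multipath ones); find_unused_maps.py is a hypothetical
  helper:

    # find_unused_maps.py -- report dm maps with no openers
    import subprocess

    out = subprocess.check_output(
        ['dmsetup', 'info', '-c', '--noheadings',
         '--separator', ':', '-o', 'name,open'],
        text=True)
    for line in out.splitlines():
        name, open_count = line.rsplit(':', 1)
        if open_count.strip() == '0':
            print(f'map with no openers: {name}')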