Hi Eugen,

actually, I did some tests with setting a volume to 'available' without stopping the VM. I'm using Ceph, and these steps produce the following results:

1) openstack volume set --state available [UUID]
- nothing changed in either the VM (volume is still connected) or Ceph
2) openstack volume set --size [new size] --state in-use [UUID]
- nothing changed inside the VM (volume is still connected and keeps the old size)
- the size of the Ceph volume changed to the new value
3) during these operations I was copying a lot of data from an external source, and all md5 sums match on both the VM and the source
4) changes inside the VM happen only upon a power-cycle, e.g. a reboot (either soft or hard): openstack server reboot [--hard] [VM uuid]
- note: NOT after 'reboot' from inside the VM
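For reference, the sequence above can be sketched as a small script. This is a dry-run sketch, not a drop-in tool: the UUIDs and size are placeholders, and the run() wrapper only prints each command instead of executing it (remove the echo to run for real).

```shell
# Dry-run wrapper: prints each command instead of executing it.
run() { echo "+ $*"; }

VOL="11111111-2222-3333-4444-555555555555"   # placeholder volume UUID
VM="aaaaaaaa-bbbb-cccc-dddd-eeeeeeeeeeee"    # placeholder VM UUID
NEW_SIZE=16                                   # example new size in GB

# 1) force the in-use volume to 'available' (the guest keeps using it)
run openstack volume set --state available "$VOL"
# 2) resize the backing volume and flip the state back to 'in-use'
run openstack volume set --size "$NEW_SIZE" --state in-use "$VOL"
# 4) the guest only sees the new size after a Nova-driven power-cycle
run openstack server reboot --hard "$VM"
```

Note that step 3 (checksumming data copied during the resize) is a verification step, not a command in the sequence, so it is omitted here.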

I think all these manipulations with Cinder/Ceph just update internal parameters of those subsystems, with no immediate effect on the VMs. To apply the changes, you need to power-cycle the VM.

From a practical point of view, this is useful when you, for example, resize a project's volumes in batch mode and then manually reboot every VM affected by the update, with minimal downtime (it's just a reboot, not a manual stop/update/start).
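Such a batch update could look roughly like the following. All UUIDs and the size are hypothetical, and the run() wrapper again makes this a dry run that only prints the commands:

```shell
# Dry-run wrapper: prints each command instead of executing it.
run() { echo "+ $*"; }

NEW_SIZE=20                          # example target size in GB
VOLUMES="vol-uuid-1 vol-uuid-2"      # placeholder volume UUIDs
VMS="vm-uuid-1 vm-uuid-2"            # placeholder UUIDs of the affected VMs

# Batch step: grow every volume while the VMs keep running.
for vol in $VOLUMES; do
    run openstack volume set --state available "$vol"
    run openstack volume set --size "$NEW_SIZE" --state in-use "$vol"
done

# Later, one short reboot per VM applies the new size.
for vm in $VMS; do
    run openstack server reboot --hard "$vm"
done
```

The point of splitting the two loops is exactly the downtime argument above: the volume updates can happen at any time, and each VM only pays the cost of a single reboot.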

On 6/15/18 10:34 AM, Eugen Block wrote:
Hi,

did you find a solution yet?

If not: I tried to reproduce your situation with a test instance. Although the environment and the storage backend are different, I believe it still applies to your issue, at least in a general way.

I have an instance booted from a volume (size 1 GB). Resizing the instance via the Horizon dashboard appears to work (at least you would think so): it shows a new flavor with an 8 GB disk. But the volume has not been resized, so the instance won't notice any changes. To accomplish that, I had to shut down the VM, set the volume state to available (you can't detach a root disk volume), resize the volume to the size of the flavor, and then boot the VM again; now its disk has the desired size.

control:~ # openstack server stop test1
control:~ # openstack volume set --state available b832f798-e0de-4338-836a-07375f3ae3a0
control:~ # openstack volume set --size 8 b832f798-e0de-4338-836a-07375f3ae3a0
control:~ # openstack volume set --state in-use b832f798-e0de-4338-836a-07375f3ae3a0
control:~ # openstack server start test1

I should mention that I use live migration, so during the resize of an instance it migrates to another compute node.
Hope this helps!

Regards
Eugen


Quoting Manuel Sopena Ballesteros <manuel...@garvan.org.au>:

Dear openstack community,

I have a Packstack all-in-one environment and I would like to resize one of the VMs. It seems the resize process fails due to an issue with Cinder.

NOTE: the VM boots from a volume, not from an image

This is the VM I am trying to resize:

[root@openstack ~(keystone_admin)]# openstack server show 7292a929-54d9-4ce6-a595-aaf93a2be320
+--------------------------------------+------------------------------------------------------------+
| Field                                | Value                                                      |
+--------------------------------------+------------------------------------------------------------+
| OS-DCF:diskConfig                    | MANUAL                                                     |
| OS-EXT-AZ:availability_zone          | nova                                                       |
| OS-EXT-SRV-ATTR:host                 | openstack.localdomain                                      |
| OS-EXT-SRV-ATTR:hypervisor_hostname  | openstack.localdomain                                      |
| OS-EXT-SRV-ATTR:instance_name        | instance-0000005f                                          |
| OS-EXT-STS:power_state               | Shutdown                                                   |
| OS-EXT-STS:task_state                | None                                                       |
| OS-EXT-STS:vm_state                  | error                                                      |
| OS-SRV-USG:launched_at               | 2018-05-14T07:24:00.000000                                 |
| OS-SRV-USG:terminated_at             | None                                                       |
| accessIPv4                           |                                                            |
| accessIPv6                           |                                                            |
| addresses                            | privatenetwork=192.168.1.106, 129.94.14.238                |
| config_drive                         |                                                            |
| created                              | 2018-05-14T07:23:52Z                                       |
| fault                                | {u'message': u'The server has either erred or is incapable |
|                                      | of performing the requested operation. (HTTP 500)          |
|                                      | (Request-ID: req-bf6a33bd-affc-48a3-80f3-e6e1be459e7a)',   |
|                                      | u'code': 500, u'created': u'2018-05-28T07:54:40Z',         |
|                                      | u'details': <traceback, reproduced below>}                 |
| flavor                               | m1.medium (3)                                              |
| hostId                               | ecef276660cd714fe626073a18c11fe1c00bec91c15516178fb6ac28   |
| id                                   | 7292a929-54d9-4ce6-a595-aaf93a2be320                       |
| image                                |                                                            |
| key_name                             | None                                                       |
| name                                 | danrod-server                                              |
| os-extended-volumes:volumes_attached | [{u'id': u'f1ac2e94-b0ed-4089-898f-5b6467fb51e3'}]         |
| project_id                           | d58cf22d960e4de49b71658aee642e94                           |
| properties                           |                                                            |
| security_groups                      | [{u'name': u'admin'}, {u'name': u'R-Studio Server'}]       |
| status                               | ERROR                                                      |
| updated                              | 2018-05-28T07:54:40Z                                       |
| user_id                              | c412f34c353244eabecd4b6dc4d36392                           |
+--------------------------------------+------------------------------------------------------------+

The fault 'details' traceback, unescaped:

  File "/usr/lib/python2.7/site-packages/nova/compute/manager.py", line 204, in decorated_function
    return function(self, context, *args, **kwargs)
  File "/usr/lib/python2.7/site-packages/nova/compute/manager.py", line 3810, in resize_instance
    self._terminate_volume_connections(context, instance, bdms)
  File "/usr/lib/python2.7/site-packages/nova/compute/manager.py", line 3843, in _terminate_volume_connections
    connector)
  File "/usr/lib/python2.7/site-packages/nova/volume/cinder.py", line 188, in wrapper
    res = method(self, ctx, *args, **kwargs)
  File "/usr/lib/python2.7/site-packages/nova/volume/cinder.py", line 210, in wrapper
    res = method(self, ctx, volume_id, *args, **kwargs)
  File "/usr/lib/python2.7/site-packages/nova/volume/cinder.py", line 416, in terminate_connection
    connector)
  File "/usr/lib/python2.7/site-packages/cinderclient/v3/volumes.py", line 426, in terminate_connection
    {'connector': connector})
  File "/usr/lib/python2.7/site-packages/cinderclient/v3/volumes.py", line 346, in _action
    resp, body = self.api.client.post(url, body=body)
  File "/usr/lib/python2.7/site-packages/cinderclient/client.py", line 146, in post
    return self._cs_request(url, 'POST', **kwargs)
  File "/usr/lib/python2.7/site-packages/cinderclient/client.py", line 134, in _cs_request
    return self.request(url, method, **kwargs)
  File "/usr/lib/python2.7/site-packages/cinderclient/client.py", line 123, in request
    raise exceptions.from_response(resp, body)

Cinder volume logs

2018-05-28 17:54:39.809 6804 ERROR cinder.volume.targets.lio [req-bf6a33bd-affc-48a3-80f3-e6e1be459e7a c412f34c353244eabecd4b6dc4d36392 d58cf22d960e4de49b71658aee642e94 - default default] Failed to delete initiator iqn iqn.1994-05.com.redhat:401b935e7b19 from target.
2018-05-28 17:54:39.809 6804 ERROR cinder.volume.targets.lio Traceback (most recent call last):
2018-05-28 17:54:39.809 6804 ERROR cinder.volume.targets.lio   File "/usr/lib/python2.7/site-packages/cinder/volume/targets/lio.py", line 197, in terminate_connection
2018-05-28 17:54:39.809 6804 ERROR cinder.volume.targets.lio     run_as_root=True)
2018-05-28 17:54:39.809 6804 ERROR cinder.volume.targets.lio   File "/usr/lib/python2.7/site-packages/oslo_concurrency/lockutils.py", line 271, in inner
2018-05-28 17:54:39.809 6804 ERROR cinder.volume.targets.lio     return f(*args, **kwargs)
2018-05-28 17:54:39.809 6804 ERROR cinder.volume.targets.lio   File "/usr/lib/python2.7/site-packages/cinder/volume/targets/lio.py", line 52, in _execute
2018-05-28 17:54:39.809 6804 ERROR cinder.volume.targets.lio     return utils.execute(*args, **kwargs)
2018-05-28 17:54:39.809 6804 ERROR cinder.volume.targets.lio   File "/usr/lib/python2.7/site-packages/cinder/utils.py", line 123, in execute
2018-05-28 17:54:39.809 6804 ERROR cinder.volume.targets.lio     return processutils.execute(*cmd, **kwargs)
2018-05-28 17:54:39.809 6804 ERROR cinder.volume.targets.lio   File "/usr/lib/python2.7/site-packages/oslo_concurrency/processutils.py", line 389, in execute
2018-05-28 17:54:39.809 6804 ERROR cinder.volume.targets.lio     cmd=sanitized_cmd)
2018-05-28 17:54:39.809 6804 ERROR cinder.volume.targets.lio ProcessExecutionError: Unexpected error while running command.
2018-05-28 17:54:39.809 6804 ERROR cinder.volume.targets.lio Command: sudo cinder-rootwrap /etc/cinder/rootwrap.conf cinder-rtstool delete-initiator iqn.2010-10.org.openstack:volume-f1ac2e94-b0ed-4089-898f-5b6467fb51e3 iqn.1994-05.com.redhat:401b935e7b19
2018-05-28 17:54:39.809 6804 ERROR cinder.volume.targets.lio Exit code: 1
2018-05-28 17:54:39.809 6804 ERROR cinder.volume.targets.lio Stdout: u''
2018-05-28 17:54:39.809 6804 ERROR cinder.volume.targets.lio Stderr: u'Traceback (most recent call last):\n  File "/bin/cinder-rtstool", line 10, in <module>\n    sys.exit(main())\n  File "/usr/lib/python2.7/site-packages/cinder/cmd/rtstool.py", line 313, in main\n    delete_initiator(target_iqn, initiator_iqn)\n  File "/usr/lib/python2.7/site-packages/cinder/cmd/rtstool.py", line 143, in delete_initiator\n    target = _lookup_target(target_iqn, initiator_iqn)\n  File "/usr/lib/python2.7/site-packages/cinder/cmd/rtstool.py", line 123, in _lookup_target\n    raise RtstoolError(_(\'Could not find target %s\') % target_iqn)\ncinder.cmd.rtstool.RtstoolError: Could not find target iqn.2010-10.org.openstack:volume-f1ac2e94-b0ed-4089-898f-5b6467fb51e3\n'
2018-05-28 17:54:39.809 6804 ERROR cinder.volume.targets.lio
2018-05-28 17:54:39.813 6804 ERROR cinder.volume.manager [req-bf6a33bd-affc-48a3-80f3-e6e1be459e7a c412f34c353244eabecd4b6dc4d36392 d58cf22d960e4de49b71658aee642e94 - default default] Terminate volume connection failed: Failed to detach iSCSI target for volume f1ac2e94-b0ed-4089-898f-5b6467fb51e3.
2018-05-28 17:54:39.813 6804 ERROR cinder.volume.manager Traceback (most recent call last):
2018-05-28 17:54:39.813 6804 ERROR cinder.volume.manager   File "/usr/lib/python2.7/site-packages/cinder/volume/manager.py", line 1493, in terminate_connection
2018-05-28 17:54:39.813 6804 ERROR cinder.volume.manager     force=force)
2018-05-28 17:54:39.813 6804 ERROR cinder.volume.manager   File "/usr/lib/python2.7/site-packages/cinder/volume/drivers/lvm.py", line 848, in terminate_connection
2018-05-28 17:54:39.813 6804 ERROR cinder.volume.manager     **kwargs)
2018-05-28 17:54:39.813 6804 ERROR cinder.volume.manager   File "/usr/lib/python2.7/site-packages/cinder/volume/targets/lio.py", line 202, in terminate_connection
2018-05-28 17:54:39.813 6804 ERROR cinder.volume.manager     raise exception.ISCSITargetDetachFailed(volume_id=volume['id'])
2018-05-28 17:54:39.813 6804 ERROR cinder.volume.manager ISCSITargetDetachFailed: Failed to detach iSCSI target for volume f1ac2e94-b0ed-4089-898f-5b6467fb51e3.
2018-05-28 17:54:39.813 6804 ERROR cinder.volume.manager
2018-05-28 17:54:39.814 6804 ERROR oslo_messaging.rpc.server [req-bf6a33bd-affc-48a3-80f3-e6e1be459e7a c412f34c353244eabecd4b6dc4d36392 d58cf22d960e4de49b71658aee642e94 - default default] Exception during message handling
2018-05-28 17:54:39.814 6804 ERROR oslo_messaging.rpc.server Traceback (most recent call last):
2018-05-28 17:54:39.814 6804 ERROR oslo_messaging.rpc.server   File "/usr/lib/python2.7/site-packages/oslo_messaging/rpc/server.py", line 133, in _process_incoming
2018-05-28 17:54:39.814 6804 ERROR oslo_messaging.rpc.server     res = self.dispatcher.dispatch(message)
2018-05-28 17:54:39.814 6804 ERROR oslo_messaging.rpc.server   File "/usr/lib/python2.7/site-packages/oslo_messaging/rpc/dispatcher.py", line 150, in dispatch
2018-05-28 17:54:39.814 6804 ERROR oslo_messaging.rpc.server     return self._do_dispatch(endpoint, method, ctxt, args)
2018-05-28 17:54:39.814 6804 ERROR oslo_messaging.rpc.server   File "/usr/lib/python2.7/site-packages/oslo_messaging/rpc/dispatcher.py", line 121, in _do_dispatch
2018-05-28 17:54:39.814 6804 ERROR oslo_messaging.rpc.server     result = func(ctxt, **new_args)
2018-05-28 17:54:39.814 6804 ERROR oslo_messaging.rpc.server   File "/usr/lib/python2.7/site-packages/cinder/volume/manager.py", line 4404, in terminate_connection
2018-05-28 17:54:39.814 6804 ERROR oslo_messaging.rpc.server     force=force)
2018-05-28 17:54:39.814 6804 ERROR oslo_messaging.rpc.server   File "/usr/lib/python2.7/site-packages/cinder/volume/manager.py", line 1498, in terminate_connection
2018-05-28 17:54:39.814 6804 ERROR oslo_messaging.rpc.server     raise exception.VolumeBackendAPIException(data=err_msg)
2018-05-28 17:54:39.814 6804 ERROR oslo_messaging.rpc.server VolumeBackendAPIException: Bad or unexpected response from the storage volume backend API: Terminate volume connection failed: Failed to detach iSCSI target for volume f1ac2e94-b0ed-4089-898f-5b6467fb51e3.

Any thoughts?

Manuel Sopena Ballesteros | Big data Engineer
Garvan Institute of Medical Research
The Kinghorn Cancer Centre, 370 Victoria Street, Darlinghurst, NSW 2010
T: + 61 (0)2 9355 5760 | F: +61 (0)2 9295 8507 | E: manuel...@garvan.org.au<mailto:manuel...@garvan.org.au>





_______________________________________________
Mailing list: http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack
Post to     : openstack@lists.openstack.org
Unsubscribe : http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack

--
Volodymyr Litovka
  "Vision without Execution is Hallucination." -- Thomas Edison


