In that case, this sounds to me like a bug related to LVM volumes. You should 
check the nova-compute.log from both hosts and the nova-conductor.log. If it 
isn’t obvious what the problem is, you should open a bug and attach as much 
info as possible.
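Not part of the original reply, but a minimal sketch of how one might scan those logs for the failure; the log paths are assumptions for a typical package-based deployment (devstack usually writes to screen sessions or /opt/stack/logs instead):

```shell
# Scan the nova logs for errors or tracebacks around the failed migration.
# Paths below are assumptions; adjust for your deployment.
for log in /var/log/nova/nova-compute.log /var/log/nova/nova-conductor.log; do
    echo "== $log =="
    grep -iE "error|traceback" "$log" 2>/dev/null | tail -n 20
done
```

Running this on each compute host (and the controller, for the conductor log) narrows down which service rejected the migration.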

Vish

On Jan 16, 2014, at 8:04 AM, Dan Genin <daniel.ge...@jhuapl.edu> wrote:

> Thank you for replying, Vish. I did sync and verified that the file was 
> written to the host disk by mounting the LVM volume on the host.
> 
> When I tried live migration I got a Horizon blurb "Error: Failed to live 
> migrate instance to host" but there were no errors in syslog.
> 
> I have been able to successfully migrate a qcow2-backed instance.
> Dan
> 
> On 01/16/2014 03:18 AM, Vishvananda Ishaya wrote:
>> This is probably more of a usage question, but I will go ahead and answer it.
>> 
>> If you are writing to the root drive you may need to run the sync command a 
>> few times to make sure that the data has been flushed to disk before you 
>> kick off the migration.
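A sketch of the flush step described above, run inside the guest before kicking off the migration; the marker file path is hypothetical:

```shell
# Inside the instance: write a marker file and force it to disk before
# starting the migration. Repeating sync is a belt-and-braces way to make
# sure dirty pages have reached the backing volume.
echo "migration marker" > /tmp/migration-test.txt
sync
sync
```

If the marker survives on the destination host after migration, the ephemeral data was copied; if it is only visible on the source host's LVM volume, the copy step was skipped.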
>> 
>> The confirm resize step should be deleting the old data, but there may be a 
>> bug in the LVM backend if this isn’t happening. Live (block) migration will 
>> probably be a bit more intuitive.
>> 
>> Vish
>> On Jan 15, 2014, at 2:40 PM, Dan Genin <daniel.ge...@jhuapl.edu> wrote:
>> 
>>> I think this qualifies as a development question but please let me know if 
>>> I am wrong.
>>> 
>>> I have been trying to test instance migration in devstack by setting up a 
>>> multi-node devstack following directions at 
>>> http://devstack.org/guides/multinode-lab.html. I tested that indeed there 
>>> were multiple availability zones and that it was possible to create 
>>> instances in each. Then I tried migrating an instance from one compute node 
>>> to another using the Horizon interface (I could not find a way to confirm 
>>> the migration, a necessary step, from the command line). I created a 
>>> test file on the instance's ephemeral disk, before migrating it, to verify 
>>> that the data was moved to the destination compute node. After migration, I 
>>> observed an instance with the same id active on the destination node but 
>>> the test file was not present.
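For reference, a rough CLI equivalent of the Horizon workflow described above; the instance name is hypothetical, and this assumes the nova CLI of that era, where a cold migration was implemented as a same-flavor resize and confirmed with `nova resize-confirm`:

```shell
# Hypothetical instance name; requires the nova CLI and admin credentials.
# Guarded so the snippet is a no-op where nova is not installed.
if command -v nova >/dev/null 2>&1; then
    nova migrate test-vm          # cold-migrate the instance to another host
    nova resize-confirm test-vm   # confirm the migration (a same-flavor resize)
fi
```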
>>> 
>>> Perhaps I misunderstand how migration is supposed to work but I expected 
>>> that the data on the ephemeral disk would be migrated with the instance. I 
>>> suppose it could take some time for the ephemeral disk to be copied but 
>>> then I would not expect the instance to become active on the destination 
>>> node before the copy operation was complete.
>>> 
>>> I also noticed that the ephemeral disk on the source compute node was not 
>>> removed after the instance was migrated, although the instance directory 
>>> was. Nor was the disk removed after the instance was destroyed. I was using 
>>> the LVM backend for my tests.
>>> 
>>> I can provide more information about my setup but I just wanted to check 
>>> whether I was doing (or expecting) something obviously stupid.
>>> 
>>> Thank you,
>>> Dan
>>> _______________________________________________
>>> OpenStack-dev mailing list
>>> OpenStack-dev@lists.openstack.org
>>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>> 
> 
