Hi,

on live migration, I want to update the domain.xml with the required
network interface definition for the target node. For code, please see
the prototype [2]. I'm reaching out for feedback about the right way to
implement this!

Use Cases/Problem:
==================
#1 Live migration with the Neutron Macvtap agent (see bug [1])
#2 Live migration across compute nodes that run different l2 agents,
for agent transitioning.
More details on the use cases see further below. Today, neither of the
two works!

Reason: To get the interface information for the target node, Nova
needs to query Neutron. But Neutron can return this information only
when the binding:host_id attribute of the corresponding port is set to
the target host. This update happens during the live migration process,
but it happens too late - in post_live_migration. I need this kind of
information in pre_live_migration!

Proposal
========
Update the port binding in pre_live_migration instead of in
post_live_migration. Then the virt driver can just query the ports of
the instance to receive the network interface information for the
target node and update the migration XML accordingly. I posted a
working prototype here [2].
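For illustration only, a minimal sketch of that idea (not the actual
prototype [2]); `neutron` is assumed to be an authenticated
neutronclient.v2_0.client.Client, and the helper names are made up:

    # Minimal sketch; `neutron` is an authenticated
    # neutronclient.v2_0.client.Client. Helper names are illustrative
    # and not part of the actual prototype [2].

    def update_port_bindings(neutron, port_ids, host):
        """Point binding:host_id of the given ports at `host`.

        Called with the migration target in pre_live_migration (the
        proposal); calling it again with the source host would be the
        rollback for a failed migration (see the open questions below).
        """
        for port_id in port_ids:
            neutron.update_port(port_id,
                                {'port': {'binding:host_id': host}})

    def target_interface_info(neutron, instance_uuid):
        """Once the binding points at the target, a plain port query
        already returns the target-side details (binding:vif_type,
        binding:vif_details) needed for the migration XML."""
        return neutron.list_ports(device_id=instance_uuid)['ports']

With the binding moved before the migration, the virt driver can build
the target-side interface definitions from a plain port query.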
Open Questions:
* Is there a reason for doing the port binding after the migration?
Some races or so?
* Where to do the cleanup on a failed migration (set the
binding:host_id to the migration source again)?
* Status of the port during migration, while binding:host_id =
dest_host:
  * ovs-hybrid plug: status = ACTIVE
  * ovs-non-hybrid, lb, macvtap plug: status = BUILD (goes to ACTIVE
as soon as libvirt created the instance container and with it the
network device on the target). Is this a problem?

Alternatives
============
* Let Neutron implement a new API to request the port details for a
certain migration target
  * without changing the binding
  * storing it internally as additional information in
port.migration_port or so
* Allow port binding to 2 hosts in parallel (rkukura proposed a
patchset some time ago - need to contact him)

More details on use Cases/Problems:
===================================
#1 For correct live migration with the Neutron Macvtap agent, the same
physical_interface_mapping must be deployed on each node. If one node
wants to use another mapping, migration fails or the instance is
migrated into a wrong network. This happens because, for macvtap, the
interface name to place the macvtap upon is hard coded into the
domain.xml. For proper migration, I must be able to specify the
interface that is used on the target side (see the sketch after these
details)! More details see [1].

#2 Live migration across compute nodes that run different l2 agents,
e.g. one node runs the ovs agent, another one runs the lb agent. I
want to be able to live migrate between those 2 nodes. This could be
interesting as a transition strategy from one l2 agent to another,
without the need of shutting down an instance. (Assuming the ML2
Neutron plugin is being used.)
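To illustrate #1: a minimal sketch of the domain.xml rewrite, using
the standard library's ElementTree. The element and attribute names
follow libvirt's domain XML format (macvtap interfaces are
type='direct'); the function name and the target_dev parameter are
illustrative assumptions, the real change is in the prototype [2]:

    # Minimal sketch; element/attribute names follow libvirt's domain
    # XML (macvtap interfaces are type='direct'), everything else is
    # illustrative.
    import xml.etree.ElementTree as ET

    def set_macvtap_source(domain_xml, target_dev):
        """Replace the hard coded source interface of each macvtap
        interface with the device to be used on the target node."""
        root = ET.fromstring(domain_xml)
        for iface in root.findall("./devices/interface[@type='direct']"):
            source = iface.find('source')
            if source is not None:
                source.set('dev', target_dev)
        return ET.tostring(root, encoding='unicode')

With that, a node mapping physnet1 to eth0 could migrate an instance
to a node mapping physnet1 to eth1 (device names illustrative) without
the instance ending up in the wrong network.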
Any feedback is welcome! Thank you!

[1] https://bugs.launchpad.net/neutron/+bug/1550400
[2] https://review.openstack.org/#/c/297100/

--
Andreas (IRC: scheuran)