On Thu, 14 Apr 2011 12:44:39 +0200 carlopmart wrote:
> How can I configure the cluster.conf file to assign the eth1 interface
> when VM live migration is required?
This is how I understand it.

Suppose your nodes' hostnames are host1 and host2, and their intracluster
names are node1 and node2. When you migrate a VM from host1 to host2, this
is the command issued by default with qemu:

  virsh migrate --live guest_vm qemu+ssh://node2/system tcp:node2

So the ssh connection goes over the intracluster link
(qemu+ssh://node2/system), and the memory transfer also goes over the
intracluster link (tcp:node2).

You can verify this by comparing the RX/TX byte counters on the interfaces
before and after the migration. For example, during a live migration of a
fairly idle RHEL 5 guest with 2 GB of RAM, I see about 250-300 MB
transferred.

This is with a cluster.conf entry such as:

  <vm name="guest_vm" use_virsh="1"
      xmlfile="/etc/libvirt/qemu/guest-vm.xml"/>

Based on vm.sh in /usr/share/cluster, I think you can only change both
links together (the ssh link and the memory transfer link). Suppose the
interface/bond device you want to use for migration is bound to the names
hostlive1 and hostlive2. I think you have to use the migration_mapping
parameter, whose syntax is:

  memberhost:targethost,memberhost:targethost

so that your cluster.conf line becomes:

  <vm name="guest_vm" use_virsh="1"
      migration_mapping="node1:hostlive1,node2:hostlive2"
      xmlfile="/etc/libvirt/qemu/guest-vm.xml"/>

I haven't tried it yet, but it should work.

You can see the metadata for your resource agent with:

  /usr/share/cluster/vm.sh meta-data

or walk through the script itself.

HIH,
Gianluca

_______________________________________________
rhelv6-list mailing list
[email protected]
https://www.redhat.com/mailman/listinfo/rhelv6-list
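[Editor's note] The RX/TX counter comparison mentioned above can be sketched
as a small shell script reading the Linux per-interface counters under
/sys/class/net. The default interface lo is only a stand-in so the sketch
runs anywhere; on the cluster nodes you would pass the intracluster or
migration interface (e.g. eth1) instead, and trigger the migration at the
marked point.

```shell
#!/bin/sh
# Sketch: confirm which interface carries the migration traffic by
# comparing its byte counters before and after the migration.
# The interface name is an assumption; pass the real one as $1.
IFACE=${1:-lo}
STATS=/sys/class/net/$IFACE/statistics

rx0=$(cat "$STATS/rx_bytes")
tx0=$(cat "$STATS/tx_bytes")

# ... trigger the live migration here, e.g.:
# virsh migrate --live guest_vm qemu+ssh://node2/system tcp:node2

rx1=$(cat "$STATS/rx_bytes")
tx1=$(cat "$STATS/tx_bytes")

echo "rx_delta=$((rx1 - rx0)) tx_delta=$((tx1 - tx0))"
```

An interface that carried the memory transfer should show a delta in the
same ballpark as the guest's RAM size (the 250-300 MB figure above), while
the other interfaces stay near zero.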
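[Editor's note] For illustration, here is a rough sketch of how a
migration_mapping string could resolve to a target name, using the
hypothetical node/host names from the example above. This is only the idea
as described in the post; the actual resolution logic inside vm.sh may
differ in detail.

```shell
#!/bin/sh
# Resolve a migration target through a migration_mapping-style string:
# for each "member:target" pair, if the member matches the node we are
# migrating to, use the mapped name for the migration link instead.
MAPPING="node1:hostlive1,node2:hostlive2"   # hypothetical mapping
TARGET="node2"                              # node the VM migrates to

mapped=$TARGET
for pair in $(echo "$MAPPING" | tr ',' ' '); do
    member=${pair%%:*}
    host=${pair#*:}
    [ "$member" = "$TARGET" ] && mapped=$host
done

echo "$mapped"   # -> hostlive2
```

With no matching entry in the mapping, the intracluster name is used
unchanged, which matches the default behaviour described above.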
