On 10.02.2015 at 11:17, Paolo Bonzini wrote:
On 10/02/2015 11:03, Peter Lieven wrote:
My hope was that someone had already observed this post-2.2.0 and that
a fix is available ;-)
Can you indicate what info would be helpful for debugging this?
Cmdline is:
/usr/bin/qemu-2.2.0 -enable-kvm -M pc-i440fx-2.1 -nodefaults -netdev
type=tap,id=guest19,script=no,downscript=no,ifname=tap19,vnet_hdr
-device virtio-net-pci,netdev=guest19,mac=52:54:00:80:00:55 -netdev
type=tap,id=guest20,script=no,downscript=no,ifname=tap20,vnet_hdr
-device virtio-net-pci,netdev=guest20,mac=52:54:00:80:00:6f -netdev
type=tap,id=guest21,script=no,downscript=no,ifname=tap21,vnet_hdr
-device virtio-net-pci,netdev=guest21,mac=52:54:00:80:00:75 -serial
null -parallel null -m 496 -monitor tcp:0:4011,server,nowait -vnc :11
-qmp tcp:0:3011,server,nowait -name 'gw-5000123' -boot
order=nc,menu=on -drive
index=2,media=cdrom,if=ide,cache=unsafe,aio=native,readonly=on -k de
-incoming tcp:0:5011 -pidfile /var/run/qemu/vm-115.pid -mem-path
/hugepages -mem-prealloc -rtc base=utc -usb -usbdevice tablet -no-hpet
-vga vmware -cpu qemu64
First of all (but unrelated to the bug), do not use "-cpu qemu64" with KVM.
Second, what downtime and bandwidth settings are you using? What is the
actual downtime? Can you print time.tsc_timestamp and migration_tsc on
the destination?
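Since the command line above exposes QMP on tcp:0:3011, the downtime and
timing figures can also be pulled programmatically with query-migrate. A
minimal sketch (host/port are assumptions taken from that command line;
adapt as needed):

```python
# Hedged sketch: fetch migration stats over QEMU's QMP socket.
# The port (3011) comes from the "-qmp tcp:0:3011,server,nowait"
# option in the command line above and is an assumption here.
import json
import socket

def summarize_migration(reply):
    """Pick the timing fields out of a query-migrate reply dict."""
    info = reply.get("return", {})
    return {
        "status": info.get("status"),
        "total-time-ms": info.get("total-time"),
        "downtime-ms": info.get("downtime"),
        "setup-time-ms": info.get("setup-time"),
    }

def query_migrate(host="127.0.0.1", port=3011):
    """Do the QMP handshake, then run query-migrate."""
    with socket.create_connection((host, port)) as s:
        f = s.makefile("rw")
        json.loads(f.readline())                       # QMP greeting
        f.write(json.dumps({"execute": "qmp_capabilities"}) + "\n")
        f.flush()
        json.loads(f.readline())                       # capabilities ack
        f.write(json.dumps({"execute": "query-migrate"}) + "\n")
        f.flush()
        return summarize_migration(json.loads(f.readline()))

if __name__ == "__main__":
    # Offline example: a reply shaped like the 'info migrate' output below.
    sample = {"return": {"status": "completed", "total-time": 604,
                         "downtime": 187, "setup-time": 1}}
    print(summarize_migration(sample))
```

query_migrate() needs a running QEMU with the QMP socket open; the
summarize_migration() helper works on any saved reply.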
I also found this 'info migrate' output:
capabilities: xbzrle: off
rdma-pin-all: off
auto-converge: on
zero-blocks: on
Migration status: completed
total time: 604 milliseconds
downtime: 187 milliseconds
setup: 1 milliseconds
transferred ram: 331529 kbytes
throughput: 4498.48 mbps
remaining ram: 0 kbytes
total ram: 525700 kbytes
duplicate: 49083 pages
normal: 82613 pages
normal bytes: 330452 kbytes
dirty sync count: 0
Peter