On 04/08/2015 02:34 PM, Xiao Guangrong wrote:
We noticed that KVM keeps dirty tracking enabled for memslots after a failed live migration, which causes bad performance because huge page mappings are disallowed for such memslots.

The root cause is that the slot flags are not properly synced between Qemu and KVM. The current slot-update code compares against slot->flags in the hope of omitting unnecessary ioctls. However, slot->flags only reflects the status of the corresponding memory region, while vmsave and live migration do dirty tracking that sets KVM_MEM_LOG_DIRTY_PAGES on the slot. As a result, the slot status recorded in the flags does not exactly match the status in the kernel.

We fix it by introducing slot->is_dirty_logging, which records the dirty-logging status in the kernel and so lets us sync that status between userspace and kernel.

Wanpeng Li <wanpeng...@linux.intel.com>
Sorry for the typo :( , this should be: Reported-by: Wanpeng Li <wanpeng...@linux.intel.com>
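For illustration only, a minimal C sketch of the idea described above: keep the kernel-side dirty-logging state in its own field rather than inferring it from the region-derived flags, and issue the memslot update whenever the two disagree. The struct and helper names (KVMSlotSketch, kvm_slot_needs_update) are made up for this sketch and are not the actual QEMU code; only KVM_MEM_LOG_DIRTY_PAGES mirrors the real kernel UAPI flag.

    /*
     * Sketch of the approach, not the actual QEMU patch.
     * KVMSlotSketch and kvm_slot_needs_update are illustrative names;
     * KVM_MEM_LOG_DIRTY_PAGES mirrors the kernel UAPI value.
     */
    #include <stdbool.h>
    #include <stdint.h>
    #include <stdio.h>

    #define KVM_MEM_LOG_DIRTY_PAGES (1UL << 0)

    typedef struct KVMSlotSketch {
        uint32_t flags;          /* desired flags, derived from the memory region */
        bool is_dirty_logging;   /* dirty-logging state currently set in the kernel */
    } KVMSlotSketch;

    /*
     * Decide whether a KVM_SET_USER_MEMORY_REGION ioctl is needed.
     * Comparing old region flags against new region flags misses the case
     * where migration turned dirty logging on in the kernel and then failed:
     * the region flags never changed, but the kernel state did.  Comparing
     * the desired state against is_dirty_logging catches it.
     */
    static bool kvm_slot_needs_update(const KVMSlotSketch *slot)
    {
        bool want_dirty_log = !!(slot->flags & KVM_MEM_LOG_DIRTY_PAGES);

        return want_dirty_log != slot->is_dirty_logging;
    }

    int main(void)
    {
        /* After a failed migration: the kernel still logs dirty pages,
         * but the memory region no longer requests it. */
        KVMSlotSketch slot = { .flags = 0, .is_dirty_logging = true };

        printf("update needed: %d\n", kvm_slot_needs_update(&slot));  /* prints 1 */
        return 0;
    }

With flags-only comparison the slot above would be left untouched, keeping KVM_MEM_LOG_DIRTY_PAGES set in the kernel and huge page mappings disabled; checking against is_dirty_logging forces the ioctl that clears it.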