>>> On 16.01.15 at 11:46, <andrew.coop...@citrix.com> wrote:
> On 16/01/15 10:29, Li, Liang Z wrote:
>> I found that the restore process of live migration is quite long, so I
>> tried to find out what's going on. By debugging, I found the most
>> time-consuming step is restoring the VM's MTRR MSRs. This is done in
>> hvm_load_mtrr_msr(), which calls memory_type_changed(), which in turn
>> calls the expensive function flush_all().
>>
>> All of this is caused by the memory_type_changed() call added in your
>> patch; here is the link:
>> http://lists.xen.org/archives/html/xen-devel/2014-03/msg03792.html
>>
>> I am not sure whether the flush_all() call is necessary. Even if it is,
>> a single call to hvm_load_mtrr_msr() causes dozens of flush_all()
>> calls, and each flush_all() call takes about 8 milliseconds. In my
>> test environment the VM has 4 vCPUs, so hvm_load_mtrr_msr() is called
>> four times and consumes about 500 milliseconds in total. Obviously,
>> there are too many flush_all() calls.
>>
>> I think something should be done to solve this issue; don't you agree?
>
> The flush_all() can't be avoided completely, as it is permitted to use
> sethvmcontext on an already-running VM. In this case, the flush
> certainly does need to happen if altering the MTRRs has had a real
> effect on dirty cache lines.
Plus the actual functions calling memory_type_changed() in mtrr.c can
also be called while the VM is already running.

> However, having a batching mechanism across hvm_load_mtrr_msr() with a
> single flush at the end seems like a wise move.

And that shouldn't be very difficult to achieve. Furthermore, perhaps it
would be possible to check whether the VM has run at all yet; if it
hasn't, we could avoid the flush altogether in the context-load case?

Jan

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel