On Fri, Sep 02, 2022 at 01:22:28AM +0800, huang...@chinatelecom.cn wrote:
> From: Hyman Huang(黄勇) <huang...@chinatelecom.cn>
> 
> v1:
> - make parameter vcpu-dirty-limit experimental
> - switch the dirty limit off when migration is cancelled
> - add cancellation logic in the migration test
> 
> Please review, thanks,
> 
> Yong
> 
> Abstract
> ========
> 
> This series adds a new migration capability called "dirtylimit". It can
> be enabled when the dirty ring is enabled, and it improves vCPU
> performance during the process of migration. It is based on the
> previous patchset:
> https://lore.kernel.org/qemu-devel/cover.1656177590.git.huang...@chinatelecom.cn/
> 
> As mentioned in the patchset "support dirty restraint on vCPU", the
> dirtylimit way of migration does not penalize vCPUs that only read
> memory. This series wires up the vCPU dirty limit and wraps it as the
> dirtylimit capability of migration. I introduce two parameters,
> vcpu-dirtylimit-period and vcpu-dirtylimit, to implement the setup of
> dirtylimit during live migration (a usage sketch is included at the
> end of this letter).
> 
> To validate the implementation, I tested live migration of a 32-vCPU
> VM with the following model: only vcpu0 and vcpu1 are dirtied with a
> heavy memory workload while the remaining vCPUs are left untouched,
> and UnixBench runs on vcpu8-vcpu15 with the CPU affinity set up by the
> following command:
> taskset -c 8-15 ./Run -i 2 -c 8 {unixbench test item}
> 
> The following are the results:
> 
> host cpu: Intel(R) Xeon(R) Platinum 8378A
> host interface speed: 1000Mb/s
> |---------------------+--------+------------+---------------|
> | UnixBench test item | Normal | Dirtylimit | Auto-converge |
> |---------------------+--------+------------+---------------|
> | dhry2reg            | 32800  | 32786      | 25292         |
> | whetstone-double    | 10326  | 10315      | 9847          |
> | pipe                | 15442  | 15271      | 14506         |
> | context1            | 7260   | 6235       | 4514          |
> | spawn               | 3663   | 3317       | 3249          |
> | syscall             | 4669   | 4667       | 3841          |
> |---------------------+--------+------------+---------------|
> From the data above we can conclude that vCPUs which do not dirty
> memory in the VM are almost unaffected during dirtylimit migration,
> whereas the auto-converge way does affect them.
> 
> I also tested the total time of dirtylimit migration with a variable
> dirty memory size in the VM.
> 
> scenario 1:
> host cpu: Intel(R) Xeon(R) Platinum 8378A
> host interface speed: 1000Mb/s
> |-----------------------+----------------+-------------------|
> | dirty memory size(MB) | Dirtylimit(ms) | Auto-converge(ms) |
> |-----------------------+----------------+-------------------|
> | 60                    | 2014           | 2131              |
> | 70                    | 5381           | 12590             |
> | 90                    | 6037           | 33545             |
> | 110                   | 7660           | [*]               |
> |-----------------------+----------------+-------------------|
> [*]: This case means migration is not convergent.
> 
> scenario 2:
> host cpu: Intel(R) Xeon(R) CPU E5-2650
> host interface speed: 10000Mb/s
> |-----------------------+----------------+-------------------|
> | dirty memory size(MB) | Dirtylimit(ms) | Auto-converge(ms) |
> |-----------------------+----------------+-------------------|
> | 1600                  | 15842          | 27548             |
> | 2000                  | 19026          | 38447             |
> | 2400                  | 19897          | 46381             |
> | 2800                  | 22338          | 57149             |
> |-----------------------+----------------+-------------------|
> The data above shows that the dirtylimit way of migration can also
> reduce the total migration time, and it achieves convergence more
> easily in some cases.
> 
> In addition to implementing the dirtylimit capability itself, this
> series adds 3 tests for migration, aiming to let developers play with
> it easily:
> 1. qtest for dirty limit migration
> 2. support dirty ring way of migration for guestperf tool
> 3. support dirty limit migration for guestperf tool
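> 
> For reference, a minimal sketch of how the capability could be enabled
> via QMP once this series is applied. The capability and parameter
> names are the ones introduced above; the period is assumed to be in
> milliseconds and the limit in MB/s, and the destination URI is only a
> placeholder:
> 
>   { "execute": "migrate-set-capabilities",
>     "arguments": { "capabilities": [
>       { "capability": "dirtylimit", "state": true } ] } }
>   { "execute": "migrate-set-parameters",
>     "arguments": { "vcpu-dirtylimit-period": 1000 } }
>   { "execute": "migrate-set-parameters",
>     "arguments": { "vcpu-dirtylimit": 100 } }
>   { "execute": "migrate",
>     "arguments": { "uri": "tcp:dst-host:4444" } }
> 
> If migration is cancelled, the dirty limit is switched off again (see
> the v1 changelog above), so no manual cleanup of the throttle should
> be needed.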
Yong,

I should have asked even earlier - just curious whether you have started
using this in production systems? It's definitely not required for any
patchset to be merged, but it'll be very useful (and supportive)
information to have if there are proper testing beds already applying it.

Thanks,

-- 
Peter Xu